A log-based Pipeline moves data from a Source to a Destination by reading the database's logs, which consist of Events that describe the changes made to the data. Hevo ingests this data at a fixed interval.
Databases generally maintain these logs for replication or data recovery.
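For instance, on a MySQL Source the binary log that a BinLog-mode Pipeline reads must be enabled on the database server. A minimal sketch of the relevant `my.cnf` settings is shown below; the variable names are standard MySQL configuration, but confirm the exact values against your MySQL version and Hevo's MySQL Source prerequisites:

```ini
[mysqld]
# Enable the binary log; BinLog-mode Pipelines read change Events from it.
log_bin          = mysql-bin
# Row-based logging records the full row image of each change.
binlog_format    = ROW
binlog_row_image = FULL
# A unique server ID is required whenever binary logging is enabled.
server_id        = 1
```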
You must select the appropriate Pipeline mode depending on the Source type to create a log-based Pipeline:
- A MySQL Source configured using BinLog mode.
- A PostgreSQL Source configured using Logical replication mode.
- An Amazon DynamoDB Source using Streams.
- A MongoDB Source configured using either Change Stream or OpLog mode.
- An Amazon Aurora Source configured using BinLog mode.
- A SQL Server Source configured using the Change Tracking Pipeline mode, or with individual jobs that use `Change Tracking` as the query mode.
- An Oracle Source configured using Redo Logs mode.
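Whatever the Source, the core idea is the same: each log Event records a change (insert, update, or delete) to a row, and replaying the Events in order reproduces the Source's state in the Destination. The following is a simplified Python sketch of that replay step, not Hevo's implementation; the Event shape and field names (`op`, `key`, `row`) are illustrative assumptions:

```python
# Minimal sketch of log-based replication: replay a batch of change
# Events onto a Destination table keyed by primary key.

def apply_events(destination, events):
    """Apply insert/update/delete Events, in log order, to a dict that
    stands in for a Destination table ({primary_key: row})."""
    for event in events:
        op, key = event["op"], event["key"]
        if op in ("insert", "update"):
            destination[key] = event["row"]   # upsert the new row image
        elif op == "delete":
            destination.pop(key, None)        # drop the deleted row
    return destination

# One ingestion cycle: Events read from the database log since the last run.
log_batch = [
    {"op": "insert", "key": 1, "row": {"id": 1, "name": "Ada"}},
    {"op": "update", "key": 1, "row": {"id": 1, "name": "Ada L."}},
    {"op": "delete", "key": 2},
]
table = {2: {"id": 2, "name": "Bob"}}
apply_events(table, log_batch)
# table is now {1: {"id": 1, "name": "Ada L."}}
```

Because Events are applied in the order they were logged, an update that follows an insert for the same key simply overwrites the earlier row image, which is why log order matters for correctness.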