A log-based Pipeline moves data from a Source to a Destination by reading through logs, which consist of Events describing the changes made to a database. Data is ingested at a fixed interval.
These logs are generally maintained for replication or recovery of data.
Hevo supports a variety of log-based Pipelines as listed below:
- A MySQL Source configured using BinLog mode.
- A PostgreSQL Source configured using Logical replication mode.
- An Amazon DynamoDB Source.
- A MongoDB Source configured using either Change Stream or OpLog mode.
- An Amazon Aurora Source configured using BinLog mode.
- A Microsoft SQL Server Source with individual jobs configured using `Change Tracking` as the query mode.
- An Oracle Source configured using Redo Logs mode.
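The replay of change Events described above can be sketched as follows. This is an illustrative model only, not Hevo's implementation; the event shape and all names here are hypothetical.

```python
from typing import Dict, List

# A hypothetical change log: each Event describes one change made to the
# Source database - an insert, an update, or a delete - in the order the
# changes occurred.
change_log: List[dict] = [
    {"op": "insert", "id": 1, "row": {"name": "Alice", "plan": "free"}},
    {"op": "update", "id": 1, "row": {"plan": "paid"}},
    {"op": "insert", "id": 2, "row": {"name": "Bob", "plan": "free"}},
    {"op": "delete", "id": 2, "row": {}},
]

def apply_events(destination: Dict[int, dict], events: List[dict]) -> Dict[int, dict]:
    """Replay change Events in log order so the Destination mirrors the Source."""
    for event in events:
        if event["op"] == "insert":
            destination[event["id"]] = dict(event["row"])
        elif event["op"] == "update":
            destination[event["id"]].update(event["row"])
        elif event["op"] == "delete":
            destination.pop(event["id"], None)
    return destination

dest = apply_events({}, change_log)
# dest -> {1: {"name": "Alice", "plan": "paid"}}
```

Because the log records every change in order, replaying it from the last ingested position at each fixed interval keeps the Destination in sync without re-reading the Source tables.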