A Pipeline moves your data from a Source system to a Destination database or data warehouse. It has the following components:
- Source: A Source can be a database, a SaaS-based application (an API endpoint), or a file storage system that contains the data you want to analyze. Hevo integrates with a variety of Sources. Read Sources.
- Transformations: Transformations are useful when you want to clean, enrich, or transform your data before loading it into your Destination. Read Transformations.
- Schema Mapper: The Schema Mapper helps you map your Source schemas to tables in your Destination. Read Schema Mapper.
- Destination: A Destination is a data warehouse or database into which the data fetched from a Source is loaded. Read Destinations.
A Pipeline connects exactly one Source to one Destination. However, multiple Pipelines may load data into the same Destination.
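To illustrate the Transformations stage described above, here is a minimal sketch of an event-level transformation. Note that the `Event` class and the exact method of attaching properties are simplified stand-ins for illustration only; Hevo's actual Python transformation interface differs, so refer to the Transformations documentation for the real API.

```python
# Minimal sketch of cleaning and enriching an event before it is loaded
# to the Destination. The Event class below is a hypothetical stand-in,
# not Hevo's actual transformation API.

class Event:
    """Simplified stand-in for a Source event flowing through a Pipeline."""
    def __init__(self, event_name, properties):
        self.event_name = event_name    # typically maps to a Destination table
        self.properties = properties    # typically maps to Destination columns

def transform(event):
    """Clean and enrich an event's properties."""
    props = event.properties
    # Cleanup: normalize email casing and whitespace.
    if props.get("email"):
        props["email"] = props["email"].strip().lower()
    # Cleanup: drop empty fields so they are not loaded as blank columns.
    props = {k: v for k, v in props.items() if v not in (None, "")}
    # Enrichment: tag each record with its originating system.
    props["source_system"] = "crm"
    event.properties = props
    return event

e = transform(Event("users", {"email": "  Alice@Example.COM ", "phone": ""}))
```

In this sketch, the event named `users` would land in a `users` Destination table with a normalized `email`, no empty `phone` column, and an added `source_system` column.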
- Articles in this section
- Data Flow in a Pipeline
- Ingestion Modes
- Familiarizing with the Pipelines UI
- Pipeline Objects
- Working with Pipelines
- Best Practices for Creating Database Pipelines
- Creating a Pipeline
- Scheduling a Pipeline
- Modifying a Pipeline
- Prioritizing a Pipeline
- Viewing Pipeline Progress
- Troubleshooting Data Replication Errors
- Pausing and Deleting a Pipeline
- Log-based Pipelines
- Python Code-Based Transformations
- Drag and Drop Transformations
- Effect of Transformations on the Destination Table Structure
- Transformation Reference
- Schema Mapper
- Using Schema Mapper
- Mapping Statuses
- Auto Mapping Event Types
- Mapping a Source Event Type with a Destination Table
- Mapping a Source Event Type Field with a Destination Table Column
- Schema Mapper Actions
- Fixing Unmapped Fields
- Resolving Incompatible Schema Mappings
- Resizing String Columns in the Destination
- Schema Mapper Compatibility Table
- Failed Events in a Pipeline
- Pipeline FAQs