What is a Pipeline?
A Pipeline moves your data from a Source to a Destination.
A Pipeline has the following components:
- Source: A Source is a database, an API endpoint, or a file storage system that contains the data you want to analyze. Hevo integrates with a variety of Sources. Complete documentation on using the various Sources can be found here.
- Transformations: Transformations are used when you want to clean, enrich, or transform your data before loading it into your Destination. Check out Transformations to know more.
- Schema Mapper: Schema Mapper lets you map your Source schemas to tables in your Destination warehouse. Check out Schema Mapper to know more.
- Destination: A Destination is a data warehouse or data lake where the data collected from your Sources is stored. The various Destinations supported by Hevo are listed here.
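To make the Transformation component concrete, here is a minimal, hypothetical sketch of the kind of clean-and-enrich logic a Python code-based Transformation performs on a single event before it is loaded to the Destination. The `transform` function name, the plain-`dict` event shape, and the field names are illustrative assumptions, not Hevo's actual Transformation API; refer to the Python Code-Based Transformations article for the real interface.

```python
def transform(event: dict) -> dict:
    """Illustrative only: clean and enrich one Source event.

    This is NOT Hevo's Transformation API; it only shows the idea of
    normalizing fields and adding metadata before loading.
    """
    # Clean: normalize field names (trim whitespace, lowercase).
    cleaned = {key.strip().lower(): value for key, value in event.items()}

    # Clean: normalize a string value (hypothetical "email" field).
    if "email" in cleaned:
        cleaned["email"] = cleaned["email"].strip().lower()

    # Enrich: tag the event with a flag added during the pipeline run.
    cleaned["is_transformed"] = True
    return cleaned


# A raw event as it might arrive from a Source, with messy field names.
raw_event = {" Email ": "USER@EXAMPLE.COM ", "id": 42}
print(transform(raw_event))
```

Running the sketch above shows the messy `" Email "` key collapsing to `email`, its value lowercased, and the enrichment flag added, which is the general pattern of a pre-load Transformation.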
Articles in this section
- Familiarizing with the Pipelines UI
- Pipeline Concepts
- Pipeline Modes
- Types of Data Synchronization
- Pipeline Objects
- Data Ingestion
- Handling of Updates
- Data Loss Prevention
- Parsing Nested JSON Fields in Events
- Table and Column Name Compression
- Hevo-generated Metadata
- Working with Pipelines
- Python Code-Based Transformations
- Drag and Drop Transformations
- Transformation Reference
- Schema Mapper
- Using Schema Mapper
- Mapping Statuses
- Auto Mapping Event Types
- Mapping a Source Event Type with a Destination Table
- Mapping a Source Event Type Field with a Destination Table Column
- Schema Mapper Actions
- Fixing Unmapped Fields
- Resolving Incompatible Schema Mappings
- Resizing String Columns in the Destination
- Creating File Partitions for S3 Destination through Schema Mapper
- Schema Mapper Compatibility Table
Last updated on 25 Feb 2021