What is a Pipeline?
A Pipeline moves your data from a Source to a Destination.
A Pipeline has the following components:
- Source: A Source is a database, an API endpoint, or a file storage location that contains the data you want to analyze. Hevo integrates with a variety of Sources. Complete documentation on using the various Sources can be found here.
- Transformations: Transformations are used when you want to clean, enrich, or transform your data before loading it to your Destination. Check out Transformations to know more.
- Schema Mapper: The Schema Mapper lets you map your Source schemas to tables in your Destination warehouse. Check out Schema Mapper to know more.
- Destination: A Destination is a data warehouse or data lake where the data collected from Sources is stored. The Destinations supported by Hevo are listed here.
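To give a feel for the kind of cleaning and enrichment a Transformation might perform between Source and Destination, here is a minimal Python sketch. The `transform` function name and the plain-dict event shape are assumptions made for this illustration, not Hevo's actual Transformation interface:

```python
def transform(event):
    """Clean and enrich a raw Source event before it is loaded.

    `event` is assumed here to be a plain dict of field names to
    values; Hevo's real Transformation interface may differ.
    """
    cleaned = {}
    for key, value in event.items():
        # Normalize field names: trim whitespace and lowercase them
        # so the Destination schema stays consistent.
        cleaned[key.strip().lower()] = value

    # Enrich: flag records that lack an email address so they can be
    # filtered or handled separately downstream.
    cleaned["has_email"] = bool(cleaned.get("email"))
    return cleaned


# Example: a raw event with inconsistent field naming.
sample = {"Name": "Ada", "Email ": "ada@example.com"}
print(transform(sample))
# → {'name': 'Ada', 'email': 'ada@example.com', 'has_email': True}
```

In practice, logic like this runs once per event as it flows through the Pipeline, which is why Transformations are best kept small and side-effect free.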
Articles in this section
- Familiarizing with the Pipelines UI
- Pipeline Concepts
- Pipeline Modes
- Types of Data Synchronization
- Pipeline Objects
- Data Ingestion Statuses
- Handling of Updates
- Data Loss Prevention
- Introduction to __hevo_id
- Parsing Nested JSON Fields in Events
- Table and Column Name Compression
- Working with Pipelines
- Python Code-Based Transformations
- Drag and Drop Transformations
- Transformation Reference
- Schema Mapper
- Using Schema Mapper
- Mapping Statuses
- Mapping a Source Event Type with a Destination Table
- Mapping a Source Event Type Field with a Destination Table Column
- Resizing String Columns in the Destination
- Fixing Unmapped Fields
- Resolving Incompatible Schema Mappings
- Bulk Actions in Schema Mapper
- Auto Mapping Event Types
- Creating File Partitions for S3 Destination through Schema Mapper
- Schema Mapper Compatibility Table
Last updated on 14 Dec 2020