Data Ingestion

Last updated on Sep 25, 2024

Edge Pipeline is currently available under Early Access. You can request access to evaluate and test its features.

The process of accessing and fetching the data residing in your Source is called data ingestion. Hevo starts a job to pull data from your Source and sync it with a Destination of your choice. This job, which comprises the ingestion and loading tasks, runs on a schedule defined by the sync frequency.
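The schedule described above can be pictured as a simple recurrence: each job's start time is the previous run plus the sync frequency. The sketch below illustrates that idea only; the frequency value and helper name are illustrative, not part of Hevo's API.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sync frequency; in Hevo this is configured per Pipeline.
SYNC_FREQUENCY = timedelta(hours=1)

def next_run(last_run: datetime, frequency: timedelta = SYNC_FREQUENCY) -> datetime:
    """Next scheduled job start, given the previous run and the sync frequency."""
    return last_run + frequency

last = datetime(2024, 9, 25, 10, 0, tzinfo=timezone.utc)
upcoming = next_run(last)  # one sync frequency after the previous run
```

Each completed job thus determines when the next ingestion-and-load cycle begins.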

Hevo ingests data from your Source as described in the following sections.

Types of Data

Hevo runs jobs in your Pipeline to replicate the following types of data:

  • Historical: This is the data that already exists in your Source at the time of Pipeline creation. The historical job, which is the first job run in your Pipeline, ingests this data as the historical load. Events ingested in this load are not billed.

  • Incremental: In this type, any new and modified data is ingested from the Source. Hevo runs an incremental job, after the historical job is completed, to fetch this data. Events ingested as incremental data are billed.
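The two job types above can be sketched as a cursor-based pattern: the historical job reads everything present at Pipeline creation, and each incremental job reads only rows modified after the last recorded position. This is a minimal illustration under assumed names; the in-memory table, the `updated_at` column, and both helpers are hypothetical, not Hevo's actual implementation.

```python
from datetime import datetime, timezone

# Hypothetical Source data; a real Source would be a database or SaaS app.
SOURCE_ROWS = [
    {"id": 1, "updated_at": datetime(2024, 9, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 9, 10, tzinfo=timezone.utc)},
    {"id": 3, "updated_at": datetime(2024, 9, 20, tzinfo=timezone.utc)},
]

def historical_load(rows):
    """First job: ingest all data that exists at Pipeline creation."""
    return list(rows)

def incremental_load(rows, cursor):
    """Later jobs: ingest only rows modified after the stored cursor."""
    return [r for r in rows if r["updated_at"] > cursor]

# The historical job ingests every existing row, then the cursor
# advances to the latest modification timestamp seen.
batch = historical_load(SOURCE_ROWS)
cursor = max(r["updated_at"] for r in batch)

# New data arrives in the Source; the next incremental job picks up
# only the rows modified after the cursor.
SOURCE_ROWS.append({"id": 4, "updated_at": datetime(2024, 9, 25, tzinfo=timezone.utc)})
delta = incremental_load(SOURCE_ROWS, cursor)
```

This separation is also what the billing note reflects: the historical batch is the unbilled one-time load, while the smaller incremental batches recur on every scheduled run.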

Read Job Types for more information on the jobs run in your Hevo Pipeline.


Re-ingestion of Data

Once you have created the Pipeline, you can monitor the jobs running in it from the Job History tab. Click a Job ID and, in the Objects section, view the status of the ingestion task for each object to track its progress and completion. Read Object Statuses (to be linked) for information on the statuses through which the ingestion task transitions.
