Job Types (Edge)

Last updated on Sep 25, 2024

Edge Pipeline is currently available under Early Access. You can request access to evaluate and test its features.

Jobs are created based on the type of data that Hevo ingests from your Source application or database. For example, existing data is replicated using historical jobs, while new and changed data is replicated using incremental jobs. Both job types are explained below.

Historical Jobs

The historical job performs a one-time initial load of the data that already exists in the Source when the Pipeline is created. This is the first job run for your Pipeline.

All Sources in Hevo support the loading of historical data. Once the data is ingested, the status of the job changes to Completed. If you restart an object, its historical data is re-ingested and is not billed.

If primary keys are defined in the Source, Hevo uses them to replicate data to the Destination. If they are not, and you have selected the Merge load mode, Hevo asks you to provide a set of unique columns for each table or collection during Pipeline creation. Without this information, the object moves to the Disabled state; no table is created in the Destination, and no data is replicated for disabled objects.
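As a rough illustration of why Merge mode needs unique columns, the following sketch (not Hevo's implementation; the record and column names are made up) merges records into a destination keyed on a user-supplied set of columns:

```python
# A minimal sketch of a Merge-mode load keyed on unique columns.
# Not Hevo's implementation; the record and column names are hypothetical.

def merge_records(destination, records, unique_columns):
    """Upsert each record into destination, deduplicating on unique_columns."""
    for record in records:
        key = tuple(record[col] for col in unique_columns)
        # A record whose key already exists overwrites the older row (update);
        # a new key results in an insert.
        destination[key] = record
    return destination

destination_table = {}
merge_records(
    destination_table,
    [{"order_id": 1, "status": "placed"}, {"order_id": 1, "status": "shipped"}],
    unique_columns=["order_id"],
)
# destination_table now holds one row for order_id 1, with status "shipped".
# Without unique columns there is no key to merge on, which is why the
# object is disabled instead.
```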

Regardless of the load mode, existing primary keys cannot be altered for an object.

To avoid overwriting updated data with older data, historical and incremental loads never occur in parallel.
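To see why parallel loads would be unsafe, consider this hypothetical interleaving (a sketch with made-up record values):

```python
# Hypothetical hazard if the two job types ran in parallel.
destination = {}

# The incremental job loads the latest version of a row first ...
destination[("order-1",)] = {"order_id": "order-1", "status": "shipped"}

# ... and a concurrently running historical job then loads the older
# version of the same row, silently overwriting the newer record:
destination[("order-1",)] = {"order_id": "order-1", "status": "placed"}

# Completing the historical load before any incremental job starts
# rules out this interleaving.
```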

Incremental Jobs

Incremental data is the new and changed data that is fetched continuously from the Source, for example, log entries in the case of databases.

An incremental load replicates only the new or modified data from the Source. After the historical load completes, Hevo loads all objects using incremental updates. During an incremental load, Hevo maintains an internal position that marks the last successful load, so ingestion can resume from that point.
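The position works like a checkpoint or offset. Here is a rough sketch of offset-based incremental ingestion (not Hevo's internals; fetch_changes, load_to_destination, and the updated_at field are assumed for illustration):

```python
# Sketch of offset-based incremental ingestion. Not Hevo's internals;
# fetch_changes, load_to_destination, and updated_at are hypothetical.

def run_incremental_load(fetch_changes, load_to_destination, position):
    """Replicate only records changed since position; advance it on success."""
    for record in fetch_changes(since=position):
        load_to_destination(record)
        # Advancing the position only after a successful load lets the next
        # run resume from the last record that was actually loaded.
        position = max(position, record["updated_at"])
    return position
```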

Incremental loads are efficient because they replicate only the changed data instead of re-ingesting the entire dataset for each object.