The data ingested from the Source is loaded to the Destination warehouse at each run of your Pipeline. If your Events quota is exhausted, the Events in the Pipeline are stored until you purchase additional Events, upon which they are replayed. Refer to the data replication section of your Source's documentation for its replication strategy.
By default, Hevo maintains any primary keys defined in the Source data in the Destination tables.
You can load both types of data:
Data without Primary Keys
If primary keys are not present in the Destination tables, Hevo directly appends the data to the target tables. This can result in duplicate Events in the Destination, but it incurs no resource overhead from the data loading process.
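The append-only behavior can be sketched with SQLite; the table name and schema below are illustrative assumptions, not Hevo internals:

```python
import sqlite3

# Minimal sketch: with no primary key on the target table, replaying
# the same Event simply appends a second copy.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")  # no PRIMARY KEY

event = (101, 49.99)
conn.execute("INSERT INTO orders VALUES (?, ?)", event)  # initial load
conn.execute("INSERT INTO orders VALUES (?, ?)", event)  # replayed Event -> duplicate

count = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE order_id = 101").fetchone()[0]
print(count)  # the same Event now appears twice
```

Without a key to match on, the warehouse has no cheap way to tell a replayed Event from a new one, which is why duplicates are possible in this mode.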
Data with Primary Keys
If primary keys are present in the Source data but are not enforceable in the Destination warehouse, as is the case with Google BigQuery, Amazon Redshift, and Snowflake, uniqueness of data cannot be ensured by default. Hevo works around this lack of primary key enforcement and guarantees that no duplicate data is loaded to or exists in the Destination tables by:
- Adding temporary Hevo-internal meta columns to the tables to identify eligible Events.
- Using specific queries to cleanse the data of any duplicate and stale Events.
- Adding metadata information to each Event to uniquely identify its ingestion and loading time.
Note: These steps consume your Destination system's resources: CPU for running the queries, and additional storage for the duration of data processing.
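The cleanse-then-load flow described above can be sketched as follows. This is a hedged illustration using SQLite, with a hypothetical meta column (`ingested_at`) standing in for Hevo's internal metadata; the actual warehouse SQL and column names differ:

```python
import sqlite3

# Sketch of deduplicating a staged batch before merging it into the
# target table, assuming an ingestion-timestamp meta column.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE target (id INTEGER, val TEXT, ingested_at INTEGER);
CREATE TABLE stage  (id INTEGER, val TEXT, ingested_at INTEGER);

-- Rows already loaded to the Destination.
INSERT INTO target VALUES (1, 'stale', 50), (3, 'keep', 300);

-- Newly ingested batch: id=1 appears twice (a duplicate Event).
INSERT INTO stage VALUES (1, 'old', 100), (1, 'new', 200), (2, 'x', 150);
""")

# Step 1: within the staged batch, keep only the most recently
# ingested row for each id.
conn.execute("""
DELETE FROM stage
WHERE ingested_at < (SELECT MAX(s2.ingested_at)
                     FROM stage s2 WHERE s2.id = stage.id)
""")

# Step 2: discard staged rows that are staler than what the target
# already holds.
conn.execute("""
DELETE FROM stage
WHERE EXISTS (SELECT 1 FROM target t
              WHERE t.id = stage.id AND t.ingested_at >= stage.ingested_at)
""")

# Step 3: replace the affected target rows with the cleansed stage.
conn.execute("DELETE FROM target WHERE id IN (SELECT id FROM stage)")
conn.execute("INSERT INTO target SELECT * FROM stage")

rows = conn.execute("SELECT id, val FROM target ORDER BY id").fetchall()
print(rows)  # [(1, 'new'), (2, 'x'), (3, 'keep')]
```

The cleansing queries in steps 1 and 2 are what consumes Destination CPU, and the staging table is the source of the temporary storage overhead noted above.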
Additions to the Destination Schema
Irrespective of the type of data, Hevo adds the following columns to the Destination tables as part of the data loading process:
| Column | Description |
| --- | --- |
| Ingestion timestamp | A timestamp applied to each Event during ingestion. This timestamp helps verify that the ingested Event is more recent than what already exists in the Destination. For example, by the time a failed Event is resolved and replayed, a more recent Event may already have been loaded to the Destination; by comparing the ingestion timestamps, the stale record can be discarded from the ingested data. This timestamp is also retained in the Destination table. |
| Loading timestamp | A timestamp indicating when data was inserted, updated, or deleted (delete flag updated) in the Destination table. The difference between this timestamp and the ingestion timestamp indicates the time taken to load the Event. |
| Delete flag | A column that logically represents a deleted Event. When an Event is deleted in the Source, it is not physically deleted from the Destination table during the data loading process; instead, this logical column is set to mark the Event as deleted. |
| Stripe delete flag | A column that logically represents a data record deleted in the Stripe Source. Hevo ingests the deleted Event and sets this column in the Destination table. |
| Stripe deletion timestamp | A timestamp indicating when the data was deleted in the Stripe Source. You can use its value to identify when the record was deleted in the Source. |
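The logical (soft) delete behavior described above can be sketched as follows; the table and column names (`is_deleted`, `deleted_at`) are hypothetical stand-ins, not Hevo's actual meta columns:

```python
import sqlite3

# Sketch of a logical delete: a delete Event flags the row rather
# than physically removing it from the Destination table.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE customers (
  id INTEGER PRIMARY KEY,
  name TEXT,
  is_deleted INTEGER DEFAULT 0,  -- logical delete flag
  deleted_at INTEGER             -- when the Source deleted the record
)""")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Ada')")

# A delete Event arrives from the Source: set the flag and the
# deletion timestamp instead of running a physical DELETE.
conn.execute(
    "UPDATE customers SET is_deleted = 1, deleted_at = ? WHERE id = ?",
    (1700000000, 1),
)

# Downstream queries filter out logically deleted rows.
live = conn.execute(
    "SELECT id, name FROM customers WHERE is_deleted = 0").fetchall()
print(live)  # []
```

Because the row is retained, the Destination keeps a record of when the deletion happened, which a physical `DELETE` would discard.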
Note: Hevo also adds internal columns, such as __hevo__consumption_id, to the ingested data to help with the deduplication process; these columns are removed before the final step of loading data into the Destination tables.
Refer to the following table for the list of key updates made to this page:
| Date | Release | Description of Change |
| --- | --- | --- |
| Apr-11-2022 | NA | Reorganized content. Added the metadata columns that Hevo generates to handle deletes in a Stripe Source to the Additions to the Destination Schema section. |