Factors Affecting Event Usage

Last updated on Jan 08, 2024

The following list summarizes the factors affecting Events usage. Each factor is described in detail in the section below.

  • Number of Source Objects: An increased number of Source objects leads to higher Event consumption during data loading.

  • Objects and Event Types: Optimize Events usage by loading only the relevant objects and Event Types.

  • Data Size: Destinations like Redshift and PostgreSQL may break down nested Events into sub-records, potentially inflating the number of loaded Events.

  • Query Mode: The Full Load query mode increases usage by reloading all the Events every time.

  • Pipeline Frequency: Optimize the ingestion frequency to enhance the data loading efficiency.

  • Conversion Window: The number of Events ingested during a data refresh depends on the conversion window for ad-based Sources.

  • Object Restarts and Offset Changes: Restarting an object or changing its offset incurs data ingestion and loading costs.

  • Transformations: Transformations that filter Events, add or remove fields, create child Events, or split Events may reduce or increase data usage.

Some settings and choices that affect the number of Events you consume include:

  • The number of Source objects you ingest data from. The more objects you ingest from, the higher the Events consumption while loading the data.

  • Skipped objects and Event Types. While creating the Pipeline, you can select the objects from which Hevo must ingest data, to avoid loading Events you do not need. Similarly, if the Transformations you apply create multiple Event Types from a Source object, you can load only the ones you need and skip the others via the Schema Mapper, reducing your Events usage.

  • The structure of the data. The amount of data in the selected tables and their structure both matter. Destinations such as Redshift and PostgreSQL break down nested Events and count each sub-record as a separate row. This can result in a higher number of Events being loaded to the Destination than were ingested. Refer to Parsing Nested JSON Fields in Events for more information.

  • Pipeline frequency

  • Query mode

  • Conversion Window (Ad-based Sources)

  • Loading Frequency

  • Object restarts and offset changes. When you restart ingestion for an object, any historical data is always loaded for free. For Sources configured with a moving historical sync duration, for example, the last 30 days, the historical data is re-ingested for the 30 days preceding the restart, and this too is free. In Table mode-based Pipelines, you are charged for Events dated after the Pipeline was created. In log-based Pipelines, Events reloaded from the log are chargeable; for PostgreSQL, this is the log created at the time of Pipeline creation.

  • Transformations that add or drop fields, or that filter Events.
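To make the data-size factor concrete, the sketch below shows how one nested Event can become several loaded rows when a Destination such as Redshift or PostgreSQL breaks nested records out into sub-records. The function name and Event shape are illustrative only, not Hevo's actual internals.

```python
def flatten_event(event, parent_table="orders"):
    """Split a nested Event into one parent row plus one row per
    nested sub-record; each resulting row counts as a loaded Event."""
    rows = []
    # Scalar fields stay on the parent row.
    parent = {k: v for k, v in event.items() if not isinstance(v, list)}
    rows.append((parent_table, parent))
    # Each element of a nested array becomes its own child-table row.
    for key, value in event.items():
        if isinstance(value, list):
            for sub in value:
                rows.append((f"{parent_table}_{key}", sub))
    return rows

ingested = {"id": 1, "status": "paid",
            "items": [{"sku": "A"}, {"sku": "B"}]}
loaded = flatten_event(ingested)
print(len(loaded))  # 1 ingested Event -> 3 loaded rows
```

This is why the loaded Event count at the Destination can exceed the ingested count when the Source data is deeply nested.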
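The billing rule for object restarts in a Table mode-based Pipeline can be sketched as follows: re-ingested Events dated on or before the Pipeline creation date (historical data) are free, while Events dated after it are chargeable. The function and variable names are hypothetical, for illustration only.

```python
from datetime import datetime, timedelta

def chargeable_on_restart(event_timestamps, pipeline_created_at):
    """Count re-ingested Events that would be billed after an object
    restart in a Table mode-based Pipeline: only Events dated after
    the Pipeline creation time are chargeable."""
    return sum(1 for ts in event_timestamps if ts > pipeline_created_at)

created = datetime(2024, 1, 1)
events = [created - timedelta(days=10),  # historical data: free
          created + timedelta(days=1),   # post-creation: chargeable
          created + timedelta(days=5)]   # post-creation: chargeable
print(chargeable_on_restart(events, created))  # 2
```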
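The last point, that Transformations can either reduce or increase usage, can be sketched as below: filtering drops Events, while splitting one Event into child Events multiplies them. The transform logic and Event fields are illustrative assumptions, not Hevo's Transformation API.

```python
def transform(events):
    """Illustrative Transformation: drop debug Events (fewer Events)
    and split Events with children into child Events (more Events)."""
    output = []
    for event in events:
        if event["type"] == "debug":        # filtered out: reduces usage
            continue
        if "children" in event:             # split: may increase usage
            output.extend(event["children"])
        else:
            output.append(event)
    return output

ingested = [{"type": "debug"},
            {"type": "order",
             "children": [{"type": "item"}, {"type": "item"}]},
            {"type": "user"}]
print(len(transform(ingested)))  # 3 ingested -> 3 loaded (1 dropped, 1 split into 2)
```

Depending on how many Events a Transformation filters versus splits, your net Events usage can go down or up.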


Revision History

Refer to the following table for the list of key updates made to this page:

Date         Release   Description of Change
Mar-21-2022  NA        New document.
