Hevo System Architecture

Hevo is available as a public SaaS offering hosted on Amazon Web Services (AWS). Hevo’s multi-tenant platform leverages several AWS services for its infrastructure and is designed to process billions of records, automatically scaling up or down based on workload. Hevo can also be deployed as a service in your private AWS account, so that your data remains within your Virtual Private Cloud (VPC) and AWS Region.

Hevo’s ETL/ELT Pipelines fetch data from your data Source, perform in-flight Transformations based on the settings you configure, and load it to your Destination. Read ETL and ELT. Activate, Hevo’s reverse-ETL offering, helps you send data from your data warehouse to any marketing, sales, or business application of your choice in analysis-ready form.

The various components in Hevo’s ETL/ELT and reverse-ETL solutions are:

  • Sources: Your repository of data, which could be a database, a file, or a SaaS application. Read Sources.

  • Connectors: The component that pulls or receives data from your Source.

  • Pipelines: The component where the main work gets done in the following phases:

    1. Ingestion: This is the first phase of your Pipeline, in which the Connectors read data from your Source. Read Data Ingestion.

    2. Transformation: Any modifications that you want to make to the ingested data before loading it to the Destination tables are done in this phase. Read Transformations.

    3. Schema Mapping: The Source schema is translated to the Destination schema in this phase. Any Destination tables that have not yet been created are also created now. Read Schema Mapper.

    4. Sink: The data staging and loading processes take place in this phase. For database Destinations, the transformed data is loaded in near real-time to the Destination tables. For data warehouse Destinations, the transformed data is first written to files in cloud storage and then copied to the Destination tables. Read Data Loading.

      Read Data flow in a Pipeline to understand how the data is processed in a Pipeline.

  • Destinations and Warehouses: The database or data warehouse to which the Pipelines load the data. Read Destinations. Warehouses are the data warehouses from which Hevo Activate synchronizes data with your Target application. Read Activate Warehouses. You can also use the data warehouse Destination from your Pipeline as your Activate Warehouse.

  • Models and Workflows: The SQL query-based components used to transform the data in the Destination into a form conducive for analysis by BI tools. These form the Transform component of Hevo’s Extract-Load-Transform solution. Read Transform.

  • Activate: The automation platform that helps you identify the data from your data warehouse for analysis, and synchronize it with a Target. Read Activate.

  • Targets: The applications with which you synchronize the data from your Warehouse for analysis, to empower teams such as Sales, Marketing, or Support. Read Activate Targets.
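The four Pipeline phases described above can be sketched as a simple data flow. This is a minimal illustration in Python; the function names and record shapes are hypothetical stand-ins, not Hevo’s actual API:

```python
# Hypothetical sketch of the four Pipeline phases: Ingestion,
# Transformation, Schema Mapping, and Sink. None of these names
# come from Hevo's product.

def ingest(source):
    """Ingestion: the Connector reads raw records from the Source."""
    yield from source

def transform(records):
    """Transformation: apply in-flight modifications before loading."""
    for record in records:
        record = dict(record)
        record["email"] = record["email"].lower()  # e.g. normalize a field
        yield record

def map_schema(records, destination_tables):
    """Schema Mapping: route each record to a Destination table,
    creating the table (here, just a list) if it does not exist yet."""
    for record in records:
        table = destination_tables.setdefault(record.pop("table"), [])
        yield table, record

def sink(mapped_records):
    """Sink: stage and load the data into the Destination tables."""
    for table, record in mapped_records:
        table.append(record)

# Run the phases end to end over a toy Source.
source = [
    {"table": "users", "email": "Alice@Example.com"},
    {"table": "users", "email": "BOB@example.com"},
]
tables = {}
sink(map_schema(transform(ingest(source)), tables))
print(tables["users"])  # two records with lower-cased emails
```

Because each phase is a generator feeding the next, records stream through one at a time, which mirrors the near real-time loading behavior described for database Destinations.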
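The Activate reverse-ETL flow can be sketched the same way: identify analysis-ready rows in the Warehouse, then push each one to a Target application. This is a hedged illustration that uses Python’s built-in sqlite3 as a stand-in Warehouse and a stub function in place of a Target’s API; none of these names describe Hevo’s implementation:

```python
import sqlite3

# Stand-in Warehouse: an in-memory SQLite table of analysis-ready rows.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE leads (email TEXT, score INTEGER)")
warehouse.executemany(
    "INSERT INTO leads VALUES (?, ?)",
    [("alice@example.com", 90), ("bob@example.com", 40)],
)

synced = []

def sync_to_target(row):
    """Stub for a Target application's API call (e.g. a CRM update)."""
    synced.append(row)

# Reverse-ETL style sync: select the rows worth sending, then push
# each one to the Target in analysis-ready form.
for email, score in warehouse.execute(
    "SELECT email, score FROM leads WHERE score >= 50"
):
    sync_to_target({"email": email, "score": score})

print(synced)  # [{'email': 'alice@example.com', 'score': 90}]
```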




Revision History

Refer to the following table for the list of key updates made to this page:

Date          Release   Description of Change
Mar-21-2022   NA        New document.
Last updated on 01 Apr 2022