Naming Conventions for Destination Data Entities

Last updated on Jun 09, 2025

Edge Pipeline is currently available under Early Access. You can request access to evaluate and test its features.

Hevo Edge utilizes a consistent naming convention for the data entities it creates across all supported Destinations. This approach ensures uniformity and makes it easier to identify and locate the data entities created by Edge Pipelines in your Destination.

Note: This convention is currently employed for naming schemas, datasets, and tables created in Amazon Redshift, Google BigQuery, and Snowflake Destinations. It does not apply to data entities in existing Edge Destinations.

When configuring a Pipeline in Edge, you need to specify a unique Destination prefix. Hevo combines this prefix with the name of your Source database or schema to create the name of your Destination data entity.

Refer to the following table for the data entity created based on the Destination type:

| Destination Type | Destination Data Entity |
|---|---|
| Amazon Redshift | Schema |
| Snowflake | Schema |
| Google BigQuery | Dataset |

Hevo Edge Pipelines load data into the tables created in your Destination data entity. These tables have the same names as their corresponding Source tables.


Destination Types and Naming Conventions

Hevo creates the required data entity, a schema or dataset, in your Destination. Hence, you no longer need to specify an existing schema or dataset while creating a Destination or configuring one in your Edge Pipeline.

Note: This naming convention does not apply to existing Edge Destinations.

Hevo Edge adopts a consistent naming convention across all supported Destinations. However, the name of the created Destination dataset or schema depends on the Source type. For example, in the case of an Edge Pipeline that syncs data from a MySQL Source with an Amazon Redshift Destination, the schema created in the Destination is named as follows:

${<destination_prefix>}_${<source_database_name>}

Although Hevo determines the naming pattern, it adheres to the identifier naming conventions of the respective Destination type. For example, by default, identifier names are converted to lowercase in Amazon Redshift and to uppercase in Snowflake. For the Snowflake Destination type, however, Hevo provides an option to retain the names of the Source objects as-is. If this option is enabled for the Snowflake Destination configured in your Edge Pipeline, Hevo retains the casing of the specified Destination prefix.
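The naming rule described above can be sketched as a small function. This is an illustrative sketch only, not Hevo's actual implementation; the function name and the `quote_identifiers` flag (standing in for Snowflake's "Quote table names and columns" option) are assumptions made for the example:

```python
def destination_entity_name(prefix: str, source_name: str,
                            destination: str, quote_identifiers: bool = False) -> str:
    """Derive a Destination schema/dataset name as <destination_prefix>_<source_database_name>."""
    name = f"{prefix}_{source_name}"
    if destination == "redshift":
        # Amazon Redshift folds identifier names to lowercase by default
        return name.lower()
    if destination == "snowflake":
        # With the quote option enabled, the casing is retained as-is;
        # otherwise Snowflake folds identifiers to uppercase by default
        return name if quote_identifiers else name.upper()
    # BigQuery dataset names are kept as specified
    return name

print(destination_entity_name("mysql_rs", "sakila", "redshift"))    # mysql_rs_sakila
print(destination_entity_name("pgsql_all", "public", "snowflake"))  # PGSQL_ALL_PUBLIC
```

Running the sketch against the MySQL-to-Redshift example above yields `mysql_rs_sakila`, matching the schema name shown in the Examples section.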

Refer to the Examples section to understand how the Source type affects the naming of the Destination data entity.

Examples

Destination Type: Amazon Redshift

| Source Type | Source Database(s) | Source Object(s) | Destination Prefix | Destination Schema Names | Destination Table Names |
|---|---|---|---|---|---|
| Amazon RDS MySQL | sakila | sakila.actor | mysql_rs | mysql_rs_sakila | mysql_rs_sakila.actor |


Destination Type: Snowflake

| Source Type | Source Database | Source Schema(s) | Source Object(s) | Destination Prefix | Quote table names and columns | Destination Schema Names | Destination Table Names |
|---|---|---|---|---|---|---|---|
| Amazon RDS PostgreSQL | menagerie | public | public.city | pgsql_all | Yes | pgsql_all_public | pgsql_all_public.city |
| Amazon RDS PostgreSQL | menagerie | public | public.city | pgsql_all | No | PGSQL_ALL_PUBLIC | PGSQL_ALL_PUBLIC.CITY |


Destination Type: Google BigQuery

| Source Type | Source Database | Source Schema(s) | Source Object(s) | Destination Prefix | Destination Dataset Names | Destination Table Names |
|---|---|---|---|---|---|---|
| Amazon RDS Oracle | menagerie | ROOT | ROOT.CONTACTS | orcl_bq | orcl_bq_ROOT | orcl_bq_ROOT.CONTACTS |
