Can I change the Destination table name after creating the Pipeline?
Yes, you can rename the Destination table after creating the Pipeline. However, renaming creates a new Destination table, and only incremental data is loaded into it. To load all of your data into the new table, you need to restart the historical load for the Event Type from the Pipeline Detailed View page. You can rename the table:
Using a Python code-based Transformation.
Using the Rename Events drag and drop transformation.
Via the Schema Mapper.
Via the Destination workbench.
To change the Destination table name using the Destination workbench:
- Click Workbench in the DESTINATIONS page.
- Run the following SQL command in the Workbench:
ALTER TABLE TABLE_NAME RENAME TO NEW_TABLE_NAME;
Here, TABLE_NAME is the current table name and NEW_TABLE_NAME is the new name.
- Click Refresh Schema in the Destination Overview page.
- Navigate to the Schema Mapper page and disable Auto Mapping for the specific Event Type. This helps avoid the sidelining of Events due to the change in the Destination table name.
- Select Change Destination Table from the Kebab menu next to the Destination table name, and select the new table that you specified in the SQL command from the drop-down list.
- Enable Auto Mapping for the Event Type.
The data gets loaded in the renamed Destination table.
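The rename step above can be sketched end to end. This example uses Python's built-in sqlite3 module as a stand-in for the Destination warehouse; the table and column names are illustrative only.

```python
import sqlite3

def rename_table(conn, old_name, new_name):
    # Same ALTER TABLE ... RENAME TO ... statement you would run in the
    # Destination Workbench; sqlite3 stands in for the warehouse here.
    conn.execute('ALTER TABLE "{0}" RENAME TO "{1}"'.format(old_name, new_name))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
rename_table(conn, "orders", "orders_v2")

# Confirm the rename took effect, analogous to clicking Refresh Schema.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # ['orders_v2']
```

In an actual warehouse, you would run the same statement through the Workbench and then complete the Schema Mapper steps above.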
How is new data from Google Sheets updated in the Destination?
For any update in the Google Sheets Source, Hevo fetches the entire sheet and loads it to the Destination. This is done because the Google Sheets API does not indicate which records were updated or inserted, so Hevo cannot fetch just the new or changed records. The only available information is the last_modified_at timestamp of the sheet, which acts as a timestamp-based offset at the sheet level. If any change is detected via this offset, the entire sheet is ingested again.

Further, as Google Sheets does not have the concept of a primary key, Hevo creates the metadata field __hevo_id to use as a primary key for de-duplicating the records in the Destination table. Read more about the Hevo metadata field __hevo_id.
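Since a sheet has no primary key, a stable surrogate key is what makes deduplication possible. The sketch below is a simplified recreation of that idea; the actual __hevo_id is generated internally by Hevo, and the content-hashing scheme here is hypothetical:

```python
import hashlib

def surrogate_id(row_values):
    # Hypothetical stand-in for __hevo_id: derive a stable key from the
    # row's content, since Google Sheets rows have no natural primary key.
    key = "|".join(str(v) for v in row_values)
    return hashlib.md5(key.encode("utf-8")).hexdigest()

def dedupe(rows):
    # Keep one copy of each row, keyed on the surrogate id, mirroring how
    # a primary key deduplicates re-ingested sheets in the Destination.
    latest = {}
    for row in rows:
        latest[surrogate_id(row)] = row
    return list(latest.values())

# The full sheet is re-ingested, so the first row appears twice.
sheet = [("r1", "alice", 10), ("r2", "bob", 20), ("r1", "alice", 10)]
print(len(dedupe(sheet)))  # 2
```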
I manually removed Hevo metadata columns from the Destination table. Will the loading of new Events generate new metadata?
Yes, even if you manually remove Hevo-generated metadata columns from the Destination table, Hevo re-generates all these columns while ingesting new Events and creates the associated metadata. These columns are used along with the primary keys to deduplicate the ingested Events, which is why Hevo adds them back even after they are dropped.
The metadata columns created by Hevo include __hevo__loaded_at and __hevo__marked_deleted. Of these, you can choose not to add the __hevo__loaded_at column to the Destination table. All other columns are required, and when you delete them, the metadata information they hold is lost. In the next run of the Pipeline, Hevo recreates these columns, but the absence of the earlier metadata in the Destination table can result in some duplicates.
Can I create a Destination through API?
No, Hevo currently does not support creating a Destination through its API.
How do I enable or disable the deduplication of records in my Destination tables?
Use the Append Rows on Update option within a Destination table to indicate whether the ingested Events must be appended directly as new rows or checked for duplicates. You can specify this setting for each table.
Note: This feature is available only for Amazon Redshift, Google BigQuery, and Snowflake data warehouse Destinations. For RDBMS Destinations such as Aurora MySQL, MySQL, Postgres, and SQL Server, deduplication is always done.
In the Destination Detailed View page:
- Click the icon next to the table name in the Destination Tables List.
- Update the Append rows on update option, as required:

| Option setting | Description |
| --- | --- |
| Enabled | Events are appended as new rows without deduplication |
| Disabled | Events are de-duplicated |

- Click OK, GOT IT in the confirmation dialog to apply this setting.
Note: If you disable this feature after having previously enabled it, uniqueness is ensured only for future records in case of Google BigQuery and Snowflake. Therefore, both old and new versions of the same record may exist. In case of Amazon Redshift, however, uniqueness can be achieved for the entire data upon disabling the feature.
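The two behaviors of the Append Rows on Update setting can be modeled in a few lines. This is a sketch of the loading semantics only, with the table held as a list of dicts; the function and field names are illustrative:

```python
def load_events(table, events, key, append_rows_on_update):
    # Sketch of the two loading behaviors. `table` is a list of dicts,
    # `key` the primary-key field used for deduplication.
    if append_rows_on_update:
        # Enabled: every Event is appended as a new row.
        table.extend(events)
    else:
        # Disabled: upsert on the primary key, so each key appears once.
        index = {row[key]: i for i, row in enumerate(table)}
        for ev in events:
            if ev[key] in index:
                table[index[ev[key]]] = ev
            else:
                index[ev[key]] = len(table)
                table.append(ev)
    return table

deduped = load_events([{"id": 1, "qty": 5}], [{"id": 1, "qty": 7}],
                      "id", append_rows_on_update=False)
print(deduped)  # [{'id': 1, 'qty': 7}]

appended = load_events([{"id": 1, "qty": 5}], [{"id": 1, "qty": 7}],
                       "id", append_rows_on_update=True)
print(len(appended))  # 2
```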
How do I resolve duplicate records in the Destination table?
Duplicate records may occur in the Destination table in two scenarios:
- When there is no primary key in the Destination table to carry out the deduplication of records.
- When the Append Rows on Update setting is enabled for the table.

To resolve this, either set up the primary key and re-run the ingestion for that object from the Pipeline Overview tab of the Pipeline Detailed View, or disable the Append Rows on Update setting for the table.
Note: The changes are applied only to the data loaded subsequently.
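Before re-running ingestion, you may want to confirm which primary-key values are actually duplicated. A minimal check, assuming the rows have already been fetched from the Destination into Python dicts:

```python
from collections import Counter

def find_duplicate_keys(rows, key):
    # Count occurrences of each primary-key value; anything seen more
    # than once is a duplicate left behind by append-mode loading or a
    # missing primary key.
    counts = Counter(row[key] for row in rows)
    return sorted(k for k, n in counts.items() if n > 1)

rows = [{"id": 1}, {"id": 2}, {"id": 1}]
print(find_duplicate_keys(rows, "id"))  # [1]
```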
Can I have the same Source and Destination in the Pipeline?
Yes, you can create a Pipeline with the same Source and Destination instance. However, you cannot move data within the same database via the Pipeline.
How do I filter deleted Events from the Destination?
Events deleted at the Source have the value of the boolean column __hevo__marked_deleted set to True in the Destination. You can filter these Events out by creating a Model. Use the following query in your Model:
SELECT * FROM table_name WHERE __hevo__marked_deleted = false
where table_name is the Destination table containing all Source Events. For more information, read Creating a Model.
After you run the Model, the Destination table contains only those Events that exist in the Source table.
Note: This method works only for database Sources with the Pipeline in Logical Replication mode. For non-database Sources and database Pipelines in other modes (Table and Custom SQL), you must manually delete the Events from the Destination table.
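Outside of a Model, the same filter can be applied to rows already fetched into Python. A small sketch, assuming each row carries the __hevo__marked_deleted flag:

```python
def active_events(rows):
    # Keep only Events still present in the Source: Hevo sets the
    # __hevo__marked_deleted flag to True for Source-side deletes.
    return [r for r in rows if not r.get("__hevo__marked_deleted", False)]

rows = [
    {"id": 1, "__hevo__marked_deleted": False},
    {"id": 2, "__hevo__marked_deleted": True},
]
print([r["id"] for r in active_events(rows)])  # [1]
```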
How can I change from a service account to a user account in my BigQuery Pipeline?
You cannot change from a service account to a user account or vice-versa once you have created the Pipeline with BigQuery as the Destination. This is because BigQuery Destinations configured with service and user accounts are treated as two different Destinations. Therefore, you must create a different Pipeline and authenticate Hevo through the user account.
How can I filter specific fields before loading data to the Destination?
You can filter specific fields of an Event before the Destination table is created for the corresponding Event Type by using any one of the following:
- Python code-based Transformations
- Drag and Drop Transformations
- The Schema Mapper

Note: Event Types from the Source are mapped to tables in your Destination, while fields are mapped to the table columns.

For the Transformation-based methods, the Pipeline must be created with Auto Mapping disabled.
To filter columns using Python-based Transformations:
- Create the Pipeline with Auto Mapping disabled.
- Click Transformations to access the Python-based Transformation interface.
- Write the following code in the CODE console of the Transformation:
from io.hevo.api import Event

def transform(event):
    properties = event.getProperties()

    # List of columns to be deleted; "item" and "qty" are sample names
    del_columns = ["item", "qty"]

    for field in list(properties.keys()):
        if field in del_columns:
            del properties[field]

    return event
- Click GET SAMPLE and then click TEST to test the Transformation.
- Once the Transformation is deployed, enable Auto Mapping for the Event Type from the Schema Mapper. Auto Mapping automatically maps Event Types and fields from your Source to the corresponding tables and columns in the Destination excluding the filtered columns.
Read more at Python Code-Based Transformations.
To filter columns using Drag and Drop Transformations:
- Create the Pipeline with Auto Mapping disabled.
- Click Transformations to access the Python-based Transformation interface.
- In the Python-based Transformation interface, click ENABLE DRAG-AND-DROP INTERFACE.
- Drag the Drop Fields Transformation block to the canvas.
- Specify the Event Type from which you want to drop the fields. Alternatively, skip this filter to apply the Transformation to all Event Types.
- Select the Field filters to specify the fields to be dropped.
- Once the Transformation is deployed, enable Auto Mapping for the Event Type from the Schema Mapper. Auto Mapping automatically maps Event Types and fields from your Source to the corresponding tables and columns in the Destination, excluding the filtered columns.
Read more at Drag and Drop Transformations.
To filter columns using Schema Mapper:
- Once the Pipeline is created, in the Schema Mapper, select the required Event Type.
- Click CREATE TABLE & MAP.
- Deselect the columns that you want to exclude from the Destination table.
- Specify the Destination Table Name and click CREATE TABLE & MAP.
- Once the table is created, enable Auto Mapping for the Event Type from the Schema Mapper. Auto Mapping automatically maps Event Types and fields from your Source to the corresponding tables and columns in the Destination, excluding the filtered columns.
Read more at Schema Mapper.
The Destination table is created with the remaining columns, and the data is replicated into this table as per the Pipeline schedule.
Refer to the following table for the list of key updates made to this page:
| Date | Release | Description of Change |
| --- | --- | --- |
| Jan-24-2022 | NA | Added the following FAQs:<br>- Can I change the Destination table name after creating the Pipeline?<br>- How is new data from Google Sheets updated in the Destination?<br>- I manually removed Hevo metadata columns from the Destination table. Will the loading of new Events generate new metadata? |
| Dec-10-2021 | NA | Added the FAQ, Can I create a Destination through API? |
| Sep-20-2021 | NA | Added the FAQ, How can I filter specific fields before loading data to the Destination? |
| Sep-09-2021 | NA | Added the FAQ, How can I change from a service account to a user account in my BigQuery Pipeline? |
| Aug-09-2021 | NA | Added the FAQ, How do I filter deleted Events from the Destination? |
| Mar-09-2021 | NA | Added the FAQ, How do I resolve duplicate records in the Destination table? |