Amazon DynamoDB
Amazon DynamoDB is a fully managed, multi-master, multi-region non-relational database that offers built-in in-memory caching to deliver reliable performance at any scale.
Hevo uses DynamoDB's data streams to support change data capture (CDC). Data streams are time-ordered sequences of item-level changes made to the DynamoDB tables. All data in a DynamoDB stream is subject to a 24-hour lifetime and is automatically removed after that time. We recommend setting your ingestion frequency accordingly, so that changes are ingested before they expire.
To facilitate incremental data loads to a Destination, Hevo needs to keep track of the data that it has already read from the data stream. Hevo supports two methods of replicating data and managing this ingestion information: Amazon Kinesis Data Streams and DynamoDB Streams.
Refer to the table below to know the differences between the two methods.
Kinesis Data Streams | DynamoDB Streams |
---|---|
Recommended method. | Default method, used when the DynamoDB user does not have the dynamodb:CreateTable permission. |
User permissions needed on the DynamoDB Source: Read-only and dynamodb:CreateTable. | User permissions needed on the DynamoDB Source: Read-only. |
Uses the Kinesis Client Library (KCL) to ingest the changed data from the database. | Uses the DynamoDB library. |
Guarantees real-time data ingestion. | Data might be ingested with a delay. |
The Kinesis driver maintains the context. KCL creates an additional table (with the prefix hevo_kcl) per Source table, to store the last processed state for that table. | Hevo keeps the entire context of data replication as metadata, including positions that indicate the last record ingested. |
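For context, the following is a minimal, illustrative sketch of how item-level change records can be read from a DynamoDB stream using boto3. It is not Hevo's implementation; the region and table name are placeholders, and it assumes streams are already enabled on the table.

```python
# Illustrative sketch only (not Hevo's code): read change records from a
# DynamoDB stream with boto3.
import boto3

REGION = "us-east-1"      # placeholder: your AWS region
TABLE_NAME = "customer"   # placeholder: a table with streams enabled

dynamodb = boto3.client("dynamodb", region_name=REGION)
streams = boto3.client("dynamodbstreams", region_name=REGION)

# The stream ARN is exposed on the table description once streams are enabled.
stream_arn = dynamodb.describe_table(TableName=TABLE_NAME)["Table"]["LatestStreamArn"]

# A stream is split into shards; each shard holds a time-ordered sequence of
# change records that expire after roughly 24 hours.
shards = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]["Shards"]

for shard in shards:
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",  # start from the oldest available record
    )["ShardIterator"]

    for record in streams.get_records(ShardIterator=iterator)["Records"]:
        # eventName is INSERT, MODIFY, or REMOVE; NewImage/OldImage are present
        # when the StreamViewType is NEW_AND_OLD_IMAGES.
        print(record["eventName"], record["dynamodb"].get("NewImage"))
```

Each record carries an eventName (INSERT, MODIFY, or REMOVE) and, with the New and old images view type, both the new and old images of the item, which is what makes change data capture possible.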
Prerequisites
- An active Amazon Web Services (AWS) account is available.
- Streams are enabled on the DynamoDB tables to be replicated, and the value of the StreamViewType parameter is set to New and old images (NEW_AND_OLD_IMAGES in the CLI). Without this configuration, the historical data is ingested successfully, but the incremental data ingestion fails.
- An AWS IAM policy is created with the required permissions for the DynamoDB user to ingest data from the DynamoDB database (required only if you are using Amazon Kinesis Data Streams).
  Note: Hevo does not modify any data in the Source tables. The permissions are used solely to store the last processed state for a table by the KCL.
Perform the following steps to configure your Amazon DynamoDB Source:
Enable Streams
You need to enable Streams on all DynamoDB tables you want to sync through Hevo. To do this:
1. Sign in to the AWS Management Console and select the DynamoDB service.
2. In the left navigation bar of the DynamoDB console, under Dashboard, select Tables, and then select the table for which you want to enable streams. For example, the customer table.
3. In the Exports and streams tab, scroll down to the DynamoDB stream details section and click Enable.
4. In the Enable DynamoDB stream page, select New and old images and click Enable stream.
5. Repeat Steps 2-4 for all the tables you want to synchronize.
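If you manage many tables, enabling streams one by one in the console can be tedious. As an alternative to the console steps above (not part of Hevo's documented flow), the same setting can be applied programmatically with boto3; the table name and region below are placeholders.

```python
# Illustrative sketch: enable a DynamoDB stream with the NEW_AND_OLD_IMAGES
# view type using boto3 instead of the console.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")  # placeholder region

response = dynamodb.update_table(
    TableName="customer",  # placeholder table name
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",  # required for incremental ingestion
    },
)

# The stream ARN becomes available once the stream is enabled.
print(response["TableDescription"]["LatestStreamArn"])
```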
Create an IAM Policy
Note: An IAM policy is needed for KCL (Kinesis Data Streams) only.
A policy is an AWS object that, when associated with an identity or resource, defines its permissions. When Hevo makes a request to access the data in your DynamoDB account, AWS evaluates the permissions in the attached policy to determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents.
Perform the following steps to create the IAM policy:
1. Log in to the Amazon IAM Console.
2. Click Policies in the left navigation bar.
3. Click Create policy in the right pane.
4. Click the JSON tab and paste the following policy into the editor. The JSON statements list the permissions that the policy grants to Hevo.

   ```json
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "dynamodb:DescribeStream",
                   "dynamodb:DescribeTable",
                   "dynamodb:GetItem",
                   "dynamodb:GetRecords",
                   "dynamodb:GetShardIterator",
                   "dynamodb:ListStreams",
                   "dynamodb:ListTables",
                   "dynamodb:Scan",
                   "dynamodb:CreateTable",
                   "dynamodb:PutItem",
                   "dynamodb:UpdateItem",
                   "dynamodb:DeleteItem"
               ],
               "Resource": ["*"]
           }
       ]
   }
   ```

   Note: Hevo does not modify any data in the Source tables. The permissions are used solely to store the last processed state for a table by the KCL.

5. Click Review policy.
6. In the Review policy page, provide a Name for the policy. For example, Hevo-access.
7. (Optional) Provide a Description.
8. Click Create policy. You can view the new policy in the list.
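If you prefer to script this step, the sketch below creates an equivalent policy with boto3 instead of the IAM console. The policy name Hevo-access matches the example above; this is only an illustrative alternative to the console steps, not a required part of the setup.

```python
# Illustrative sketch: create the access policy with boto3 instead of the console.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:DescribeStream",
                "dynamodb:DescribeTable",
                "dynamodb:GetItem",
                "dynamodb:GetRecords",
                "dynamodb:GetShardIterator",
                "dynamodb:ListStreams",
                "dynamodb:ListTables",
                "dynamodb:Scan",
                "dynamodb:CreateTable",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
            ],
            "Resource": ["*"],
        }
    ],
}

iam = boto3.client("iam")
policy = iam.create_policy(
    PolicyName="Hevo-access",  # example name used in the steps above
    PolicyDocument=json.dumps(policy_document),
)
print(policy["Policy"]["Arn"])  # attach this policy to the IAM user Hevo will use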
Create the AWS Access Key and the AWS Secret Key
The AWS Access Key and the AWS Secret Key allow Hevo to establish authentication and replicate your Amazon DynamoDB data into your desired Destination system. You need to specify these while configuring Amazon DynamoDB as a Source in Hevo.
To retrieve these:
1. Log in to the Amazon IAM Console.
2. In the left navigation pane, under Groups, click Users, and then click the User name for which you want to create an access key.
3. In the Summary page, click the Security credentials tab, and then click Create access key.
4. In the Create access key dialog box:
   - Locate the AWS Access Key under Access key ID.
   - Click Show under Secret access key to view the AWS Secret Key.
     Note: Once you exit this dialog box, you cannot access the same AWS Access Key and AWS Secret Key again. However, you can create a new key and secret by repeating these steps.
   - Optionally, click Download .csv file to download and save the AWS Access Key and the AWS Secret Key to your local machine.
Retrieve the AWS Region
To configure Amazon DynamoDB as a Source in Hevo, you need to provide the AWS region in which your DynamoDB instance is running.
To find your AWS region:
1. Log in to the Amazon DynamoDB Console.
2. Locate the AWS region at the top right of the console.
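Before entering these values in Hevo, you may want to confirm that the access key, secret key, and region can reach your tables and that streams are enabled on them. The following is an illustrative check using boto3; the credential values and region are placeholders.

```python
# Illustrative sketch: verify the credentials and region you will enter in Hevo,
# and check which tables have DynamoDB Streams enabled.
import boto3

dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id="YOUR_AWS_ACCESS_KEY",      # placeholder
    aws_secret_access_key="YOUR_AWS_SECRET_KEY",  # placeholder
    region_name="us-east-1",                      # placeholder region
)

for table_name in dynamodb.list_tables()["TableNames"]:
    table = dynamodb.describe_table(TableName=table_name)["Table"]
    spec = table.get("StreamSpecification", {})
    # Tables without a StreamSpecification will not be ingested incrementally.
    print(table_name, spec.get("StreamViewType", "streams disabled"))
```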
Configure Amazon DynamoDB Connection Settings
Perform the following steps to configure DynamoDB as a Source in Hevo:
1. Click PIPELINES in the Asset Palette.
2. Click + CREATE in the Pipelines List View.
3. In the Select Source Type page, select DynamoDB.
4. In the Configure your DynamoDB Source page, specify the following:
   - Pipeline Name: A unique name for the Pipeline, not exceeding 255 characters.
   - Advanced Settings:
     - Load Historical Data: If enabled, the entire table data is fetched during the first run of the Pipeline. If disabled, Hevo loads only the data that is written to your database after the Pipeline is created.
     - Include New Tables in the Pipeline: Applicable for all Pipeline modes except Custom SQL. If enabled, Hevo automatically ingests data from tables created in the Source after the Pipeline has been built. These may include completely new tables or previously deleted tables that have been re-created in the Source. You can change this setting later.
5. Click TEST & CONTINUE to set up the job settings.
6. Review the list of tables available for replication. Note that Hevo can ingest data only from the tables for which DynamoDB Streams is enabled. Deselect the tables that you do not want to replicate, and then click Continue to configure the Destination.
7. Select the Destination where you want to replicate the DynamoDB tables, or click ADD DESTINATION to create a new Destination. Read Destinations for more information.
Hevo defers the data ingestion for a pre-determined time in the following scenarios:
- If your DynamoDB Source does not contain any new Events to be ingested.
- If you are using DynamoDB Streams and the required permissions are not granted to Hevo.
Hevo re-attempts to fetch the data only after the deferment period elapses.
Additional Information
Read the detailed Hevo documentation for the following related topics:
Schema and Type Mapping
Hevo replicates the schema of the tables from the Source DynamoDB database as-is to your Destination database or data warehouse. In rare cases, Hevo skips columns with an unsupported Source data type while transforming and mapping the data.
The following table shows how DynamoDB data types are mapped to warehouse data types.
DynamoDB Data Type | Warehouse Data Type |
---|---|
String | VARCHAR |
Binary | Bytes |
Number | Decimal/Long |
String Set | JSON |
Number Set | JSON |
Binary Set | JSON |
Map | JSON |
List | JSON |
Boolean | Boolean |
NULL | - |
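To see where the JSON mappings come from, it helps to look at DynamoDB's native attribute format. The sketch below uses boto3's TypeDeserializer to convert a sample item's type descriptors (S, N, SS, M, L, BOOL) into native Python values; complex types such as Map, List, and the set types are the ones that land as JSON in the Destination. This only illustrates DynamoDB's type system, not Hevo's mapping code, and the sample item is made up.

```python
# Illustration of DynamoDB's native type descriptors and how they deserialize.
from boto3.dynamodb.types import TypeDeserializer

item = {
    "customer_id": {"S": "C-1001"},                                   # String
    "order_total": {"N": "249.99"},                                   # Number
    "tags": {"SS": ["priority", "new"]},                              # String Set
    "address": {"M": {"city": {"S": "Austin"}, "zip": {"S": "78701"}}},  # Map
    "recent_orders": {"L": [{"N": "12"}, {"N": "18"}]},               # List
    "is_active": {"BOOL": True},                                      # Boolean
}

deserializer = TypeDeserializer()
parsed = {key: deserializer.deserialize(value) for key, value in item.items()}

# Numbers become Decimal, sets become Python sets, Map/List become dict/list;
# the complex values are what end up stored as JSON in the warehouse.
print(parsed)
```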
Limitations
- On every Pipeline run, Hevo ingests all the data present in the DynamoDB data streams that you are using for ingestion. Data streams retain data for a maximum of 24 hours, so depending on your Pipeline frequency, some Events may be re-ingested and consume your Events quota. For example, with a 6-hour ingestion frequency, a change that remains in the stream for its full 24-hour lifetime can be read by up to four consecutive runs. Read Pipeline Frequency to know how this affects your Events quota consumption.
See Also
Revision History
Refer to the following table for the list of key updates made to this page:
Date | Release | Description of Change |
---|---|---|
Nov-08-2022 | NA | Added section, Limitations. |
Oct-17-2022 | 1.99 | Updated section, Configure Amazon DynamoDB Connection Settings to add information about deferment of data ingestion if required permissions are not granted. |
Oct-04-2021 | 1.73 | - Updated the section, Prerequisites to inform users about setting the value of the StreamViewType parameter to NEW_AND_OLD_IMAGES. - Updated the section, Enable Streams to reflect the latest changes in the DynamoDB console. |
Aug-8-2021 | NA | Added a note in the Source Considerations section about Hevo deferring data ingestion in Pipelines created with this Source. |
Jul-12-2021 | 1.67 | Added the field Include New Tables in the Pipeline under Source configuration settings. |
Feb-22-2021 | 1.57 | Added sections: - Create the AWS Access Key and the AWS Secret Key - Retrieve the AWS Region |