Amazon DocumentDB
Amazon DocumentDB is a fast, secure, scalable, and fully managed database service that is compatible with MongoDB. It allows you to store and query JSON data, as well as set up, operate, and scale MongoDB-compatible databases in the cloud. Amazon DocumentDB also supports the same application code, drivers, and tools as MongoDB.
Hevo uses DocumentDB Change Streams to ingest data from your Amazon DocumentDB database and replicate it into the Destination of your choice.
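For context on what Hevo reads from the stream, here is a minimal mongo shell sketch of consuming a change stream yourself; the collection name orders is only illustrative:

// Open a change stream on a collection and print each Event as it arrives.
// Each Event carries fields such as operationType, documentKey, and fullDocument.
const cursor = db.orders.watch();
while (cursor.hasNext()) {
    printjson(cursor.next());
}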
Prerequisites
- An active Amazon Web Services (AWS) account is available.
- The Amazon DocumentDB version is 4.0 or higher.
- A security group for the DocumentDB cluster is created in your AWS EC2 console.
  Note: While creating the security group for the DocumentDB cluster, you must specify the security group that you used for whitelisting Hevo's IP address in the Source column of the Inbound rules section.
- An Amazon DocumentDB cluster is created.
  Note: While creating the DocumentDB cluster, you must select the security group that you created in the VPC security groups drop-down.
- You are connected to your Amazon EC2 instance.
- The mongo shell is installed for your operating system and is connected to your DocumentDB cluster.
- Streams are enabled on the DocumentDB cluster, and the log retention duration is updated.
- A user is created with the required privileges in your Amazon DocumentDB database.
- You are assigned the Team Administrator, Team Collaborator, or Pipeline Administrator role in Hevo to create the Pipeline.
Perform the following steps to configure your Amazon DocumentDB Source:
Whitelist Hevo’s IP Address
You must whitelist Hevo's IP address in your existing Amazon EC2 instance so that Hevo can connect to your DocumentDB cluster. Read Creating an Amazon EC2 instance if you have not created one already. Hevo needs this EC2 instance to create an SSH tunnel to connect to your DocumentDB cluster and replicate data from it.
Perform the following steps to whitelist Hevo’s IP address in your existing EC2 instance:
- Log in to your Amazon EC2 console.
- In the left navigation pane, under Network & Security, click Security Groups.
- Click the security group linked to your EC2 instance.
- In the Inbound rules tab, click Edit inbound rules.
- On the Edit inbound rules page, in the Source column, select Custom from the drop-down, and enter Hevo's IP address for your region.
- Click Save rules.
Enable Streams
You need to enable Streams on the DocumentDB collections and databases whose data you want to replicate to the Destination through Hevo.
To do this:
- Open your mongo shell.
- Depending on the collections and databases you want to sync, run one of the following commands:
  - To enable change streams for a specific collection in a specific database:
    db.adminCommand({modifyChangeStreams: 1, database: "<database_name>", collection: "<collection_name>", enable: true});
    Replace <database_name> and <collection_name> with a database and collection name of your choice.
  - To enable change streams for all collections in a specific database:
    db.adminCommand({modifyChangeStreams: 1, database: "<database_name>", collection: "", enable: true});
    Replace <database_name> with a database name of your choice.
  - To enable change streams for all collections in all databases:
    db.adminCommand({modifyChangeStreams: 1, database: "", collection: "", enable: true});
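To confirm which change streams are now enabled, you can list them from the mongo shell. This is a hedged sketch that assumes the $listChangeStreams aggregation stage Amazon DocumentDB provides for this purpose; run it from the admin database while connected to the cluster:

// List all change streams currently enabled on the cluster.
cursor = new DBCommandCursor(db,
    db.runCommand({ aggregate: 1,
                    pipeline: [ { $listChangeStreams: 1 } ],
                    cursor: {} }));
while (cursor.hasNext()) {
    printjson(cursor.next());
}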
Modify the Change Stream Log Retention Duration
The change stream retention duration is the period for which Events are held in the change stream logs. If an Event is not read within that period, then it is lost.
This may happen if:
- The change stream log is full, and the database has started discarding the older Event entries to write the newer ones.
- The timestamp of the Event is older than the change stream retention duration.
The change stream log retention duration directly impacts the change stream log size that you must maintain to hold the entries.
By default, Amazon DocumentDB retains the Events for three hours after recording them. You must maintain an adequate size or retention duration of the change stream log for Hevo to read the Events without losing them. Hevo recommends that you modify the retention duration to 72 hours (259200 seconds).
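To see why this matters, consider a reader that resumes from a saved position. In this hedged mongo shell sketch, savedToken is a hypothetical resume token captured from an earlier Event's _id field, and orders is an illustrative collection name:

// Resume a change stream from a previously saved resume token.
// If the Event behind savedToken has already aged out of the retention
// window, the resume fails and the intervening Events cannot be recovered.
const cursor = db.orders.watch([], { resumeAfter: savedToken });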
To extend the change stream log retention duration:
- Log in to your Amazon DocumentDB console.
- In the left navigation pane, click Parameter Groups.
- Select the cluster parameter group associated with your cluster. Read Determining an Amazon DocumentDB Cluster’s Parameter Group for more information.
- In the Cluster parameters section, search for and select change_stream_log_retention_duration, and then click Edit.
- Modify the Value to 259200 seconds.
  Note: The Value should be in seconds only.
- Click Modify cluster parameter.
Create User and Set up Permissions to Read DocumentDB Databases
Perform the following steps to create a database user, and grant READ privileges to that user:
- Open your mongo shell.
- Run the following command to create a user and grant READ permissions to that user:
  use admin
  db.createUser({ user: "<username>", pwd: "<password>", roles: [ "readAnyDatabase" ] })
  Replace <username> and <password> with a username and password of your choice.
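You can verify that the user exists and holds the expected role with a quick mongo shell check; substitute the username you chose above:

use admin
// Should list the readAnyDatabase role for the user you created.
db.getUser("<username>")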
Configure Amazon DocumentDB Connection Settings
Perform the following steps to configure Amazon DocumentDB as the Source in your Pipeline:
- Click PIPELINES in the Navigation Bar.
- Click + CREATE in the Pipelines List View.
- In the Select Source Type page, select Amazon DocumentDB.
- In the Configure your Amazon DocumentDB Source page, specify the following:
  - Pipeline Name: A unique name for your Pipeline, not exceeding 255 characters.
  - Database Host: The IP address or Domain Name System (DNS) of your primary instance in the AWS console. You can find the primary instance for your cluster under the Role column of your AWS console.
  - Database Port: The port on which your DocumentDB server listens for connections. Default value: 27017.
  - Database User: The database user that you created. This authenticated user has permission to read collections in your database.
  - Database Password: The password of the database user.
  - Authentication Database Name: The database that stores the user’s information. The user name and password entered in the preceding steps are validated against this database. Default value: admin.
  - SSH IP: The IP address or DNS of the SSH server.
  - SSH PORT: The port of the SSH server as seen from the public internet. Default port: 22.
  - SSH USER: The username on the SSH server. For example, hevo.
    Obtain the SSH IP, port, and user credentials from the AWS EC2 instance where you whitelisted Hevo’s IP address.
  - Connect through SSH: Hevo connects to your DocumentDB cluster using an SSH tunnel, instead of directly connecting your Amazon DocumentDB database host to Hevo. This provides an additional level of security to your database by not exposing your Amazon DocumentDB setup to the public. Read Connecting Through SSH.
  - Use SSL: Enable this option if you have enabled SSL at the DocumentDB server end.
  - Advanced Settings:
    - Load All Databases: If enabled, Hevo fetches data from all the databases you have access to on the specified host. If disabled, provide a comma-separated list of the database names from which you want to fetch the data.
    - Merge Collections: If enabled, collections with the same name across different databases are merged into a single Destination table. If disabled, separate tables are created and prefixed with the respective database name.
    - Load Historical Data: If enabled, the entire table data is fetched during the first run of the Pipeline. If disabled, Hevo loads only the data that was written in your database after the time of creation of the Pipeline.
    - Include New Tables in the Pipeline: If enabled, Hevo automatically ingests data from tables created after the Pipeline is built. If disabled, the new tables are listed in the Pipeline Detailed View in Skipped state, and you can manually include the ones you want and load their historical data. You can change this setting later.
- Click TEST & CONTINUE.
- Proceed to configuring the data ingestion and setting up the Destination.
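Before testing the connection, you can optionally sanity-check the credentials yourself. A minimal mongo shell check, assuming you are connected as the Database User configured above:

// Confirms that authentication succeeded and shows the effective
// privileges of the currently authenticated user.
db.runCommand({ connectionStatus: 1, showPrivileges: true })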
Source Considerations
- DocumentDB does not support reading change streams from replica instances, so Hevo cannot connect to DocumentDB replica instances.
- DocumentDB does not support null values for the _id field. The _id field in a DocumentDB collection serves as its primary key. Therefore, commands that use _id as a parameter, such as commands to fetch, sort, or update data, do not run successfully if you provide a null value in the _id field. For example, when you run the following command in DocumentDB to select and sort data according to their _id values, you get a null pointer exception while fetching any document whose _id field does not hold a value (a quick way to detect such documents is shown in the sketch after this list):
  db.collection.aggregate({ $group : { _id : {$type:"$_id"}, type: {$min:"$_id"} } })
- DocumentDB does not support BSON documents larger than 16 MB in change streams. Change stream response documents must adhere to the 16 MB limit for BSON documents. If the size of a document exceeds 16 MB, the Pipeline is paused and an error is displayed stating, Failed to read documents from the Change Stream. Documents larger than 16 MB are not supported. This applies to scenarios such as when an update operation in Change Streams is configured to return the full updated document, or when an insert or replace operation is performed on a document that is at or just below the size limit.
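As referenced in the considerations above, here is a hedged mongo shell sketch for spotting documents that would trip the null _id limitation; collection is a placeholder for your collection name:

// Count documents whose _id holds a null value; such documents cause
// commands that take _id as a parameter to fail.
db.collection.countDocuments({ _id: { $type: "null" } })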
Limitations
None.
Revision History
Refer to the following table for the list of key updates made to this page:
Date | Release | Description of Change
---|---|---
Sep-07-2022 | NA | Updated section, Configure Amazon DocumentDB Connection Settings to reflect the latest Hevo UI.
Jul-15-2022 | 1.92 | New document.