Amazon RDS PostgreSQL
You can set up, operate, and scale PostgreSQL deployments in the cloud with Amazon RDS. Amazon RDS for PostgreSQL gives you access to the capabilities of the familiar PostgreSQL database engine. This means that the code, applications, and tools you already use today with your existing databases can be used with Amazon RDS.
You can ingest data from your Amazon RDS PostgreSQL database using Hevo Pipelines and replicate it to a Destination of your choice.
Prerequisites
- The IP address or host name of your PostgreSQL server is available.
- The PostgreSQL version is 9.4.15 or higher.
- If the Pipeline mode is Logical Replication:
  - The PostgreSQL database instance is a master instance.
  - SELECT, USAGE, and CONNECT privileges are granted to the database user.
- You are assigned the Team Administrator, Team Collaborator, or Pipeline Administrator role in Hevo to create the Pipeline.
Perform the following steps to configure your Amazon RDS PostgreSQL Source:
Set up Log-based Incremental Replication
Hevo supports data ingestion from PostgreSQL servers via Write Ahead Logs set at the logical level (available on PostgreSQL version 9.4 and above). A Write Ahead Log (WAL) is a collection of log files that record information about data modifications and data object modifications made on a PostgreSQL server instance. Typically, the WAL is used for data replication and data recovery.
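Before setting anything up, you can check the current replication configuration of your instance from a SQL session. A minimal sanity check, assuming you can run queries as the RDS master user:

```sql
-- The WAL level must be 'logical' for log-based replication.
SHOW wal_level;

-- List existing replication slots to see what already consumes WAL.
SELECT slot_name, plugin, slot_type, active
FROM pg_replication_slots;
```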
To set up log-based replication, follow these steps:
1. Create a parameter group

- Log in to the Amazon RDS console.
- Click Parameter groups in the left navigation pane.
- Click Create parameter group to create a new parameter group.
- In the Create parameter group page, perform the following steps:
  - Select one of the PostgreSQL versions from the Parameter group family drop-down.
  - Select DB Parameter Group as the Type.
  - Specify the Group name and Description, and then click Create.

You have successfully created a parameter group.
2. Configure the parameters
- Click the parameter group you just created in Create a parameter group.
- In the parameter group page, click Edit parameters in the top right.
- Search for and update the following parameters:

Parameter | Recommended Value | Description |
---|---|---|
max_replication_slots | 5 | The maximum number of replication slots that the server can support. Default value: 10. RDS recommends setting this value to at least 5 so that internal replication by RDS is not affected. |
max_wal_senders | 5 | The number of processes that can simultaneously transmit the WAL log. Default value: 10. RDS recommends setting this value to at least 5 so that internal replication by RDS is not affected. |
rds.logical_replication | 1 | The setting to enable or disable logical replication. The value 1 is required to enable WAL at the logical level. |
wal_sender_timeout | 0 | The time, in seconds, after which PostgreSQL terminates replication connections due to inactivity. Default value: 60 seconds. You must set the value to 0 so that the connections are never terminated and your Pipeline does not fail. |
You can use the following query to check the value configured for a parameter, for example:

```sql
SHOW wal_sender_timeout;
```

- Click Save changes.
3. Apply the parameter group to your PostgreSQL database
- In your Amazon RDS console, click Databases in the left navigation pane.
- In the Databases page, click the DB identifier for your database, and then click Modify.
- Select the DB parameter group you just created in Create a parameter group.
- Set the backup retention period to at least 3 days. This setting defines the number of days for which automated backups are retained.
- View the Summary of modifications and click Modify cluster.
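Static parameters take effect only after the instance reboots. Once the new parameter group is applied, you can verify the values from a SQL session; a minimal check, assuming the parameters above were configured:

```sql
-- Confirm the replication-related settings took effect.
SHOW max_replication_slots;
SHOW max_wal_senders;
SHOW wal_sender_timeout;        -- must be 0
SHOW rds.logical_replication;   -- should return 'on'
```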
Whitelist Hevo’s IP Addresses
You need to whitelist the Hevo IP addresses for your region to enable Hevo to connect to your PostgreSQL database. To do this:
1. Add inbound rules
- Open the Amazon RDS console.
- In the left navigation pane, click Databases (or Instances if you are using an older version).
- In the Databases section on the right, click the DB identifier of the Amazon RDS PostgreSQL instance to configure a security group.
- In the Connectivity & security tab, click the security group ID under Security, VPC security groups.
- In the Actions drop-down on the top right, select Edit inbound rules. Alternatively, in the Inbound rules tab, click Edit inbound rules.
- In the Edit inbound rules page:
  - Click Add rule.
  - Add a new rule with Hevo’s IP addresses for your region to give access to the PostgreSQL instance.
  - Click Save rules.

The rules are now visible under Security groups.
2. Configure Virtual Private Cloud (VPC)

- In the Connectivity & security tab, click the VPC link under VPC.
- In the page that is displayed, click the VPC ID.
- In the Inbound Rules tab, ensure that the Allow/Deny field is set to ALLOW. If it is not, click Edit inbound rules to change it.
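Once the inbound rules are in place, you can confirm that remote connections reach the intended instance by connecting from an allowed host and inspecting the session; a minimal check:

```sql
-- Run after connecting from a whitelisted host to confirm you reached
-- the intended server, database, and user.
SELECT inet_server_addr() AS server_ip, inet_server_port() AS server_port;
SELECT current_database(), current_user;
```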
Grant Privileges to the User
Grant privileges to the database user connecting to the PostgreSQL database as follows:
- Log in to your Amazon RDS PostgreSQL database as a user with the GRANT privilege.
- Enter the following commands to grant access to the database user:

```sql
GRANT CONNECT ON DATABASE <database_name> TO <database_username>;
GRANT USAGE ON SCHEMA <schema_name> TO <database_username>;
GRANT SELECT ON ALL TABLES IN SCHEMA <schema_name> TO <database_username>;
```

- Alter the schema’s default privileges to grant SELECT privileges on tables to the database user:

```sql
ALTER DEFAULT PRIVILEGES IN SCHEMA <schema_name> GRANT SELECT ON TABLES TO <database_username>;
```
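You can confirm the grants from a SQL session using PostgreSQL’s built-in privilege functions; a minimal sketch, assuming <database_username>, <database_name>, and <schema_name> are the values used above and <table_name> is any table in that schema:

```sql
-- Each query returns true if the corresponding privilege is in place.
SELECT has_database_privilege('<database_username>', '<database_name>', 'CONNECT');
SELECT has_schema_privilege('<database_username>', '<schema_name>', 'USAGE');
SELECT has_table_privilege('<database_username>', '<schema_name>.<table_name>', 'SELECT');
```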
Specify Amazon RDS PostgreSQL Connection Settings
- In the Configure your Amazon RDS PostgreSQL Source page, specify the following:
  - Pipeline Name: A unique name for your Pipeline, not exceeding 255 characters.
  - Database Host: The Amazon RDS PostgreSQL host’s IP address or DNS. For example, postgresql-rds-1.xxxxx.rds.amazonaws.com.
    Note: For a URL-based hostname, exclude the http:// or https:// part. For example, if the hostname URL is https://postgresql-rds-1.xxxxx.rds.amazonaws.com, enter postgresql-rds-1.xxxxx.rds.amazonaws.com.
  - Database Port: The port on which your PostgreSQL server listens for connections. Default value: 5432.
  - Database User: The read-only user who has permissions to read tables in your database.
  - Database Password: The password for the read-only user.
  - Select an Ingestion Mode: The desired mode by which you want to ingest data from the Source. Default value: Logical Replication. This section is expanded by default; you can collapse it by clicking SEE LESS.
    The available ingestion modes are Logical Replication, Table, and Custom SQL. Logical Replication is the recommended ingestion mode and is selected by default.
    - For the Table ingestion mode, refer to the section Object Settings for steps to configure the objects to be replicated.
    - For the Logical Replication ingestion mode, follow the steps provided in each PostgreSQL variant document to set up logical replication.
    Note:
    - PostgreSQL does not support logical replication on read replicas.
    - For the Custom SQL ingestion mode, all Events loaded to the Destination are billable.
  - Database Name: The database that you wish to replicate.
  - Connection Settings:
    - Connect through SSH: Enable this option to connect to Hevo using an SSH tunnel, instead of directly connecting your PostgreSQL database host to Hevo. This provides an additional level of security to your database by not exposing your PostgreSQL setup to the public. Read Connecting Through SSH.
      If this option is disabled, you must whitelist Hevo’s IP addresses. Refer to the content for your PostgreSQL variant for steps to do this.
    - Use SSL: Enable this option to use an SSL-encrypted connection. You should also enable it if you are using a Heroku PostgreSQL database. You can verify the encrypted session with the sketch after these steps. To enable this option, specify the following:
      - CA File: The file containing the SSL server certificate authority (CA).
      - Client Certificate: The client public key certificate file.
      - Client Key: The client private key file.
  - Advanced Settings:
    - Load Historical Data: Applicable for Pipelines with Logical Replication mode. If this option is enabled, the entire table data is fetched during the first run of the Pipeline. If disabled, Hevo loads only the data that was written to your database after the time of creation of the Pipeline.
    - Merge Tables: Applicable for Pipelines with Logical Replication mode. If this option is enabled, Hevo merges tables with the same name from different databases while loading the data to the warehouse, and loads the Database Name field with each record. If disabled, the database name is prefixed to each table name. Read How does the Merge Tables feature work?.
    - Include New Tables in the Pipeline: Applicable for all ingestion modes except Custom SQL. If enabled, Hevo automatically ingests data from tables created in the Source after the Pipeline has been built. These may include completely new tables or previously deleted tables that have been re-created in the Source. If disabled, new and re-created tables are not ingested automatically; they are added in SKIPPED state in the objects list on the Pipeline Overview page. You can update their status to INCLUDED at any time post-Pipeline creation to start ingesting their data.
- Click TEST CONNECTION. This button is enabled once you specify all the mandatory fields. Hevo’s underlying connectivity checker validates the connection settings you provide.
- Click TEST & CONTINUE to proceed with setting up the Destination. This button is enabled once you specify all the mandatory fields.
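If TEST CONNECTION fails on permissions rather than reachability, it can help to connect as the read-only user and inspect what it can actually see; a minimal sketch (also usable to confirm SSL when Use SSL is enabled):

```sql
-- Preview the tables the read-only user can discover.
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_schema, table_name;

-- Confirm the current session is SSL-encrypted.
SELECT ssl, version AS tls_version, cipher
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();
```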
Object Settings
Object settings must be configured if the ingestion mode is Table. To do this:

- After specifying the Source connection settings as per the section Specify Amazon RDS PostgreSQL Connection Settings above, select the objects to be replicated in the SELECT the Objects you want to replicate page, and then click CONTINUE.
  Note: Each object represents a table in your database.
- In the CONFIGURE SOURCE OBJECTS page, specify the query mode to be used for each selected object. The sketch below illustrates the general shape of an incremental query mode.
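For context, in Table mode Hevo polls each object with a query shaped by the selected query mode. The sketch below is purely illustrative of a timestamp-based incremental mode; the table and column names are hypothetical, and Hevo generates and runs the actual queries internally:

```sql
-- Hypothetical shape of a timestamp-based incremental poll.
-- 'orders' and 'updated_at' are made-up names for illustration.
SELECT *
FROM <schema_name>.orders
WHERE updated_at > '<last_polled_timestamp>'
ORDER BY updated_at;
```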
Additional Information
Read the detailed Hevo documentation for the following related topics:
Limitations
- The data type Array in the Source is automatically mapped to Varchar at the Destination. No other mapping is currently supported.
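For illustration, consider a hypothetical Source table with an array column (the table and column names below are made up for this example):

```sql
-- Hypothetical Source table; tags is a PostgreSQL array type.
CREATE TABLE orders (
  id   INT PRIMARY KEY,
  tags TEXT[]   -- e.g. '{shipped,priority}'
);
```

Because Array is mapped to Varchar, the tags values would arrive at the Destination as text (for example, a serialized form like {shipped,priority}), not as a native array type.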
See Also
Revision History
Refer to the following table for the list of key updates made to this page:
Date | Release | Description of Change |
---|---|---|
Apr-21-2023 | NA | Updated section, Specify Amazon RDS PostgreSQL Connection Settings to add a note to inform users that all loaded Events are billable for Custom SQL mode-based Pipelines. |
Mar-09-2023 | 2.09 | Updated section, Specify Amazon RDS PostgreSQL Connection Settings to mention about SEE MORE in the Select an Ingestion Mode section. |
Dec-19-2022 | 2.04 | Updated section, Specify Amazon RDS PostgreSQL Connection Settings to add information that you must specify all fields to create a Pipeline. |
Dec-07-2022 | 2.03 | Updated section, Specify Amazon RDS PostgreSQL Connection Settings to mention about including skipped objects post-Pipeline creation. |
Dec-07-2022 | 2.03 | Updated section, Specify Amazon RDS PostgreSQL Connection Settings to mention about the connectivity checker. |
Sep-07-2022 | NA | Updated section, Set up Log-based Incremental Replication to reflect the latest UI changes. |
Jul-04-2022 | NA | Added sections, Specify Amazon RDS PostgreSQL Connection Settings and Object Settings. |
Feb-07-2022 | 1.81 | Updated section, Whitelist Hevo’s IP Addresses to remove details about Outbound rules as they are not required. |
Jan-24-2022 | 1.80 | Removed from Limitations that Hevo does not support UUID datatype as primary key. |
Dec-20-2021 | 1.78 | Updated section, Configure the parameters. |
Sep-09-2021 | 1.71 | Updated the section, Limitations to include information about columns with the UUID data type not being supported as a primary key. |
Jun-14-2021 | 1.65 | Updated the Grant Privileges to the User section to include latest commands. |