PostgreSQL
PostgreSQL is a powerful, open-source object-relational database system that combines the SQL language with many features to safely store and scale complicated data workloads. PostgreSQL is compatible with all major operating systems.
Hevo can load data from any of your Pipelines into a PostgreSQL database. Follow the steps on this page to add PostgreSQL as a Destination.
If you are new to PostgreSQL, you can follow the steps listed below to install and set up a PostgreSQL instance, and then create a database to which the Hevo Pipeline must load the data. You can also create users and grant them the necessary permissions to set up and manage the PostgreSQL databases.
Prerequisites
- The IP address or hostname and port of your PostgreSQL server are available. If you do not have these details, you can obtain them from your PostgreSQL server administrator, or verify them with the check shown after this list.
- The necessary privileges are granted on the database to your database user.
- You are assigned the Team Collaborator role or any administrator role except the Billing Administrator role in Hevo, to create the Destination.
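If you want to confirm the connection details before creating the Destination, you can run a quick reachability check from any machine with the PostgreSQL client tools installed. This is an optional sketch; the host and port below are placeholder values that you must replace with your own:
pg_isready -h 203.0.113.10 -p 5432
# Expected output when the server accepts connections:
# 203.0.113.10:5432 - accepting connections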
Installing PostgreSQL (Optional)
Before you can use PostgreSQL, you need to install it. PostgreSQL may already be installed on your system, either because it was included in your operating system distribution or because the system administrator installed it.
To check whether PostgreSQL is already installed, run the following command in a Terminal window:
systemctl status postgresql
The expected response when PostgreSQL is:
- Installed: Active: active
- Not installed: Unit postgresql.service could not be found
If PostgreSQL is not already available, you can install it yourself. PostgreSQL is available as binary packages for most common operating systems, and this is the recommended way to install it. For an updated list of platforms providing binary packages, visit the Downloads section on the PostgreSQL website and follow the instructions for your specific platform.
Note: This section explains how to install PostgreSQL on a Linux machine running Ubuntu.
- Open the Terminal window on your machine.
- Enter the following commands:
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install postgresql
This installs the PostgreSQL server on your machine and creates a postgres user.
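To confirm that the installation succeeded, you can check the service status and run a test query as the postgres user. This is a minimal sanity check, assuming a systemd-based Ubuntu setup as described above:
sudo systemctl status postgresql --no-pager
sudo -u postgres psql -c "SELECT version();"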
Creating a Database Cluster and Database (Optional)
A database cluster is a single directory where all your data is stored. You can choose the location of the directory.
Perform the following steps to create (initialize) the database cluster:
- Log in to your PostgreSQL server as the postgres user created during the installation.
- Enter the following command to create the database cluster. The -D option allows you to specify where the database cluster should be stored:
$ initdb -D /usr/local/pgsql/data
Note: The initdb command attempts to create the directory you specify if it does not already exist. This fails if initdb does not have permission to write to the parent directory. It is recommended that the PostgreSQL user own not just the data directory but its parent directory as well. If the desired parent directory does not exist, you need to create it first, using root privileges. To do this, enter the following commands:
root# mkdir /usr/local/pgsql
root# chown postgres /usr/local/pgsql
root# su postgres
$ initdb -D /usr/local/pgsql/data
When a cluster is created, a database named postgres is created by default; it is used by utilities, users, and third-party applications. You can use this database instance to set up the Destination.
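For example, you can connect to this default postgres database and list all databases in the cluster to confirm that the initialization succeeded. This assumes psql is run on the database host as the postgres user:
sudo -u postgres psql -d postgres -c "\l"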
Create a Database User and Grant Privileges
1. Create a database user
Perform the following steps to create the database user in your PostgreSQL database:
- Log in to your PostgreSQL server as a superuser, using an SQL client such as psql.
- Enter the following command:
CREATE USER <user_name> WITH PASSWORD '<strong password>';
Note: Replace the placeholder values in the command given above with your own.
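As an illustration, the following command creates a user named hevo_user; both the user name and the password here are hypothetical values, so substitute your own:
CREATE USER hevo_user WITH PASSWORD 'aStr0ng&UniquePwd';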
2. Grant privileges to the user
- Log in to your PostgreSQL server as a superuser, using an SQL client such as psql.
- Run the following commands to grant the database user the privileges required to create the database objects needed to load and store your data:
GRANT CREATE, CONNECT, TEMPORARY ON DATABASE <database_name> TO <user_name>;
GRANT CREATE, USAGE ON SCHEMA <schema_name> TO <user_name>;
Note: Replace the placeholder values in the commands given above with your own.
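To verify that the grants took effect, you can query PostgreSQL's built-in privilege functions. The user, database, and schema names below are hypothetical values carried over from the earlier example:
SELECT has_database_privilege('hevo_user', 'hevo_db', 'CREATE');
SELECT has_schema_privilege('hevo_user', 'public', 'USAGE');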
Configure PostgreSQL Connection Settings
Perform the following steps to configure PostgreSQL as a Destination in Hevo:
- Click DESTINATIONS in the Navigation Bar.
- Click + CREATE DESTINATION in the Destinations List View.
- On the Add Destination page, select PostgreSQL.
- On the Configure your PostgreSQL Destination page, specify the following:
  - Destination Name: A unique name for your Destination, not exceeding 255 characters.
  - Database Host: The PostgreSQL host’s IP address or DNS name.
  - Database Port: The port on which your PostgreSQL server listens for connections. Default value: 5432.
  - Database User: A user with a non-administrative role in the PostgreSQL database.
  - Database Password: The password of the database user.
  - Database Name: The name of the Destination database to which the data is loaded.
  - Schema Name (Optional): The name of the Destination database schema. Default value: public.
  - Additional Settings:
    - Connect through SSH: Enable this option to connect to Hevo using an SSH tunnel, instead of directly connecting your PostgreSQL database host to Hevo. This provides an additional level of security to your database by not exposing your PostgreSQL setup to the public. Read Connecting Through SSH.
      If this option is disabled, you must whitelist Hevo’s IP addresses. Refer to the PostgreSQL Source documentation pages for the steps to whitelist the IP addresses of your region for your PostgreSQL variant.
    - Use SSL: Enable this option to use an SSL-encrypted connection. Specify the following:
      - CA File: The file containing the SSL server certificate authority (CA).
      - Load all CA Certificates: If selected, Hevo loads all CA certificates (up to 50) from the uploaded CA file; otherwise, it loads only the first certificate.
        Note: Select this check box if you have more than one certificate in your CA file.
      - Client Certificate: The client public key certificate file.
      - Client Key: The client private key file.
      For the steps to create the required files and keys, read PostgreSQL.
    - Sanitize Table/Column Names?: Enable this option to remove all non-alphanumeric characters and spaces from table and column names and replace them with an underscore (_). Read Name Sanitization.
- Click TEST CONNECTION. This button is enabled once all the mandatory fields are specified.
- Click SAVE & CONTINUE. This button is enabled once all the mandatory fields are specified.
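You can also verify the same connection settings from your own machine using psql before testing in Hevo. The values below are placeholders; the sslmode=require and sslrootcert parameters apply only if you enabled the Use SSL option:
psql "host=203.0.113.10 port=5432 dbname=hevo_db user=hevo_user sslmode=require sslrootcert=ca.pem" -c "SELECT 1;"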
Additional Information
Read the detailed Hevo documentation for the following related topics:
Destination Considerations
- You must disable any foreign keys defined in the target tables. Foreign keys do not allow data to be loaded until the referenced table has a corresponding key defined. A sketch for dropping such constraints follows this list.
- You can replicate data for only 1594 columns in a given PostgreSQL table. Read Limits on the Number of Columns.
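PostgreSQL does not support disabling a foreign key constraint in place, so the usual approach is to drop the constraint before loading and re-create it later if needed. The following is a sketch with hypothetical table and constraint names: list the existing foreign keys first, then drop the relevant one.
SELECT conrelid::regclass AS table_name, conname FROM pg_constraint WHERE contype = 'f';
ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;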
See Also
Revision History
Refer to the following table for the list of key updates made to this page:
Date | Release | Description of Change
---|---|---
Aug-19-2024 | NA | Updated sub-section, Grant privileges to the user to update the command.
Mar-18-2024 | 2.21.2 | Updated section, Configure PostgreSQL Connection Settings to add information about the Load all CA certificates option.
Apr-25-2023 | 2.12 | Updated section, Configure PostgreSQL Connection Settings to add information that you must specify all fields to create a Pipeline.
Mar-23-2023 | NA | Updated section, Grant privileges to the user with new command.
Jan-10-2023 | 2.05 | Added sections, Installing PostgreSQL and Creating a Database Cluster and Database to help set up a PostgreSQL Destination.
Sep-07-2022 | NA | Updated the Prerequisites section and added the Create a Database User and Grant Privileges section.
Feb-21-2022 | 1.82 | Updated section, Configure PostgreSQL Connection Settings to provide support for SSL in PostgreSQL as a Destination.
Jul-12-2021 | NA | Updated section, Destination Considerations.