Google Cloud PostgreSQL
Starting Release 2.18.2, Hevo will stop supporting XMIN as a query mode for all variants of the PostgreSQL Source. As a result, you will not be able to create new Pipelines using this query mode. This change does not affect existing Pipelines. However, you will not be able to change the query mode to XMIN for any objects currently ingesting data using other query modes.
Google Cloud PostgreSQL is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on the Google Cloud platform.
You can ingest data from your Google Cloud PostgreSQL database using Hevo Pipelines and replicate it to a Destination of your choice.
Prerequisites
- The IP address or host name of your PostgreSQL server is available.

- The PostgreSQL version is 9.4 or higher.

- The SELECT, USAGE, and CONNECT privileges are granted to the database user.

- You are assigned the Team Administrator, Team Collaborator, or Pipeline Administrator role in Hevo to create the Pipeline.

- If the Pipeline mode is Logical Replication:

  - Log-based incremental replication is enabled.

  - The PostgreSQL database instance is a master instance.

    Note: PostgreSQL does not support logical replication on read replicas.

Perform the following steps to configure your Google Cloud PostgreSQL Source:
Set up Log-based Incremental Replication (Optional)
Note: If you are not using Logical Replication, skip this step.
PostgreSQL (version 9.4 and above) supports logical replication by writing additional information to its Write Ahead Logs (WALs).
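For context, you can see logical decoding at work on a scratch database. The sketch below assumes a hypothetical table named demo_table and a connected user with replication privileges; it creates a temporary slot with PostgreSQL's built-in test_decoding plugin and peeks at the decoded WAL changes. This only illustrates the mechanism and is not how Hevo consumes the WAL.

```sql
-- Illustration only: decode WAL changes with the built-in test_decoding
-- plugin. Assumes a hypothetical table, demo_table, already exists.
SELECT * FROM pg_create_logical_replication_slot('demo_slot', 'test_decoding');

INSERT INTO demo_table (id, name) VALUES (1, 'example');

-- Each committed change is returned as a decoded row.
SELECT * FROM pg_logical_slot_peek_changes('demo_slot', NULL, NULL);

-- Drop the slot when done; an unused slot retains WAL files on disk.
SELECT pg_drop_replication_slot('demo_slot');
```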
To configure logical replication:
1. Log in to Google Cloud SQL to access your database instance.

2. Click the More (⋮) icon next to the PostgreSQL instance and click Edit.

3. Scroll down to the Flags section.

4. Click the drop-down next to Flags and click ADD A DATABASE FLAG.

5. In the New database flag dialog window, click the arrow in the Choose a flag bar and type the flag name in the Filter bar.

6. Click the flag name to select it. Select an appropriate value for the flag from the drop-down or by typing it.

7. Click DONE and proceed to the next step to add all the required flags. Skip to step 9 if you have finished adding the flags.

8. Click ADD A DATABASE FLAG, and then repeat steps 5-7 to add the following flags with the specified values:

   | Flag Name | Value | Description |
   |---|---|---|
   | cloudsql.enable_pglogical | On | The setting to enable or turn off the pglogical extension. Default value: On. |
   | cloudsql.logical_decoding | On | The setting to enable or turn off logical replication. Default value: On. |
   | max_replication_slots | 10 | The number of clients that can connect to the server. |
   | max_wal_senders | 10 | The number of processes that can simultaneously transmit the WAL log. |
   | wal_sender_timeout | 0 | The time, in seconds, after which PostgreSQL terminates replication connections due to inactivity. Default value: 60 seconds. You must set the value to 0 so that the connections are never terminated and your Pipeline does not fail. |

9. Click SAVE.

10. In the confirmation dialog, click SAVE AND RESTART.

Once the instance restarts, you can view the configured settings under the Flags section.
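If you prefer to verify the settings from a SQL client rather than the console, a quick check such as the one below should work on any PostgreSQL instance; with logical decoding enabled, wal_level is expected to report logical.

```sql
-- Confirm the replication-related settings after the instance restarts.
SELECT name, setting
FROM pg_settings
WHERE name IN ('wal_level', 'max_replication_slots',
               'max_wal_senders', 'wal_sender_timeout');
```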
Whitelist Hevo’s IP Addresses
You need to whitelist the Hevo IP addresses for your region to enable Hevo to connect to your PostgreSQL database. To do this:
1. Log in to the Google Cloud SQL Console and click on your PostgreSQL instance ID.

2. In the left navigation pane of the <PostgreSQL Instance ID> page, click Connections.

3. In the Connections pane, click the NETWORKING tab, and scroll down to the Authorized networks section.

4. Click ADD A NETWORK, and in New Network, specify the following:

   - Name: A name to describe the region for Hevo's IP address.

   - Network: The IP address(es) of your Hevo region in CIDR notation.

5. Click DONE, and then repeat the step above for all the network addresses you want to add.

6. Click SAVE.

You can view the networks that you have authorized in the Summary tab under Security.
Create a Replication User and Grant Privileges
While using logical replication in Google Cloud PostgreSQL, the user must have the cloudsqlsuperuser role, as this role is needed to run the CREATE EXTENSION command.

Create a PostgreSQL user with REPLICATION privileges as follows:

1. Log in to your PostgreSQL database using any SQL client, such as DataGrip, as a super admin and run the following command:

   ```sql
   CREATE USER replication_user WITH REPLICATION IN ROLE cloudsqlsuperuser LOGIN PASSWORD 'secret';
   ```

   Alternatively, set this attribute for an existing user as follows:

   ```sql
   ALTER USER existing_user WITH REPLICATION;
   ```

2. Enter the following commands to provide access to the database user:

   ```sql
   GRANT CONNECT ON DATABASE <database_name> TO <database_username>;
   GRANT USAGE ON SCHEMA <schema_name> TO <database_username>;
   ```

3. Alter the schema's default privileges to grant SELECT privileges on tables to the database user:

   ```sql
   ALTER DEFAULT PRIVILEGES IN SCHEMA <schema_name> GRANT SELECT ON TABLES TO <database_username>;
   ```
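To sanity-check the setup, you can query the system catalogs. The snippet below, assuming the user is named replication_user as in the example above, confirms the REPLICATION attribute and the CONNECT grant:

```sql
-- The rolreplication column should be true (t) for the new user.
SELECT rolname, rolreplication
FROM pg_roles
WHERE rolname = 'replication_user';

-- Should return true if CONNECT was granted on the database.
SELECT has_database_privilege('replication_user', '<database_name>', 'CONNECT');
```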
Specify Google Cloud PostgreSQL Connection Settings
1. In the Configure your Google Cloud PostgreSQL Source page, specify the following:

   - Pipeline Name: A unique name for your Pipeline, not exceeding 255 characters.

   - Database Host: The Google Cloud PostgreSQL host's IP address or DNS. For example, 35.220.150.0.

   - Database Port: The port on which your Google Cloud PostgreSQL server listens for connections. Default value: 5432.

   - Database User: The read-only user who has the permission to read tables in your database.

   - Database Password: The password for the read-only user.

   - Select an Ingestion Mode: The desired mode by which you want to ingest data from the Source. This section is expanded by default; you can collapse it by clicking SEE LESS. The available ingestion modes are Logical Replication, Table, and Custom SQL. Logical Replication is the recommended ingestion mode and is selected by default.

     Depending on the ingestion mode you select, you must configure the objects to be replicated. Refer to the section Object and Query Mode Settings for the steps to do this.

     Note: For the Custom SQL ingestion mode, all Events loaded to the Destination are billable.

   - Database Name: The name of an existing database that you want to replicate.

   - Schema Name (Optional): The schema in your database that holds the tables to be replicated. Default value: public.

     Note: The Schema Name field is displayed only for the Table and Custom SQL ingestion modes.

   - Connection Settings:

     - Connect through SSH: Enable this option to connect to Hevo using an SSH tunnel, instead of directly connecting your PostgreSQL database host to Hevo. This provides an additional level of security to your database by not exposing your PostgreSQL setup to the public. Read Connecting Through SSH.

       If this option is turned off, you must whitelist Hevo's IP addresses.

     - Use SSL: Enable this option to use an SSL-encrypted connection. Specify the following:

       - CA File: The file containing the SSL server certificate authority (CA).

       - Client Certificate: The client's public key certificate file.

       - Client Key: The client's private key file.

   - Advanced Settings:

     - Load Historical Data: Applicable for Pipelines with the Logical Replication mode.

       If enabled, the entire table data is fetched during the first run of the Pipeline. If disabled, Hevo loads only the data that was written to your database after the Pipeline was created.

     - Merge Tables: Applicable for Pipelines with the Logical Replication mode.

       If enabled, Hevo merges tables with the same name from different databases while loading the data to the warehouse, and loads the Database Name field with each record. If disabled, the database name is prefixed to each table name. Read How does the Merge Tables feature work?

     - Include New Tables in the Pipeline: Applicable for all ingestion modes except Custom SQL.

       If enabled, Hevo automatically ingests data from tables created in the Source after the Pipeline has been built. These may include completely new tables or previously deleted tables that have been re-created in the Source. If disabled, new and re-created tables are not ingested automatically; they are added in the SKIPPED state in the objects list on the Pipeline Overview page, and you can include them post-Pipeline creation to ingest their data. You can change this setting later.

2. Click TEST CONNECTION. This button is enabled once you specify all the mandatory fields. Hevo's underlying connectivity checker validates the connection settings you provide.

3. Click TEST & CONTINUE to proceed to setting up the Destination. This button is enabled once you specify all the mandatory fields.
Object and Query Mode Settings
Once you have specified the Source connection settings in the section above, do one of the following:

- For Pipelines with the Table or Logical Replication mode:

  1. In the Select Objects page, select the objects you want to replicate and click CONTINUE.

     Note: Each object represents a table in your database.

  2. In the Configure Objects page, specify the query mode you want to use for each selected object.

     Note: Hevo selects XMIN as the default query mode for all PostgreSQL Sources. (A conceptual sketch of this query mode follows this list.)

- For Pipelines with the Custom SQL mode:

  1. In the Provide Query Settings page, enter the custom SQL query to fetch data from the Source.

  2. In the Query Mode drop-down, select the query mode, and click CONTINUE.
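Hevo's ingestion queries are internal to the platform, but as a rough conceptual illustration of the XMIN query mode: the xmin system column on every PostgreSQL row records the transaction that last wrote it, so rows changed since a remembered transaction ID can be selected incrementally. The sketch below uses a hypothetical table, public.orders, and a hypothetical saved offset; it also ignores transaction ID wraparound, which any real implementation must handle.

```sql
-- Conceptual sketch only (not Hevo's actual query): select rows written
-- by transactions newer than the offset remembered from the previous run.
SELECT *, xmin::text::bigint AS last_txid
FROM public.orders
WHERE xmin::text::bigint > 123456;  -- hypothetical saved offset
```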
Additional Information
Read the detailed Hevo documentation for the following related topics:
Source Considerations
- If you add a column with a default value to a table in PostgreSQL, entries are created in the WAL only for the rows that are added or updated after the column is added. As a result, in the case of log-based Pipelines, Hevo cannot capture the column value for the unchanged rows. To capture those values, you need to do one of the following (a hypothetical example follows this list):

  - Restart the historical load for the respective object.

  - Run a query in the Destination to add the column and its value to all rows.

- Google Cloud PostgreSQL does not support logical replication on read replicas. To enable log-based replication, you must select the master database instance.
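As a hypothetical illustration of the first consideration, suppose a column with a default value is added to a Source table named public.orders; the backfill in the second option could then look like the statement below, assuming a Destination that accepts standard SQL:

```sql
-- In the Source: existing rows take the default without any WAL entries
-- being written for them, so log-based Pipelines do not see the value.
ALTER TABLE public.orders ADD COLUMN status TEXT DEFAULT 'active';

-- In the Destination (option 2): backfill the column for the rows that
-- were never updated in the Source after the column was added.
UPDATE orders SET status = 'active' WHERE status IS NULL;
```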
Limitations
- The data type Array in the Source is automatically mapped to Varchar at the Destination. No other mapping is currently supported.

- Hevo does not support data replication from foreign tables, temporary tables, and views.

- If your Source data has indexes and constraints, you must re-create them in your Destination table, as Hevo does not replicate them from the Source; it only creates the existing primary keys. (A hypothetical example follows this list.)

- Hevo does not set the __hevo_marked_deleted field to True for data deleted from the Source table using the TRUNCATE command. This could result in a data mismatch between the Source and Destination tables.
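As a hypothetical example for the indexes-and-constraints limitation, assuming a replicated table named orders and a Destination database that supports these standard SQL statements:

```sql
-- Re-create a secondary index and a NOT NULL constraint that existed in
-- the Source; Hevo creates only the primary keys in the Destination.
CREATE INDEX idx_orders_created_at ON orders (created_at);
ALTER TABLE orders ALTER COLUMN customer_id SET NOT NULL;
```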
See Also
Revision History
Refer to the following table for the list of key updates made to this page:
Date | Release | Description of Change |
---|---|---|
Nov-03-2023 | NA | Renamed section, Object Settings to Object and Query Mode Settings. |
Oct-03-2023 | NA | Updated sections: - Set up Log-based Incremental Replication and Whitelist Hevo’s IP Addresses to reflect the changed Google Cloud SQL UI, - Specify Google Cloud PostgreSQL Connection Settings to describe the schema name displayed in Table and Custom SQL ingestion modes, - Source Considerations to add information about logical replication not supported on read replicas, and - Limitations to add limitations about data replicated by Hevo. |
Sep-19-2023 | NA | Updated section, Limitations to add information about Hevo not supporting data replication from certain tables. |
Jun-26-2023 | NA | Added section, Source Considerations. |
Apr-21-2023 | NA | Updated section, Specify Google Cloud PostgreSQL Connection Settings to add a note to inform users that all loaded Events are billable for Custom SQL mode-based Pipelines. |
Mar-09-2023 | 2.09 | Updated section, Specify Google Cloud PostgreSQL Connection Settings to mention about SEE MORE in the Select an Ingestion Mode section. |
Dec-19-2022 | 2.04 | Updated section, Specify Google Cloud PostgreSQL Connection Settings to add information that you must specify all fields to create a Pipeline. |
Dec-07-2022 | 2.03 | Updated section, Specify Google Cloud PostgreSQL Connection Settings to mention about including skipped objects post-Pipeline creation. |
Dec-07-2022 | 2.03 | Updated section, Specify Google Cloud PostgreSQL Connection Settings to mention about the connectivity checker. |
Jul-04-2022 | NA | - Added sections, Specify Google Cloud PostgreSQL Connection Settings and Object Settings. |
Mar-07-2022 | 1.83 | Updated the section, Prerequisites with a note about the logical replication. |
Jan-24-2022 | 1.80 | Removed from Limitations that Hevo does not support UUID datatype as primary key. |
Dec-20-2021 | 1.78 | Updated section, Set up Log-based Incremental Replication. |
Sep-09-2021 | 1.71 | - Updated the section, Limitations to include information about columns with the UUID data type not being supported as a primary key. - Added WAL replication mode in the Prerequisites section. - Replaced the section Grant Privileges to the User with Create a Replication User and Grant Privileges. |
Jun-14-2021 | 1.65 | Updated the Grant Privileges to the User section to include latest commands. |