Amazon Aurora MySQL
On This Page
- Prerequisites
- Create a Read Replica (Optional)
- Set up MySQL Binary Logs for Replication
- Whitelist Hevo’s IP Addresses
- Create a Database User and Grant Privileges
- Retrieve the Hostname and Port Number (Optional)
- Specify Amazon Aurora MySQL Connection Settings
- Object and Query Mode Settings
- Data Replication
- Source Considerations
- Limitations
- Revision History
Amazon Aurora is a drop-in replacement for MySQL that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups.
You can ingest data from your Amazon Aurora MySQL database using Hevo Pipelines and replicate it to a Destination of your choice.
Prerequisites
- The Amazon Aurora MySQL instance (not a localhost) is running.
- The MySQL version is 5.5 or higher. You can choose the MySQL version while creating the instance.
- If the ingestion mode is BinLog, the database that you are connecting to is a master instance, as Amazon Aurora MySQL does not support BinLog replication on read replicas. You can verify this, along with the MySQL version, using the queries shown after this list.
- SELECT and REPLICATION privileges are granted to the database user.
  Note: We recommend that you create a database user for configuring your Amazon Aurora MySQL Source in Hevo. However, if you already have one, refer to the section Grant privileges to the user.
- The database hostname and port number of the Source instance are available.
- You are assigned the Team Administrator, Team Collaborator, or Pipeline Administrator role in Hevo to create the Pipeline.
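If you want to confirm the version and instance-role prerequisites, the following queries are a minimal sanity-check sketch. They are standard MySQL/Aurora commands and assume only that you can connect to the instance with a SQL client:

```sql
-- Check the server version (must be 5.5 or higher):
SELECT VERSION();

-- Check whether this instance is a writer or a reader. On Aurora,
-- @@innodb_read_only is 0 on the primary (writer) instance and 1 on
-- read replicas. For BinLog ingestion, connect to an instance returning 0.
SELECT @@innodb_read_only;
```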
Perform the following steps to configure your Amazon Aurora MySQL Source:
Create a Read Replica (Optional)
To use an existing read replica or connect Hevo to your master database, skip to the Set up MySQL Binary Logs for Replication section.
An Aurora database cluster with single-master replication has one primary database instance and up to 15 Aurora Replicas. To create a read replica:
- Open the Amazon RDS console.
- In the left navigation pane, under Dashboard, click Databases (or Instances if you are using an older version).
- In the Databases section on the right, click the DB identifier of the Aurora MySQL instance you want to replicate. For example, database-1.
  Note: The Role column indicates whether your Aurora MySQL instance is Provisioned or Serverless. If you are using a Serverless DB cluster, you can skip creating a replica.
- In the Actions drop-down, click Add reader.
- In the Settings panel, specify the following:
  - Aurora replica source: The master or primary database instance being replicated.
  - DB instance identifier: The replica instance you are creating.
- Under Connectivity, Public access, select Publicly accessible to allow connections to the database instance via a public IP address, such as Hevo’s IP address.
- Scroll down and click Add reader.
You can now see the read replica instance in the Databases section. Use this replica for all further steps.
Set up MySQL Binary Logs for Replication
A binary log is a collection of log files that records information about data modifications and data object modifications made on a MySQL server instance. Typically, binary logs are used for data replication and data recovery.
Hevo supports data ingestion for replication from MySQL servers via binary logs (BinLog). For this, binary logging must be enabled on your MySQL instance.
To enable binary logging for an Aurora DB cluster, follow these steps:
1. Configure the parameters
- Open the Amazon RDS console.
- In the left navigation pane, click Databases (or Instances if you are using an older version).
- In the Databases section on the right, click the DB instance that you want to connect to.
- Click the Configuration tab, and then click the link text under DB cluster parameter group.
  Note: If you are using the default Aurora DB cluster parameter group, create a new DB cluster parameter group with Type as DB cluster parameter group.
- Click Edit.
- Update the values of the parameters as follows:
  Parameter Name | Value
  ---|---
  binlog_format | ROW
  binlog_row_image | full
- Click Save Changes.
- Reboot the database instance that you are using to connect to Hevo, to apply the above changes. To do this:
  - In the left navigation pane, under Dashboard, click Databases.
  - In the Databases section on the right, select the DB identifier of the Aurora MySQL instance you are replicating.
  - In the Actions drop-down, click Reboot.
    Note: If you reboot a database instance with the Writer instance role in the DB cluster, all the remaining reader instances of that database in the cluster are rebooted as well.
  - On the Reboot DB Instance page, click Confirm to reboot your DB instance.
Binary logging is now enabled for your Aurora MySQL instance.
Read BinLog to understand how database replication works in MySQL.
The replication reference guide on MySQL’s documentation portal provides a complete reference of the options available for replication and binary logging.
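After the reboot, you can optionally confirm that the parameter changes are active by running the following queries from a SQL client connected to the instance (a verification sketch, not a required step):

```sql
-- Binary logging must be on, with row-based format and full row images.
SHOW VARIABLES LIKE 'log_bin';          -- expected: ON
SHOW VARIABLES LIKE 'binlog_format';    -- expected: ROW
SHOW VARIABLES LIKE 'binlog_row_image'; -- expected: FULL
```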
2. Configure the BinLog retention period
- Log in to your Amazon Aurora MySQL database instance with ADMIN privileges.
- Run the following command to view the current BinLog retention period (in hours):
  call mysql.rds_show_configuration;
- If the BinLog retention period is less than 72 hours, run the following command to set it to at least 72 hours (three days):
  call mysql.rds_set_configuration('binlog retention hours', 72);
Whitelist Hevo’s IP Addresses
You need to whitelist the Hevo IP address for your region to enable Hevo to connect to your Amazon Aurora MySQL database. To do this:
- Open the Amazon RDS console.
- In the left navigation pane, click Databases (or Instances if you are using an older version).
- In the Databases section on the right, click the DB identifier of the Amazon Aurora instance on which you want to configure the security group.
- In the Connectivity & security tab, click the link text under Security, VPC security groups.
- On the Security groups page, select the check box for your Security group ID, and from the Actions drop-down, click Edit inbound rules.
- On the Edit inbound rules page:
  - Click Add rule.
  - Add a new rule with Hevo’s IP address for your region to give access to the Amazon Aurora MySQL instance.
  - Click Save rules.
Create a Database User and Grant Privileges
1. Create a database user (Optional)
Perform the following steps to create a database user in your Amazon Aurora MySQL database:
- Connect to your Amazon Aurora MySQL database as a root user with an SQL client tool, such as MySQL Workbench.
- Create a database user:
  CREATE USER <username>@'%' IDENTIFIED BY '<password>';
  Note: Replace the placeholder values in the command above with your own. For example, <username> with hevo.
2. Grant privileges to the user
The database user specified in the Hevo Pipeline must have the following global privileges:
- SELECT
- SUPER or (REPLICATION CLIENT and REPLICATION SLAVE)
Perform the following steps to set up these privileges:
- Connect to your Amazon Aurora MySQL database as a root user with an SQL client tool, such as MySQL Workbench.
- Grant SELECT and REPLICATION privileges to the user:
  GRANT SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO <username>@'%';
- Allow Hevo to access your database:
  GRANT SELECT ON <database-name>.* TO <username>;
- If the ingestion mode is BinLog, grant the database user privileges to read the BinLog settings:
  GRANT EXECUTE ON PROCEDURE mysql.rds_show_configuration TO '<username>'@'<hostname>';
Note: Replace the placeholder values in the commands above with your own. For example, <username> with hevo.
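Optionally, you can verify the grants before proceeding. The sketch below assumes the example username hevo from the note above:

```sql
-- List the privileges held by the Hevo database user.
SHOW GRANTS FOR 'hevo'@'%';
-- The output should include SELECT, REPLICATION CLIENT, and REPLICATION SLAVE
-- ON *.*, and, for BinLog mode, EXECUTE on mysql.rds_show_configuration.
```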
Retrieve the Hostname and Port Number (Optional)
Note: The Amazon Aurora MySQL hostnames start with your database name and end with rds.amazonaws.com.
For example:
Host: mysql-rds-replica-1.xxxxxxxxx.rds.amazonaws.com
Port: 3306
- In the left navigation pane of the Amazon RDS console, click Databases (or Instances if you are using an older version).
- In the Databases section on the right, click the DB identifier of the Amazon Aurora MySQL instance.
- Click the Connectivity & security tab, and copy the values under Endpoint and Port as the hostname and port number. You will specify these while creating your Hevo Pipeline.
Specify Amazon Aurora MySQL Connection Settings
Perform the following steps to configure Amazon Aurora MySQL as a Source in Hevo:
- Click PIPELINES in the Navigation Bar.
- Click + CREATE PIPELINE in the Pipelines List View.
- On the Select Source Type page, select Amazon Aurora MySQL.
- On the Configure your Amazon Aurora MySQL Source page, specify the following:
  - Pipeline Name: A unique name for your Pipeline, not exceeding 255 characters.
  - Database Host: The MySQL host’s IP address or DNS.
    The following table lists a few examples of MySQL hosts:
    Variant | Host
    ---|---
    Amazon RDS MySQL | mysql-rds-1.xxxxx.rds.amazonaws.com
    Azure MySQL | mysql.database.windows.net
    Generic MySQL | 10.123.10.001 or mysql-replica.westeros.inc
    Google Cloud MySQL | 35.220.150.0
    Note: For URL-based hostnames, exclude the http:// or https:// part. For example, if the hostname URL is http://mysql-replica.westeros.inc, enter mysql-replica.westeros.inc.
  - Database Port: The port on which your Amazon Aurora MySQL server listens for connections. Default value: 3306.
  - Database User: The authenticated user who has permission to read tables in your database.
  - Database Password: The password for the database user.
  - Select an Ingestion Mode: The desired mode by which you want to ingest data from the Source. You can expand this section by clicking SEE MORE to view the list of ingestion modes to choose from. Default value: BinLog. The available ingestion modes are BinLog, Table, and Custom SQL.
    Depending on the ingestion mode you select, you must configure the objects to be replicated. Refer to the section Object and Query Mode Settings for the steps to do this.
    Note: For Custom SQL ingestion mode, all Events loaded to the Destination are billable.
  - Database Name: The database you want to load data from, if the Pipeline mode is Table or Custom SQL.
  - Connection Settings:
    - Connect through SSH: Enable this option to connect to Hevo using an SSH tunnel, instead of directly connecting your MySQL database host to Hevo. This provides an additional level of security to your database by not exposing your MySQL setup to the public. Read Connecting Through SSH.
      If this option is disabled, you must whitelist Hevo’s IP addresses. Refer to the content for your MySQL variant for the steps to do this.
    - Use SSL: Enable this option to use an SSL-encrypted connection. To enable it, specify the following:
      - CA File: The file containing the SSL server certificate authority (CA).
      - Load all CA Certificates: If selected, Hevo loads all CA certificates (up to 50) from the uploaded CA file; else, it loads only the first certificate.
        Note: Select this check box if you have more than one certificate in your CA file.
      - Client Certificate: The client public key certificate file.
      - Client Key: The client private key file.
  - Advanced Settings:
    - Load All Databases: Applicable for Pipelines with BinLog mode. If this option is enabled, Hevo loads the data from all databases on the selected host; else, specify a comma-separated list of Database Names you want to load data from.
    - Load Historical Data: Applicable for Pipelines with BinLog mode. If this option is enabled, the entire table data is fetched during the first run of the Pipeline. If disabled, Hevo loads only the data that was written to your database after the time of creation of the Pipeline.
    - Merge Tables: Applicable for Pipelines with BinLog mode. If this option is enabled, Hevo merges tables with the same name from different databases while loading the data to the warehouse, and loads the Database Name field with each record. If disabled, the database name is prefixed to each table name. Read How does the Merge Tables feature work?.
    - Include New Tables in the Pipeline: Applicable for all ingestion modes except Custom SQL.
      If enabled, Hevo automatically ingests data from tables created in the Source after the Pipeline has been built. These may include completely new tables or previously deleted tables that have been re-created in the Source.
      If disabled, new and re-created tables are not ingested automatically. They are added in SKIPPED state in the objects list on the Pipeline Overview page, and you can update their status to INCLUDED post-Pipeline creation to ingest their data.
      You can change this setting later.
- Click TEST CONNECTION. This button is enabled once you specify all the mandatory fields. Hevo’s underlying connectivity checker validates the connection settings you provide.
- Click TEST & CONTINUE to proceed with setting up the Destination. This button is enabled once you specify all the mandatory fields.
Object and Query Mode Settings
Once you have specified the Source connection settings above, do one of the following:
- For Pipelines with Table or BinLog mode:
  - On the Select Objects page, select the objects you want to replicate and click CONTINUE.
    Note: Each object represents a table in your database.
  - On the Configure Objects page, specify the query mode you want to use for each selected object.
- For Pipelines with Custom SQL mode:
  - On the Provide Query Settings page, enter the custom SQL query to fetch data from the Source.
  - In the Query Mode drop-down, select the query mode, and click CONTINUE.
Data Replication
For Teams Created | Ingestion Mode | Default Ingestion Frequency | Minimum Ingestion Frequency | Maximum Ingestion Frequency | Custom Frequency Range (in Hrs)
---|---|---|---|---|---
Before Release 2.21 | Table | 15 Mins | 15 Mins | 24 Hrs | 1-24
Before Release 2.21 | Log-based | 5 Mins | 5 Mins | 1 Hr | NA
After Release 2.21 | Table | 6 Hrs | 30 Mins | 24 Hrs | 1-24
After Release 2.21 | Log-based | 30 Mins | 30 Mins | 12 Hrs | 1-24
Note: The custom frequency must be set in hours as an integer value. For example, 1, 2, or 3 but not 1.5 or 1.75.
- Historical Data: In the first run of the Pipeline, Hevo ingests all available data for the selected objects from your Source database.
- Incremental Data: Once the historical load is complete, data is ingested as per the ingestion frequency.
Source Considerations
- MySQL does not generate binary log entries for cascading deletes, so Hevo cannot capture these deletes for log-based Pipelines. The sketch below illustrates this scenario.
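The following hedged sketch uses two hypothetical tables with a foreign key. The cascaded child deletes happen inside the storage engine and are not written to the binary log as separate row events, which is why a log-based Pipeline misses them:

```sql
-- Hypothetical schema: child rows are removed automatically when the
-- parent row is deleted.
CREATE TABLE parent (
  id INT PRIMARY KEY
);

CREATE TABLE child (
  id        INT PRIMARY KEY,
  parent_id INT,
  FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE
);

-- The binary log records only this DELETE on parent. The cascaded deletes
-- on child are performed by the storage engine internally and produce no
-- BinLog row events, so Hevo cannot replicate them to the Destination.
DELETE FROM parent WHERE id = 1;
```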
Limitations
- Hevo only fetches tables from the MySQL database. It does not fetch other entities such as functions, stored procedures, views, and triggers.
  To fetch views, you can create individual Pipelines in Custom SQL mode. However, some limitations may arise based on the type of data synchronization, the query mode, or the number of Events. Contact Hevo Support for more details.
- During the historical load, Hevo reads table definitions directly from the MySQL database schema, whereas for incremental updates, it reads from the BinLog. As a result, certain fields, such as nested JSON, are parsed differently during historical and incremental loads: in the Destination tables, nested JSON fields are parsed as a struct or JSON during historical loads, but as a string during incremental loads. This leads to a data type mismatch between the Source and Destination data, causing Events to be sidelined. A hypothetical example of such a field is shown after this list.
  To ensure that JSON fields are parsed correctly during the historical load, you can apply Transformations to every table containing nested JSON fields. Contact Hevo Support for more details.
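For context, this is the kind of column where the mismatch appears. The table and values below are hypothetical, and the comments restate the behavior described above:

```sql
-- Hypothetical table containing a nested JSON field.
CREATE TABLE orders (
  id      INT PRIMARY KEY,
  details JSON
);

INSERT INTO orders (id, details)
VALUES (1, '{"customer": {"name": "Jane", "city": "Austin"}}');

-- Historical load: Hevo reads the column type (JSON) from the schema, so
-- `details` can land in the Destination as a struct/JSON column.
-- Incremental load: the same value arrives via the BinLog and lands as a
-- string, producing the data type mismatch that sidelines Events.
```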
See Also
- Connecting Through Reverse SSH Tunnel
- Rebooting an Amazon Aurora DB cluster or Amazon Aurora DB instance
Revision History
Refer to the following table for the list of key updates made to this page:
Date | Release | Description of Change |
---|---|---|
Nov-18-2024 | NA | Updated section, Set up MySQL Binary Logs for Replication as per the latest Amazon Aurora MySQL UI and added a sub-section to configure the BinLog retention period. |
Jul-31-2024 | NA | Updated section, Limitations to add information about Hevo reading table definitions differently during historical and incremental loads. |
Apr-29-2024 | NA | Updated section, Specify Amazon Aurora MySQL Connection Settings to include more detailed steps. |
Mar-18-2024 | 2.21.2 | Updated section, Specify Amazon Aurora MySQL Connection Settings to add information about the Load all CA certificates option. |
Mar-05-2024 | 2.21 | Added the Data Replication section. |
Nov-03-2023 | NA | Renamed section, Object Settings to Object and Query Mode Settings. |
Oct-27-2023 | NA | Updated section, Create a Database User and Grant Privileges with the latest steps. |
Jul-25-2023 | NA | Updated section, Create a Database User and Grant Privileges for more clarity. |
Jun-26-2023 | NA | Added section, Source Considerations. |
Apr-21-2023 | NA | Updated section, Specify Amazon Aurora MySQL Connection Settings to add a note to inform users that all loaded Events are billable for Custom SQL mode-based Pipelines. |
Mar-09-2023 | 2.09 | Updated section, Specify Amazon Aurora MySQL Connection Settings to mention about SEE MORE in the Select an Ingestion Mode section. |
Dec-19-2022 | 2.04 | Updated section, Specify Amazon Aurora MySQL Connection Settings to add information that you must specify all fields to create a Pipeline. |
Dec-07-2022 | 2.03 | Updated section, Specify Amazon Aurora MySQL Connection Settings to mention about including skipped objects post-Pipeline creation. |
Dec-07-2022 | 2.03 | Updated section, Specify Amazon Aurora MySQL Connection Settings to mention about the connectivity checker. |
Oct-14-2022 | NA | - Updated section, Set up MySQL Binary Logs for Replication to add information about steps for setting up the binary logs for replication. - Removed section, Source Considerations. |
Oct-13-2022 | 1.99 | Updated section, Specify Amazon Aurora MySQL Connection Settings to reflect the latest UI changes. |
Apr-25-2022 | NA | - Added section, Source Considerations. - Added a prerequisite for connecting to a master database if BinLog replication is required. |
Apr-21-2022 | 1.86 | Updated section, Specify Amazon Aurora MySQL Connection Settings. |
Feb-07-2022 | 1.81 | Updated section, Whitelist Hevo’s IP Address to remove details about Outbound rules as they are not required. |
Jan-03-2022 | 1.79 | Updated the description of the Include New Tables in the Pipeline advance setting in the Specify Amazon Aurora MySQL Connection Settings section. |
Aug-09-2021 | NA | Added a note in Step 3 of section, Create a Database User and Grant Privileges section. |
Jul-26-2021 | 1.68 | Added a note for the Database Host field. |
Jul-12-2021 | NA | Added section, Specify Amazon Aurora MySQL Connection Settings. |