Amazon RDS Oracle
On This Page
- Set up Redo Logs for Replication
- Whitelist Hevo’s IP Addresses
- Create a Database User and Grant Privileges
- Retrieve the Hostname, Service ID, and Port Number
- Specify Amazon RDS Oracle Connection Settings
- Revision History
Amazon RDS for Oracle is a fully managed commercial database that makes it easy to set up, operate, and scale Oracle deployments in the cloud. Amazon RDS frees you up to focus on innovation and application development by managing time-consuming database administration tasks including provisioning, backups, software patching, monitoring, and hardware scaling.
Refer to Oracle on Amazon RDS for the supported Oracle database versions.
Prerequisites

- The Oracle database version is 12c or higher.
- Redo Log replication is enabled if the Pipeline mode is Redo Log.
- Hevo’s IP addresses are whitelisted. To do this, the database user must have CREATE/MANAGE SECURITY GROUPS privileges in Amazon RDS.
- The database hostname and port number of the Source instance are available.
- You are assigned the Team Administrator, Team Collaborator, or Pipeline Administrator role in Hevo to create the Pipeline.
Perform the following steps to configure your Amazon RDS Oracle Source:
Set up Redo Logs for Replication
A redo log is a collection of log files that record information about modifications made to data objects on an Oracle server instance. Oracle LogMiner uses redo logs to track these modifications and determine the rows requiring updates in the Destination system.
To set up redo logs for replication, connect to your Oracle server and perform the following steps:
1. Enable ARCHIVELOG mode

You need to enable archiving for redo logs. To do this:

Check the current log mode:

SELECT LOG_MODE FROM "V$DATABASE";

This should be ARCHIVELOG. Switch to ARCHIVELOG mode if the current log mode is NOARCHIVELOG.

Set the archive log retention period:

BEGIN rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 72); END;

Note: The minimum value for archivelog retention hours is 72. This avoids any data loss that may occur due to downtimes in the Source database.
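To confirm the retention setting, you can print the instance configuration from SQL*Plus or a similar client. This is an illustrative check, not part of the original steps, and it assumes your user has EXECUTE access on the rdsadmin.rdsadmin_util package:

-- Print the current RDS configuration, including archivelog retention hours
SET SERVEROUTPUT ON
EXEC rdsadmin.rdsadmin_util.show_configuration;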
2. Enable supplemental logging
Supplemental logging ensures that the Oracle server logs all the columns of every changed Event.
Check if supplemental logging is enabled:
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM "V$DATABASE";
This returns one of the following values:

- YES: Supplemental logging is enabled.
- IMPLICIT: Supplemental logging is disabled.
If the value returned in the previous step is IMPLICIT, enable supplemental logging for all columns:
BEGIN rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','ALL'); END;
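To verify the change, you can re-run the check against the database properties. This is an illustrative query, assuming read access to V$DATABASE:

-- SUPPLEMENTAL_LOG_DATA_ALL should return YES after the ADD ALL call above
SELECT SUPPLEMENTAL_LOG_DATA_ALL FROM "V$DATABASE";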
Optionally, if you are using Oracle 12c or later, grant the following privilege:
GRANT LOGMINING TO "<user_name>";
Whitelist Hevo’s IP Addresses
You need to whitelist the Hevo IP addresses for your region to enable Hevo to connect to your Amazon RDS Oracle database. You can do this by creating a VPC security group and adding inbound access rules for the Hevo IP addresses. A VPC security group controls access to the database instances and virtual server instances inside a VPC. To do this:
1. Create a VPC security group
Access the Amazon RDS console.
In the left navigation pane, under Dashboard, select Databases (or Instances if you are using an older version).
In the Databases section on the right, select the read replica or master database instance that you want to connect.
In the Connectivity & Security tab, click the hyperlink under Security, VPC security groups.
In the Security Groups page, click Create security group.
Alternatively, click an existing security group that you have used for other database instances and modify it, or use the default security group.
In the Create security group page, specify the following:
Security group name: An appropriate name for the security group.
Description: A brief description of the security group.
VPC ID: A unique identifier for the VPC.
2. Add inbound rules
In the Inbound Rules section:
Click Add Rule and specify the following:
Port range: The port of your Amazon RDS Oracle instance. For example, 1521.
Source: Select Custom from the drop-down and enter Hevo’s IP addresses for your region.
Click Save rule.
Add more rules for all the Hevo IPs you want to whitelist.
Create a Database User and Grant Privileges
You can log in to Oracle as the master user or create a new database user for Hevo.
1. Create a database user (optional)
If a database user does not already exist, create one by logging in to Oracle as the master user and entering the following commands:
CREATE USER hevo IDENTIFIED BY "<password>";
GRANT CONNECT, CREATE SESSION TO hevo;
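As a quick sanity check (illustrative; assumes access to the DBA_USERS dictionary view), confirm that the user exists and is open:

-- Unquoted identifiers are stored in uppercase
SELECT USERNAME, ACCOUNT_STATUS FROM DBA_USERS WHERE USERNAME = 'HEVO';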
2. Grant privileges to the user
The database user you specify in the Hevo Pipeline must have SELECT privileges on the tables to be replicated. To assign this privilege, log in to Oracle as the master user or a user with the GRANT privilege and enter the following commands:
Grant SELECT privilege to all or specific tables:
-- Grant access to all tables
GRANT SELECT ANY TABLE TO hevo;

-- Grant access to specific tables
GRANT SELECT ON "<schema>"."<table>" TO hevo;
Optionally, if you are using Redo Logs as the Pipeline mode, grant access to Oracle LogMiner:
GRANT SELECT ON SYS.V_$DATABASE TO hevo;
GRANT SELECT ON SYS.V_$ARCHIVED_LOG TO hevo;
GRANT SELECT ON SYS.V_$LOGMNR_CONTENTS TO hevo;
GRANT EXECUTE ON DBMS_LOGMNR TO hevo;
GRANT EXECUTE ON DBMS_LOGMNR_D TO hevo;
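To confirm the grants before you create the Pipeline, you can list the privileges held by the user. This is an illustrative check, assuming access to the DBA_* dictionary views:

-- System privileges; should include SELECT ANY TABLE if granted above
SELECT PRIVILEGE FROM DBA_SYS_PRIVS WHERE GRANTEE = 'HEVO';

-- Object privileges on the V_$ views and the LogMiner packages
SELECT TABLE_NAME, PRIVILEGE FROM DBA_TAB_PRIVS WHERE GRANTEE = 'HEVO';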
You can now try connecting to Oracle using the Redo Log Pipeline mode with the user configured in the steps above.
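Optionally, you can run a short LogMiner session as the hevo user to confirm the grants work end to end. This is an illustrative sketch, not part of the official steps; <archived_log_file> is a placeholder for a file name returned by the first query:

-- Pick a recent archived redo log file to test with
SELECT NAME FROM SYS.V_$ARCHIVED_LOG WHERE ROWNUM = 1;

-- Start LogMiner on that file using the online catalog
BEGIN DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '<archived_log_file>', OPTIONS => DBMS_LOGMNR.NEW); END;
BEGIN DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG); END;

-- If the grants are correct, this query succeeds instead of raising ORA-01031
SELECT COUNT(*) FROM SYS.V_$LOGMNR_CONTENTS;

-- Close the LogMiner session
BEGIN DBMS_LOGMNR.END_LOGMNR; END;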
Retrieve the Hostname, Service ID, and Port Number
Note: RDS hostnames start with your database name and end with rds.amazonaws.com. For example:

Host: oracle-database-1.xxxxxxxxx.rds.amazonaws.com
Service ID: ORCL
In the left navigation pane of the Amazon RDS console, click Databases (or Instances if you are using an older version).
In the Databases section on the right, click the DB identifier of the Amazon RDS Oracle instance.
Click the Connectivity & Security tab, and copy the values under Endpoint and Port as the hostname and port number. You will specify these while creating your Hevo Pipeline.
Click the Configuration tab, and copy the value under DB name. You will use this DB name as the Service Name while creating your Pipeline.
Specify Amazon RDS Oracle Connection Settings
In the Configure your Amazon RDS Oracle Source page, specify the following:
Pipeline Name: A unique name for your Pipeline, not exceeding 255 characters.
Database Host: The Oracle database host’s IP address or DNS.
The following table lists a few examples of Oracle hosts:

|Variant||Host|
|Amazon RDS Oracle||oracle-rds-1.xxxxx.rds.amazonaws.com|
|Generic Oracle||192.168.2.5|
Note: For URL-based hostnames, exclude the http:// or https:// part. For example, if the hostname URL is https://oracle-rds-1.xxxxx.rds.amazonaws.com, enter oracle-rds-1.xxxxx.rds.amazonaws.com.
Database Port: The port on which your Oracle server is listening for connections. Default value: 1521.
Database User: The authenticated user who has the permissions to read tables in your database.
Database Password: The password for the database user.
Select an Ingestion Mode: The desired mode by which you want to ingest data from the Source. You can expand this section by clicking SEE MORE to view the list of ingestion modes to choose from. Default value: RedoLog. The available Ingestion Modes are RedoLog, Table, and Custom SQL.
For ingestion mode as RedoLog or Table, you can configure the objects to be replicated. Refer to section, Objects and Query mode.
For Pipelines created after Release 1.96, Hevo supports the RedoLog ingestion mode for Oracle Database 19c and higher.
Note: For Custom SQL ingestion mode, all Events loaded to the Destination are billable.
Service Name: An alias of the unique Oracle database to which Hevo connects. To retrieve the Service Name, open your Oracle server in any SQL client tool as a database user with the SYSDBA privilege and enter the following command:
select name from v$database;
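If you do not have SYSDBA access, querying the service_names initialization parameter is an alternative, illustrative way to find a usable service name (assuming SELECT access on V$PARAMETER):

SELECT VALUE FROM V$PARAMETER WHERE NAME = 'service_names';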
Owner: The name of the schema owner to identify the schemas for ingesting the data. Data of all the schemas defined by the specified owner are ingested for replication. This is required if Ingestion mode is Table or Custom SQL.
Load All Schema: Select this toggle option to load data for all the schemas. This is applicable when Ingestion mode is Redo Log.
Connect through SSH: Enable this option to connect to Hevo using an SSH tunnel, instead of directly connecting your Oracle database host to Hevo. This provides an additional level of security to your database by not exposing your Oracle setup to the public. Read Connecting Through SSH.
If this option is disabled, you must whitelist Hevo’s IP addresses to allow Hevo to connect to your Oracle database host. Refer to the content for your Oracle variant for steps to do this.
Advanced Settings:
Load Historical Data: Applicable for Pipelines with RedoLog mode. If this option is enabled, the entire table data is fetched during the first run of the Pipeline. If disabled, Hevo loads only the data that was written in your database after the time of creation of the Pipeline.
Merge Tables: Applicable for Pipelines with RedoLog mode. If this option is enabled, Hevo merges tables with the same name from different databases while loading the data to the warehouse. Hevo loads the Database Name field with each record. If disabled, the database name is prefixed to each table name. Read How does the Merge Tables feature work?.
Include New Tables in the Pipeline: Applicable for all Ingestion modes except Custom SQL. If enabled, Hevo automatically ingests data from tables created after the Pipeline has been built. If disabled, the new tables are listed in the Pipeline Detailed View in Skipped state; you can include them post-Pipeline creation and load their historical data.
You can change this setting later.
Click TEST CONNECTION. This button is enabled once you specify all the mandatory fields. Hevo’s underlying connectivity checker validates the connection settings you provide.
Click TEST & CONTINUE to proceed to set up the Destination. This button is enabled once you specify all the mandatory fields.
Limitations

- Hevo does not support the flashback method to track incremental updates.
- Redo Log does not support user-defined data types. Therefore, fields with such data types are not captured in the log and are lost.

See Also

Read the detailed Hevo documentation for the following related topics:
- Connecting Through Reverse SSH Tunnel
- Oracle User-Defined Types
- Redo Log
- Pipeline failure due to Redo Log expiry
Revision History

Refer to the following table for the list of key updates made to this page:
|Date||Release||Description of Change|
|Apr-21-2023||NA||Updated section, Specify Amazon RDS Oracle Connection Settings to add a note to inform users that all loaded Events are billable for Custom SQL mode-based Pipelines.|
|Mar-09-2023||2.09||Updated section, Specify Amazon RDS Oracle Connection Settings to mention about SEE MORE in the Select an Ingestion Mode section.|
|Dec-19-2022||2.04||Updated section, Specify Amazon RDS Oracle Connection Settings to add information that you must specify all fields to create a Pipeline.|
|Dec-07-2022||2.03||Updated section, Specify Oracle Connection Settings to mention about including skipped objects post-Pipeline creation.|
|Dec-07-2022||2.03||Updated section, Specify Oracle Connection Settings to mention about the connectivity checker.|
|Feb-07-2022||1.81||Updated section, Whitelist Hevo’s IP Address to remove details about Outbound rules as they are not required.|
|Dec-06-2021||1.77||Added a See Also link to the Pipeline failure due to Redo Log expiry page.|
|Nov-22-2021||NA||Updated the Limitations section.|
|Mar-09-2021||1.58||Added section Retrieve the Hostname, Service ID, and Port Number.|