Amplitude Analytics
Amplitude Analytics helps generate thorough product analytics of web and mobile application usage to help you make data-driven decisions. You can replicate data from your Amplitude account to a database, data warehouse, or file storage system using Hevo Pipelines.
Note: Hevo fetches data from Amplitude Analytics in a zipped folder to perform the data query.
For creating Pipelines using this Source, Hevo provides you with a fully managed BigQuery data warehouse as a possible Destination. This option remains available until you set up your first BigQuery Destination, irrespective of any other Destinations that you may have. With the managed warehouse, you are charged only the cost that Hevo incurs for your project in Google BigQuery. The invoice is generated at the end of each month, and payment is collected through the payment instrument you have set up. You can create your Pipeline and directly start analyzing your Source data. Read Hevo Managed Google BigQuery.
Prerequisites
- An active account on Amplitude with access to at least one project.
Retrieving the Amplitude API Key and Secret
1. Log in to your Amplitude account.
2. In the left navigation pane, scroll down and click Settings.
3. In the Org Settings page, click Projects in the left pane, and select the project whose data you want to sync.
4. In the project details, copy the API Key and Secret Key shown on the screen, and save them securely.
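If you want to confirm that the copied key pair is valid before creating the Pipeline, you can make a small test call to Amplitude's Export API, which accepts the API Key and Secret Key as HTTP Basic Auth credentials. This is only an illustrative sketch and is not part of Hevo's setup flow; the endpoint, time window, and placeholder values are assumptions based on Amplitude's public API.

```python
# Illustrative check of an Amplitude API Key / Secret Key pair (not part of Hevo).
# Assumes Amplitude's public Export API endpoint and Basic Auth scheme.
import requests

API_KEY = "your-api-key"        # placeholder: value copied from the project settings
SECRET_KEY = "your-secret-key"  # placeholder: value copied from the project settings

# Request a one-hour export window; the Export API expects YYYYMMDDTHH timestamps.
params = {"start": "20240101T00", "end": "20240101T01"}

response = requests.get(
    "https://amplitude.com/api/2/export",
    params=params,
    auth=(API_KEY, SECRET_KEY),  # API Key and Secret Key sent as Basic Auth
    timeout=60,
)

# 200 means the credentials work and data exists for the window; 404 can simply
# mean no data in that window, while 401/403 indicate invalid credentials.
print(response.status_code)
```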
Configuring Amplitude Analytics as a Source
Perform the following steps to configure Amplitude Analytics as the Source in your Pipeline:
1. Click PIPELINES in the Asset Palette.
2. Click + CREATE in the Pipelines List View.
3. In the Select Source Type page, select Amplitude Analytics.
4. In the Configure your Amplitude Analytics Source page, specify the following:
   - Pipeline Name: A unique name for your Pipeline.
   - API Key: The API key you retrieved from your Amplitude account.
   - Secret Key: The secret key you retrieved from your Amplitude account.
   - Historical Sync Duration: The duration for which the existing data in the Source must be ingested. Default value: 3 Months.
5. Click TEST & CONTINUE.

Proceed to configuring the data ingestion and setting up the Destination.
Data Replication

| Default Pipeline Frequency | Minimum Pipeline Frequency | Maximum Pipeline Frequency | Custom Frequency Range (Hrs) |
|---|---|---|---|
| 1 Hr | 1 Hr | 24 Hrs | 1-24 |

The custom frequency must be set in hours, as an integer value. For example, 1, 2, or 3, but not 1.5 or 1.75.
The size of the zipped folder from the Source must not exceed 4 GB; otherwise, the data query fails with the exception, Invalid CEN header. In that case, Hevo automatically adjusts the ingestion duration of the historical load and the incremental data, ingesting the data as smaller zip files over multiple cycles.
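As a rough illustration of how a large export can be broken into smaller zip files over multiple cycles, the sketch below splits a long time range into shorter export windows. It is a conceptual example only, not Hevo's internal implementation; the 24-hour chunk size is an assumption.

```python
# Conceptual sketch of splitting a long export range into smaller windows so that
# each exported zip stays small. Not Hevo's internal implementation.
from datetime import datetime, timedelta
from typing import List, Tuple

def split_export_window(
    start: datetime, end: datetime, chunk_hours: int = 24
) -> List[Tuple[datetime, datetime]]:
    """Return consecutive (start, end) windows, each at most chunk_hours long."""
    windows = []
    cursor = start
    while cursor < end:
        window_end = min(cursor + timedelta(hours=chunk_hours), end)
        windows.append((cursor, window_end))
        cursor = window_end
    return windows

# Example: a three-month historical sync broken into daily export windows.
windows = split_export_window(datetime(2024, 1, 1), datetime(2024, 4, 1))
print(len(windows), windows[0])
```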
Historical Data: In the first run of the Pipeline, Hevo ingests the historical data for all the objects. The data is ingested based on the historical sync duration selected while creating the Pipeline, and loaded to the Destination. Default duration: 3 Months.
Incremental Data: Once the historical load is complete, all new and updated records are synchronized with your Destination as per the Pipeline frequency.
The following tables (objects) are created at the Destination when you run the Pipeline:

| Object | Description |
|---|---|
| Cohort | A list of all unique behavioral cohorts created within Amplitude. |
| Event | An action that a user takes in your product, such as pushing a button, completing a level, or making a payment. |
| Event Category | All event data is mapped to an Event Category entity, which helps categorize and describe live events and properties. |
| Event Type | All events are mapped to an Event Type entity, which is maintained in this table. |
| Group | Each grouping of users created in Amplitude, along with its dedicated name and description. |
| User | Any person who has logged at least one event and to whom events are attributed. |
| User Cohort | A mapping between a User and the User Cohort they belong to. |
| User Group | Groups of users defined by their actions within a specific time period. |
Schema and Primary Keys
Hevo loads the records to the Destination using a fixed schema.
The User object defines each unique user through a combination of User ID, Amplitude ID, and Device ID. You can reference these three columns while making joins to the Event object.
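For example, if you pull both tables into Python, a join on these three identifiers could look like the hedged sketch below; the sanitized table and column names (event, user, user_id, amplitude_id, device_id) are assumptions, so check the actual schema in your Destination.

```python
# Hedged sketch: joining Event records to User attributes on the three identifier
# columns. Table and column names are assumptions; verify them in your Destination.
import pandas as pd

event = pd.DataFrame(
    {"user_id": ["u1"], "amplitude_id": [101], "device_id": ["d1"], "event_type": ["purchase"]}
)
user = pd.DataFrame(
    {"user_id": ["u1"], "amplitude_id": [101], "device_id": ["d1"], "country": ["US"]}
)

# Left join keeps every Event row and attaches the matching User attributes.
enriched = event.merge(user, on=["user_id", "amplitude_id", "device_id"], how="left")
print(enriched)
```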
Read the detailed Hevo documentation for related topics.
There is a two-hour delay before the data exported from Amplitude Analytics is loaded into your data warehouse.
For example, data sent between 8 and 9 PM begins to load at 9 PM and becomes available in your Destination after 11 PM, depending on the load frequency you have set.
Refer to the following table for the list of key updates made to this page:
| Date | Release | Description of Change |
|---|---|---|
| Jun-21-2022 | 1.91 | Modified the section, Configuring Amplitude Analytics as a Source, to reflect the latest UI changes. Updated the Pipeline frequency information in the Data Replication section. |
| Mar-07-2022 | 1.83 | Updated the introduction paragraph and the section, Data Replication, about automatic adjustment of ingestion duration. |
| Oct-25-2021 | NA | Added the Pipeline frequency information in the Data Replication section. |
| Apr-06-2021 | 1.60 | Added a note to the section, Schema and Primary Keys. Updated the ERD. |