FTP / SFTP
You can load data from files in an FTP location into your Destination database or data warehouse using Hevo Pipelines.
For Pipelines created with this Source, Hevo provides a fully managed BigQuery data warehouse as a possible Destination. This option remains available until you set up your first BigQuery Destination, irrespective of any other Destinations you may have. With the managed warehouse, you are charged only the cost that Hevo incurs for your project in Google BigQuery. The invoice is generated at the end of each month, and payment is collected through the payment instrument you have set up. You can create your Pipeline and start analyzing your Source data right away. Read Hevo Managed Google BigQuery.
Hevo automatically unzips any Gzipped files on ingestion. Further, if a file is updated, it is re-ingested in full, as it is not possible to identify the individual changes within it.
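For illustration, here is a minimal Python sketch of this workflow: gzip a CSV file locally and upload it to the FTP location that a Pipeline reads from. The host, credentials, and paths are hypothetical, and only standard-library modules are used:

```python
import ftplib
import gzip
import shutil

# Gzip the local CSV; Hevo unzips .gz files automatically on ingestion.
with open("orders.csv", "rb") as src, gzip.open("orders.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Upload the file to the directory that the Pipeline's Path Prefix points at.
with ftplib.FTP("ftp.example.com") as ftp:     # hypothetical host
    ftp.login(user="hevo_user", passwd="...")  # hypothetical credentials
    with open("orders.csv.gz", "rb") as f:
        ftp.storbinary("STOR /data/orders/orders.csv.gz", f)
```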
As of Release 1.66, `__hevo_source_modified_at` is uploaded to the Destination as a metadata field. For existing Pipelines that have this field:

- If this field is displayed in the Schema Mapper, you must ignore it and not try to map it to a Destination table column; otherwise, the Pipeline displays an error.
- Hevo automatically loads this information into the `__hevo_source_modified_at` column, which is already present in the Destination table.
- You can, however, continue to use `__hevo_source_modified_at` to create Transformations using the function `event.getSourceModifiedAt()`. Read Metadata Column.

Existing Pipelines that do not have this field are not impacted.
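To make that usage concrete, here is a minimal sketch of a Python Code-based Transformation that reads this value. It assumes the standard Hevo transformation skeleton, and the copied field name modified_at_copy is a hypothetical choice:

```python
from io.hevo.api import Event

def transform(event):
    properties = event.getProperties()
    # event.getSourceModifiedAt() exposes the __hevo_source_modified_at
    # metadata value; copy it into an ordinary Event field (hypothetical name).
    properties['modified_at_copy'] = event.getSourceModifiedAt()
    return event
```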
Configuring FTP/SFTP as a Source
To configure FTP/SFTP as a Source in Hevo:
Click PIPELINES in the Navigation Bar.
Click + CREATE in the Pipeline List View.
In the Select Source Type page, select FTP/SFTP.
In the Configure your FTP/SFTP Source page, specify the following:
Pipeline Name: A unique name for the Pipeline.
Type: Select FTP or SFTP.
Host: The IP address or the DNS for your FTP location.
Port: The port through which Hevo can connect to your FTP/SFTP server. The default port is 21 for FTP and 22 for SFTP.
User: The user ID for logging in to the FTP/SFTP server.
Password: The password of the user logging in to the FTP/SFTP server. The password is optional for SFTP connections. However, in that case, you must add the public key displayed in the UI to the .ssh/authorized_keys file on your SFTP server.
Path Prefix: The path prefix of the data directory. By default, files are listed from the root of the directory.
File Format: The format of the data file in the Source. Hevo supports the CSV, JSON, TSV, and XML file formats to ingest data.
Note: You can select only one file format at a time. If your Source data is in a different format, you can export it to any of the supported formats and then ingest the files (see the conversion sketch after these steps).
Based on the format you select, you must specify some additional settings:
CSV/TSV: Specify the Field Delimiter. This is the character that separates the fields in each line, for example, `\t` or `,`.
Disable the Treat First Row As Column Headers option if the Source data file does not contain column headers. Hevo automatically creates the headers during ingestion. Default setting: Enabled. See Example below.
XML: Enable the Create Events from child nodes option to load each node under the root node in the XML file as a separate Event.
Create Event Types from folders: Enable this option if the prefix path has subdirectories containing files in different formats. Hevo reads each subdirectory as a separate Event Type.
Note: Files located at the prefix path itself (and not in a subdirectory) are ignored.
Convert date/time format fields to timestamp: Enable this option to convert date/time values within the files of the selected folders to Unix timestamps. For example, the date/time value 07/11/2022, 12:39:23 converts to the timestamp 1667804963.
Connect through SSH: Enable this option to connect to Hevo using an SSH tunnel, instead of directly connecting your FTP host to Hevo. Read Connecting Through SSH.
If this option is disabled, you must whitelist Hevo’s IP addresses to allow Hevo to connect to your FTP host.
Click TEST & CONTINUE.
Proceed to configuring the data ingestion and setting up the Destination.
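As mentioned in the file format note above, data in an unsupported format must be exported to a supported one before ingestion. A minimal sketch of one way to do this, assuming a local Excel workbook and the third-party pandas library (file names are hypothetical):

```python
import pandas as pd

# Export an Excel workbook (not ingestible directly) to CSV, a supported
# format, then place the CSV file at the FTP/SFTP Path Prefix.
df = pd.read_excel("orders.xlsx")     # hypothetical input file
df.to_csv("orders.csv", index=False)  # hypothetical output file
```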
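To verify the Convert date/time format fields to timestamp example above: the documented timestamp 1667804963 is reproduced if 07/11/2022, 12:39:23 is interpreted as DD/MM/YYYY in the Asia/Kolkata time zone, which is an assumption on our part:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Parse the example value as DD/MM/YYYY in IST (assumed time zone) and
# convert it to a Unix timestamp.
dt = datetime.strptime("07/11/2022, 12:39:23", "%d/%m/%Y, %H:%M:%S")
print(int(dt.replace(tzinfo=ZoneInfo("Asia/Kolkata")).timestamp()))  # 1667804963
```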
Data Replication

| Default Pipeline Frequency | Minimum Pipeline Frequency | Maximum Pipeline Frequency |
| --- | --- | --- |
| 5 Mins | 5 Mins | 3 Hrs |
Example: Automatic Column Header Creation for CSV Tables
Consider the following data in CSV format, which has no column headers.
CLAY COUNTY,32003,11973623
CLAY COUNTY,32003,46448094
CLAY COUNTY,32003,55206893
CLAY COUNTY,32003,15333743
SUWANNEE COUNTY,32060,85751490
SUWANNEE COUNTY,32062,50972562
ST JOHNS COUNTY,846636,32033,
NASSAU COUNTY,32025,88310177
NASSAU COUNTY,32041,34865452
If you disable the Treat First Row As Column Headers option, Hevo auto-generates the column headers, which you can see in the schema map. The records in the Destination are then loaded under these auto-generated headers.
Refer to the following table for the list of key updates made to this page:
| Date | Release | Description of Change |
| --- | --- | --- |
| Nov-08-2022 | NA | Updated section, Configuring FTP/SFTP as a Source to add information about the Convert date/time format fields to timestamp option. |
| Sep-21-2022 | NA | Added a note in section, Configuring FTP/SFTP as a Source. |
| Apr-11-2022 | 1.86 | Updated section, Configuring FTP/SFTP as a Source to reflect support for the TSV file format. |
| Mar-21-2022 | 1.85 | Removed section, Limitations, as Hevo now supports the UTF-16 encoding format for CSV files. |
| Oct-25-2021 | NA | Added the section, Data Replication. |
| Jun-28-2021 | 1.66 | Updated the page overview with information about the __hevo_source_modified_at metadata field. |
| Feb-22-2021 | NA | Added the limitation about Hevo not supporting the UTF-16 encoding format for CSV data. |