Hevo delivers a user-friendly and reliable data integration platform for organizations with growing data needs. You can use Hevo to automate the process of collecting data from over 100 applications and databases, loading it to a data warehouse, and making it analytics-ready. This enables your analysts to deliver faster analysis and reporting.
Some of the key features of Hevo are discussed here.
Multiple Workspaces within a Domain
This feature allows customers to create multiple workspaces under the same domain name.
For organizations that signed up before Release 2.00: If the domain name is already registered with Hevo, customers can create a new workspace or join an existing one while creating their account. In addition, for each Hevo region, customers can create up to five teams and their respective workspaces.
For customers signing up after Release 2.00: Customers start with a single workspace, allowing them to explore Hevo without the hassle of maintaining multiple teams, workspaces, and pricing plans. Hevo supports maintaining a single account across all Hevo regions, with a maximum of five workspaces. As part of account setup, Hevo creates the first workspace, which is followed by a 30-day cool-off period; once the cool-off period is over, customers can create the next workspace. For each workspace, Hevo automatically selects the nearest region by default, based on the customer's IP address. Customers can switch the region at any time directly from the Hevo UI and create Pipelines in the region of their choice.
Each workspace has its own pricing plan, billing, and payment details that apply to all the regions associated with it. The consumed Events and any On-Demand Credits and On-Demand Events used by a workspace are also billed collectively for all Pipelines created across all regions.
ELT Pipelines with In-flight Data Formatting Capability
Hevo’s no-code ELT (Extract-Load-Transform) solution, Pipelines, is a cloud data integration tool that fetches data from your different data Sources, such as SaaS applications, databases, and file storage systems, and loads it to a database or data warehouse. ELT has emerged as the preferred technique for setting up data Pipelines over the traditional ETL process, in which data loading was slow because complex transformations had to be performed within the Pipeline. Using Hevo’s ELT data Pipelines, your data teams can load high volumes of data easily and quickly and deliver access to fresh, integrated data to analysts.
As the ELT technique loads raw data to your Destination, the data may not match the database or data warehouse table format, because different data Sources store data in different formats. Analysts must run additional computations after loading the data to make it consistent and prepare it for analysis. At Hevo, we believe it is better practice to format and clean the data for the warehouse before loading it. The Python-based and Drag-and-Drop Transformations in Hevo allow you to cleanse and prepare the data in flight. Once the data is in the Destination, you can create SQL-based Models and Workflows to transform it further for analysis. Alternatively, you can use Hevo Activate to collate this disparate data into a SaaS application in BI-ready form.
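To illustrate the kind of cleansing an in-flight Transformation performs, here is a plain-Python sketch. The `clean_event` function, the event shape, and the field names are hypothetical, for illustration only; they are not Hevo's actual Transformation API.

```python
# Illustrative sketch of in-flight event cleansing before loading.
# The event shape and field names are hypothetical, not Hevo internals.

def clean_event(event: dict) -> dict:
    """Normalize a raw Source event so it matches the Destination schema."""
    cleaned = dict(event)
    # Trim whitespace and lowercase the email so joins across Sources match.
    if cleaned.get("email"):
        cleaned["email"] = cleaned["email"].strip().lower()
    # Coerce a string amount such as "1,299.50" to a numeric column.
    if isinstance(cleaned.get("amount"), str):
        cleaned["amount"] = float(cleaned["amount"].replace(",", ""))
    # Drop fields the warehouse tables do not need.
    cleaned.pop("internal_debug_blob", None)
    return cleaned

raw = {"email": "  Jane@Example.COM ", "amount": "1,299.50",
       "internal_debug_blob": "..."}
print(clean_event(raw))  # {'email': 'jane@example.com', 'amount': 1299.5}
```

The same normalization could instead be deferred to SQL-based Models after loading; doing it in flight keeps the Destination tables consistent from the first load.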
Reverse ETL Solution
Hevo’s no-code Reverse ETL (Extract-Transform-Load) solution, Activate, loads data from your data warehouse to various SaaS applications. This enriches your departments’ data in their respective software, such as Salesforce for sales, thus providing data access and synchronization across departments and team members. If you use Hevo Pipelines to load data from your various Sources to the Destination warehouse, then together, Pipelines and Activate build a bi-directional data flow for your data-driven organization.
Draft Pipelines
In Hevo, you can use Draft Pipelines to iterate on Pipelines. Whenever you start creating a Pipeline but either exit the Hevo UI halfway through or need to invite a team member to complete some of the configurations for you, Hevo saves that Pipeline in Draft status. You or any member of your team can resume from where you left off.
Workflows
Workflows let you define dependencies between your Models and Activations by creating a DAG (Directed Acyclic Graph) from within Hevo. You can create simple and complex queries on your Models and Activations to transform your data, and combine them either based on data load conditions in the Destination or without any conditions. The data generated by the Workflow is loaded to either the Destination or the Target, depending on the configuration of the Workflow.
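A Workflow's DAG guarantees that each step runs only after every step it depends on has completed. The ordering logic behind that guarantee can be sketched in plain Python with Kahn's topological sort; the Workflow below, with its Model and Activation names, is hypothetical and not how Hevo represents Workflows internally:

```python
from collections import defaultdict, deque

# Hypothetical Workflow: each node is a Model or Activation; an edge
# points from a step to the steps that consume its output.
workflow = {
    "load_orders":    ["orders_model"],
    "load_users":     ["users_model"],
    "orders_model":   ["revenue_model"],
    "users_model":    ["revenue_model"],
    "revenue_model":  ["crm_activation"],
    "crm_activation": [],
}

def run_order(graph):
    """Return a valid execution order for the DAG (Kahn's algorithm)."""
    indegree = defaultdict(int)
    for node, successors in graph.items():
        indegree.setdefault(node, 0)
        for s in successors:
            indegree[s] += 1
    ready = deque(n for n in graph if indegree[n] == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for s in graph[node]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    if len(order) != len(graph):
        raise ValueError("Cycle detected: not a valid Workflow DAG")
    return order

print(run_order(workflow))
```

Because the graph must be acyclic, the sort always terminates; a cycle (a step that indirectly depends on its own output) is rejected, which is why Workflows require a DAG rather than an arbitrary graph.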
Historical Data Sync
Historical data is all the data available in your Source at the time of creation of the Pipeline. Hevo fetches your historical data using the Recent Data First approach, which loads the data in reverse order, getting you the latest data first.
For database Sources, Hevo fetches all the data available in the selected database(s) and objects as historical data. Hevo uses the primary keys defined in the Source objects to load this data. If primary keys are not present, you can instead specify a timestamp or incrementing column to use for loading data; however, uniqueness of the data may not be ensured.
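When no primary key exists, an incrementing or timestamp column effectively acts as an ingestion cursor: each run fetches only the rows past the last value already seen. A minimal sketch of that cursor logic, with illustrative table and column names that are not Hevo internals:

```python
# Sketch of cursor-based ingestion using an incrementing column when the
# Source object has no primary key. Rows and column names are illustrative.

source_rows = [
    {"seq": 1, "name": "a"},
    {"seq": 2, "name": "b"},
    {"seq": 3, "name": "c"},
]

def fetch_since(rows, cursor_col, last_value):
    """Return rows whose cursor value is strictly greater than last_value."""
    return [r for r in rows if r[cursor_col] > last_value]

first_batch = fetch_since(source_rows, "seq", last_value=0)  # all three rows
next_batch = fetch_since(source_rows, "seq", last_value=3)   # nothing new yet
print(len(first_batch), len(next_batch))  # 3 0
```

Note the caveat from the text: if the cursor column is not unique (e.g. several rows share a timestamp), the same row can be picked up more than once, which is why uniqueness of the loaded data may not be ensured without a primary key.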
For SaaS Sources, Hevo uses a default historical sync duration. You can change it to fetch just the amount of data you need. You can also restart the historical load for one or multiple objects if the ingested data is lost before or after loading to the Destination.
While the aim remains to get your data as early and frequently as possible, the historical sync duration may often be decided by the Source API limits.
Hevo loads your historical data for free, which means that these Events are not counted as billable Events even if you restart the historical load for the Pipeline or specific objects at any time.
Flexible Data Replication Options
There are several options that allow you to customize the type and amount of data to ingest from your Source and load to the Destination. Even post-Pipeline creation, you can alter some of these settings to load the data you want. These settings are available during Pipeline configuration, while setting up the Source.
Sync from One or Multiple Databases
If your data is available across multiple databases in your Source, you can configure your Pipeline to load data from one or more of these databases.
Data Deduplication
Hevo deduplicates the data you load to a database Destination based on the primary keys defined in the Destination tables. If primary keys are not defined, data is directly appended. In data warehouses, primary keys may be defined but are not enforced; Hevo circumvents this limitation and ensures that only unique records are loaded to the Destination.
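The upsert semantics described above can be sketched in a few lines of Python. This is an illustration of primary-key deduplication in general, not Hevo's loader; the table, batch, and column names are hypothetical:

```python
# Sketch of primary-key deduplication: an incoming record whose key already
# exists overwrites the earlier version; without a key, data is appended.
# Table and column names are illustrative, not Hevo internals.

def load_with_dedup(existing, incoming, pk=None):
    """Merge an incoming batch into existing rows, deduplicating on pk."""
    if pk is None:
        # No primary key defined: data is directly appended.
        return existing + incoming
    by_key = {row[pk]: row for row in existing}
    for row in incoming:
        by_key[row[pk]] = row  # upsert: the latest version of each key wins
    return list(by_key.values())

table = [{"id": 1, "status": "new"}]
batch = [{"id": 1, "status": "shipped"}, {"id": 2, "status": "new"}]
print(load_with_dedup(table, batch, pk="id"))
# [{'id': 1, 'status': 'shipped'}, {'id': 2, 'status': 'new'}]
```

With `pk=None`, the same call simply appends both batch rows, mirroring the append-only behavior when no primary keys are defined.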
Skip and Include Objects
Hevo provides you object-level control over the data you ingest from SaaS-based and database Sources. All the objects that you do not select to ingest appear as SKIPPED in the Pipeline Objects list. You can include these later, if needed. Similarly, you can skip objects you previously included. You can also skip just the historical data load for an object (if you had chosen to load historical data while creating the Pipeline), while loading all the new and incremental data. When you include an object, Hevo immediately queues it for ingestion, with historical data being ingested first. Read Including Skipped Objects Post-Pipeline Creation to know how you can include and ingest the skipped objects (if any).
Load New Tables with the Same Pipeline
The Include New Tables feature allows you to automatically ingest data from any new table created in the Source, or any deleted table that is re-created, post-Pipeline creation. You can disable this option while creating the Pipeline, but you cannot modify the setting later.
Smart Assist
Hevo Smart Assist is the prompt, preemptive, and smart assistance built into the product that gives you complete visibility into and control over your data while helping you minimize costs. Along with this, Hevo alerts you about your Pipeline, Activation status, data ingestion, or any activity that requires your attention, through Email or third-party applications such as Opsgenie or PagerDuty. You can also use the 24/7 Live Chat support to connect with our support team and get your queries resolved. Read Getting Alerts in Third-Party Applications to know how to enable these integrations.
On-Demand Credit
Hevo enables you to maintain On-Demand Credit to continue loading data without interruption even when your Events quota is exhausted. When the Events in your base plan and any On-Demand Events you have purchased are consumed, the On-Demand Credit is used to procure Events so that your Pipelines are not paused.
You can set the On-Demand Credit limit up to a maximum of 60% of your subscribed plan’s Events.
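For example, under the 60% cap, a plan with a quota of 5 million Events allows an On-Demand Credit limit of at most 3 million Events. A quick check of that arithmetic, with an illustrative plan size:

```python
# Illustrative: the maximum On-Demand Credit limit is capped at 60% of the
# subscribed plan's Events quota. Integer arithmetic avoids float rounding.

def max_on_demand_credit(plan_events):
    """Return the largest allowed On-Demand Credit limit for a plan."""
    return plan_events * 60 // 100

print(max_on_demand_credit(5_000_000))  # 3000000
```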
On-Demand Events
Options such as On-Demand Events help you handle any overages so that your data loads without interruption even when the quota assigned in your plan is consumed. Hevo supports your Pipelines for an additional 24 hours if you have exhausted all quotas, and up to the next working day if this happens over a weekend, so that your business continuity is maintained while you take due action.
Hevo offers a variety of plans to suit the requirements of different scales. You can opt for a Monthly or Annual subscription. Further, you can choose the Events quota in your base plan and meet additional requirements through On-Demand Events and On-Demand Credit, or by upgrading your plan, as you see fit. Hevo also offers a few Sources for free under its Free plan, which comes with a limited Events quota. Any Events you load from these Sources are free as long as the limit is not exhausted; any overages are billed to you.
Observability and Monitoring
Hevo offers you various graphs, counts, and UI indications that provide visibility into the various aspects of the data replication, including:
- Latency and speed of data ingestion and loading
- Billable and historical usage details through graphs and counts
- Successes and failures at each replication stage
- Event failures and resolution assistance
- Imminent Events quota exhaustion and available actions
Hevo lends you full support to recover from any issues at the Source end and keeps retrying the data ingestion. For log-based Pipelines, Hevo restarts the historical load to read from the logs any Events that were not ingested during the downtime. Similarly, if a Destination reports a problem, Hevo retries the data load to ensure no records are lost. Hevo Support also monitors Hevo’s performance to catch any rare issue at our end.
Refer to the following table for the list of key updates made to this page:
|Date|Release|Description of Change|
|---|---|---|
|Dec-19-2022|2.04|Updated section, Smart Assist to mention the third-party applications that you can integrate with Hevo to receive alerts.|
|Dec-07-2022|2.03|Updated section, Skip and Include Objects to mention including skipped objects post-Pipeline creation.|
|Nov-07-2022|NA|Added section, Multiple Workspaces within a Domain. Updated sections, Multi-region Support, Draft Pipelines, Workflows, and On-demand Credit for more clarity and detail.|
|Oct-31-2022|2.00|Added section, Multi-region Support.|