Edge Release Notes - July 09, 2025
For the complete list of features available for early adoption before they are made generally available to all customers, read our Early Access page.
In this Release
New and Changed Features
Data Ingestion
- Run Incremental and Historical Ingestions in Parallel

  Earlier, only one ingestion job, either incremental or historical, could run at a time, leading to potential delays in data processing. Now, both jobs can run simultaneously, with incremental data loading immediately after the completion of the historical load.
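As a rough illustration of the new behavior, the Python sketch below runs a historical and an incremental job concurrently, buffering incremental changes and applying them only once the historical load completes. All names here are hypothetical, for illustration only, and not Hevo's implementation:

```python
import queue
import threading

def run_parallel_ingestion(historical_rows, incremental_changes):
    """Illustrative sketch: both jobs run at the same time, but incremental
    rows land in the Destination only after the historical load finishes."""
    destination = []
    buffered = queue.Queue()            # incremental changes captured in parallel
    historical_done = threading.Event()

    def historical_job():
        for row in historical_rows:     # one-time full load
            destination.append(row)
        historical_done.set()

    def incremental_job():
        for change in incremental_changes:
            buffered.put(change)        # capture continues while historical runs
        historical_done.wait()          # load only after historical completes
        while not buffered.empty():
            destination.append(buffered.get())

    jobs = [threading.Thread(target=historical_job),
            threading.Thread(target=incremental_job)]
    for job in jobs:
        job.start()
    for job in jobs:
        job.join()
    return destination
```

Because the incremental job waits on the event, historical rows always precede buffered incremental rows in the result, mirroring the ordering described above.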
Data Loading
- Support for Changing the Load Mode at the Object Level

  Introduced the ability to change the load mode for objects during and after Pipeline creation. Previously, the load mode could only be set at the Pipeline level, where it applied to all objects with no option to change it later.

  With this enhancement, you can now change the load mode for individual objects on the Object Configuration page, providing more flexibility and control over how your data is loaded. For more information, read Change the load mode for an object.
Destinations

- Edit Snowflake Key Pair Authentication

  Hevo now supports editing your key pair authentication details, making the connection more secure. Read Modifying Snowflake Destination Configuration.

- Support for Amazon Redshift as a Destination

  Integrated Amazon Redshift as a data warehouse Destination for creating Pipelines. Amazon Redshift enables scalable data storage and high-performance analytics, making it ideal for large-scale data processing and business intelligence.

- Unquoted Identifiers in Table and Column Names in Snowflake

  Added a Pipeline configuration option to manage quoted identifiers. This allows users to enable or disable quotes around table and column names, ensuring compatibility with their existing Snowflake DDL queries.
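This option matters because Snowflake folds unquoted identifiers to uppercase, while double-quoted identifiers preserve their exact case. The helper below is a hypothetical illustration of that resolution rule, not part of Hevo:

```python
def resolve_snowflake_identifier(name: str, quote: bool) -> str:
    """Return an identifier as Snowflake would store and resolve it.

    Unquoted identifiers are folded to uppercase; quoted identifiers
    keep their exact case (standard Snowflake behavior).
    """
    if quote:
        # Escape embedded double quotes, then wrap the name in quotes.
        return '"' + name.replace('"', '""') + '"'
    return name.upper()
```

For example, with quoting disabled, a table created as `order_items` resolves to `ORDER_ITEMS`, which matches unquoted references in existing DDL queries.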
Pipelines
- Managing Alert Recipients

  Enhanced the Alerts system in Hevo Edge, allowing users with administrator and collaborator roles in Hevo to add recipients. Email addresses and Slack channels can be added as recipients and subscribed to Pipelines to receive notifications from alerts that may require their attention.

- Pipeline Configuration Editing

  You can now edit the following fields in your Pipelines:

  - Pipeline Name: Rename Pipelines as required.
  - Load Mode: Switch between Append and Merge modes based on your data handling requirements.
  - Failure Handling Policy: Adjust how failures are managed during Pipeline execution.
  - Schema Evolution: Enable or disable schema changes during data ingestion.
  - WAL Monitoring, SSH, and SSL for Sources: Configure monitoring and secure connections for Source databases.
  - Source and Destination Names: Update Source and Destination configurations as required.
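The difference between the two load modes can be sketched in a few lines of Python. The function below is a hypothetical illustration, not Hevo's loading logic: Append adds every incoming row as-is, while Merge upserts rows on a key:

```python
def load_rows(existing, incoming, mode, key="id"):
    """Illustrative sketch of the two load modes (names are hypothetical).

    Append: every incoming row is added, including duplicates.
    Merge:  incoming rows replace existing rows with the same key,
            otherwise they are inserted.
    """
    if mode == "append":
        return existing + incoming
    if mode == "merge":
        by_key = {row[key]: row for row in existing}
        for row in incoming:
            by_key[row[key]] = row  # upsert on the key column
        return list(by_key.values())
    raise ValueError(f"unknown load mode: {mode}")
```

Merge keeps one row per key at the cost of a key lookup, while Append preserves the full change history, which is why the right choice depends on your data handling requirements.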
- Schema Evolution

  Earlier, adding a new column or object to a Pipeline triggered a drop and load operation, replacing all existing data in the Destination. Now, when a new column or object is added, it is seamlessly integrated into the Destination without dropping any existing data. For changes to existing data, you have the option to re-create the object and reload its historical data. The incremental data for the new column or object is loaded during the next incremental run.
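Conceptually, the new behavior is like adding a column in place rather than rebuilding the table. The minimal sketch below (hypothetical names, not Hevo's implementation) backfills existing rows with a NULL-like default, leaving their data intact:

```python
def add_column(rows, column, default=None):
    """Illustrative sketch: integrate a new column without a drop-and-load.

    Existing rows keep their data and gain the new column with a default
    (NULL-like) value; later incremental runs fill in the real values.
    """
    for row in rows:
        row.setdefault(column, default)
    return rows
```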
- Upgrade: Standard to Edge

  You can now upgrade your Standard Pipelines with a PostgreSQL Source and a Snowflake Destination to Edge. However, this upgrade currently comes with certain limitations:

  - Unsupported Data Types: INTERNAL data types are not supported and are disabled by default.
  - Manual Migration: Guidance from the Hevo support team is required to ensure a smooth transition.
  - Transformations: Pipelines containing transformations cannot be migrated directly. Users will need to recreate such Pipelines in Edge.
  - Data Loading: Existing tables are backed up, truncated, and loaded during the upgrade.
Sources
- Support for Amazon Aurora MySQL as a Source

  Integrated Amazon Aurora MySQL as a Source for creating Pipelines. Amazon Aurora MySQL offers high performance and availability while being cost-effective and easy to manage, making it ideal for reliable data ingestion and replication.

- Support for Azure PostgreSQL

  Integrated Azure PostgreSQL as a Source for creating Pipelines. Azure PostgreSQL is a fully managed, enterprise-ready community PostgreSQL database as a service that can handle mission-critical workloads with predictable performance, security, high availability, and dynamic scalability.

- Support for SQL Server Change Tracking as a Source

  Integrated SQL Server Change Tracking as a Source for creating Pipelines. SQL Server is a relational database management system known for its scalability, security, and performance, making it suitable for a wide range of enterprise applications. The Change Tracking feature in SQL Server enables efficient data replication by tracking and capturing changes made to databases. This reduces the number of queries required to run on the database, optimizing performance.

  The supported variants for this Source are SQL Server Change Tracking and Amazon RDS SQL Server Change Tracking.
- WAL Slot Monitoring in PostgreSQL

  Users can now modify the Write-Ahead Logging (WAL) slot monitoring threshold.
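The retained-WAL figure that such a threshold is compared against can be read from PostgreSQL's standard `pg_replication_slots` catalog view. The query below is standard PostgreSQL; the threshold helper is a hypothetical illustration, not Hevo's monitoring code:

```python
# Standard PostgreSQL catalog query for per-slot WAL lag (PostgreSQL 10+).
WAL_SLOT_LAG_QUERY = """
SELECT slot_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS lag_bytes
FROM pg_replication_slots;
"""

def slot_exceeds_threshold(lag_bytes: int, threshold_gb: float) -> bool:
    """Return True when a slot's retained WAL crosses the alert threshold.

    Hypothetical helper: the threshold unit (GB) is an assumption for
    illustration, not Hevo's actual setting.
    """
    return lag_bytes > threshold_gb * 1024 ** 3
```

Raising the threshold reduces alert noise on bursty workloads; lowering it warns earlier before retained WAL fills the Source's disk.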
- Optimized Large Transaction Handling in Oracle

  Improved the handling of large transactions in Oracle. The latest updates have been tested to support up to 50 million records per transaction.