Release Version 2.28.1
This is a minor release and will be integrated into the next major release. At that time, this release note will be merged into the next main release note.
In this release, we have addressed the following issues to improve the usability and functionality of our product features. To see the features and integrations we are working on next, read our Upcoming Features page!
The content on this site may have changed or moved since you last viewed it. As a result, some of your bookmarks may become obsolete. Therefore, we recommend accessing the latest content via the Hevo Docs website.
In this Release
Hevo Edge Pipelines (Early Access)
We are excited to announce the release of Hevo Edge, which brings significant enhancements across reliability, performance, observability, and cost efficiency. Here is what’s new:
Key Features (Edge)
Pipeline Reliability and Control
- Enhanced CDC Connector: Experience faster and more reliable data capture that reduces delays in replicating data. This keeps your Pipelines up to date, providing seamless synchronization between your Source and Destination.
- Enhanced Error and Schema Handling: New Pipeline-level controls allow for more flexible and predictable error handling and schema evolution, which is helpful for handling a variety of replication scenarios.
- Failure Handling: Failure handling options ensure consistency between the Source and Destination.
- Alerts on Job Failures: Alerts are triggered when jobs fail, ensuring immediate awareness and timely action to resolve issues.
Pipeline Observability
- Detailed Job Execution Insights: Filter and view granular details of each job run, including duration and end-to-end latency, for a better understanding of performance.
- Access Session Logs for Jobs: Quickly download session logs for individual jobs, which provide detailed insights into job execution and facilitate troubleshooting.
- Display Object Offset and Latency: You can now track the offset of your objects to monitor data sync progress and identify latency between the Source and Destination.
- Source Monitoring: Track PostgreSQL Write-Ahead Logging (WAL) disk usage with built-in monitoring and alerts. This feature helps you prevent disk capacity issues before they impact Pipeline performance.
Pipeline Performance
- Up to 4x faster performance for loading historical data.
- 10x speed improvements for incremental data runs.
- Predictable performance with an isolated runtime for each job.
Cost Efficiency
- Predictable Destination Loads: Deterministic Destination loads lead to predictable warehouse costs.
- No Metadata Query Costs: Avoid the costs incurred by metadata queries.
- Accurate Data Types: Deterministic data type inference prevents unnecessary data type changes at the Destination, minimizing data processing overhead and ensuring consistency.
- Cost Savings with Append-Only Mode: The append-only mode provides substantial cost savings compared to the traditional merge method.
With Hevo Edge, we are providing a more robust and efficient platform, designed to elevate the reliability and performance of your data integration processes.
Request Early Access to Edge Pipelines and be among the first to leverage Hevo’s cutting-edge capabilities!
Limitations (Edge)
Pipelines
- Limited Edit Functionality: Currently, in the Pipeline configuration, you can only edit the objects, fields, and sync frequency. More advanced edit options are not yet available.
- WAL Slot Monitoring Threshold: You cannot modify the Write-Ahead Logging (WAL) slot monitoring threshold. To disable it or make any changes, you must contact Hevo Support.
- Object Limit: Each Pipeline currently supports up to 25,000 objects. If your Pipeline has more than 25,000 objects, you must contact Hevo Support.
- Historical Job Progress: Currently, historical sync jobs remain in progress without displaying any updates or statistics on the job details page until they are completed.
- Standard Pipelines Migration Not Supported: Currently, existing Standard Pipelines cannot be migrated to Edge Pipelines.
- Delayed Incremental Ingestion: Incremental ingestion starts only after the historical load is complete for all tables in the Pipeline. This process may take considerable time, during which the Write-Ahead Log (WAL) slot size can increase significantly. You can enable WAL monitoring to avoid database downtime.
- Case-Sensitive Identifiers in Snowflake: Currently, Edge Pipelines create case-sensitive tables and columns in Snowflake. To avoid errors, you must use quoted identifiers when querying data in Snowflake.
- High Latency During Data Spikes: The polling mechanism ingests all available data from the logs at the start of each poll before loading data to the Destination. This can result in high end-to-end (E2E) latency when the Source database experiences large data spikes.
- Features Not Supported:
  - Custom mapping of fields.
  - Python-based Transformations.
  - Loading data at a specific time.
  - Existing SQL and dbt™ Models are not compatible.
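Because Snowflake folds unquoted identifiers to uppercase, unquoted names will not match the case-sensitive tables and columns Edge Pipelines create. A minimal sketch of building a quoted query, where the table and column names are hypothetical examples:

```python
# Sketch: quoting Snowflake identifiers so they match the case-sensitive
# names Edge Pipelines create. The table and column names are hypothetical.
def quote_ident(name: str) -> str:
    """Double-quote a Snowflake identifier, escaping embedded quotes."""
    return '"' + name.replace('"', '""') + '"'

# Unquoted, Snowflake would fold these to CUSTOMERID and ORDERS and fail
# to find the case-sensitive objects the Pipeline created.
query = f"SELECT {quote_ident('customerId')} FROM {quote_ident('orders')};"
print(query)  # SELECT "customerId" FROM "orders";
```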
Sources - PostgreSQL
- Unsupported Data Types: Currently, the geometry and geography data types are not supported for Sources.
- Read Replicas: PostgreSQL does not support logical replication on read replicas.
Upcoming Features (Edge)
- New Sources: Support for MySQL and Amazon RDS Oracle as Sources will be introduced. This integration will allow you to seamlessly connect to, extract, and replicate data from these databases.
- New Destination: Integration with Amazon S3 as a Destination. This feature lets you replicate your data directly into Amazon S3, leveraging its scalable storage for further analysis.
- 5-Minute Schedule: Pipelines will support scheduling syncs as frequently as every 5 minutes.
- Improved Job Progress Visibility: Historical jobs will show progress updates, offering better observability during job execution.
- Alerts: Ability to configure alert preferences, subscribe to Pipeline alerts created by another user, and view alerts in the dashboard.
- Public API Availability: Manage your Pipelines programmatically using the public APIs, which provide create, read, update, and delete operations for your data Pipelines.
- Set Data Replication Type: Ability to configure the replication process to skip historical data and replicate only incremental data to the Destination, streamlining data sync and reducing processing time.
- Support for Key Pair Authentication in Snowflake: This feature allows you to connect to your Snowflake data warehouse Destination using a key pair for authentication, making the connection more secure. With this method, you provide a private key instead of a database password while configuring your Snowflake Destination.
New and Changed Features
Sources
- Updated Default Ingestion Frequency for Amazon Ads, Facebook Ads, Google Ads, Microsoft Ads, and Twitter Ads
  The default ingestion frequency for Pipelines created with the Amazon Ads, Facebook Ads, Google Ads, Microsoft Ads, and Twitter Ads Sources has been updated to 12 hours. This change helps manage resources effectively when high volumes of data are being ingested from these Sources.
Fixes and Improvements
Destinations
- Handling Infinite Schema Refresh Attempts for All Destinations
  Fixed an issue where Hevo repeatedly triggered the schema refresh job every 15 minutes for all Destinations, leading to excessive queries and higher costs for users when the refresh job failed. Now, after 10 consecutive failures, the system pauses the schema refresh job for 24 hours before trying again.
  This fix is currently implemented only for teams in the AU (Australia) region to monitor its impact. Based on the results, it will be deployed to teams across all regions in a phased manner and does not require any action from you.
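The back-off behavior described above can be sketched as follows. This is an illustrative model only, not Hevo's implementation; the class and method names are hypothetical:

```python
# Illustrative sketch (not Hevo's actual code): retry a 15-minute recurring
# job, but pause it for 24 hours after 10 consecutive failures.
from datetime import datetime, timedelta

RETRY_INTERVAL = timedelta(minutes=15)
MAX_CONSECUTIVE_FAILURES = 10
PAUSE_DURATION = timedelta(hours=24)

class RefreshScheduler:
    def __init__(self):
        self.consecutive_failures = 0

    def next_run(self, now: datetime, last_run_succeeded: bool) -> datetime:
        """Return when the next schema refresh should run."""
        if last_run_succeeded:
            self.consecutive_failures = 0
            return now + RETRY_INTERVAL
        self.consecutive_failures += 1
        if self.consecutive_failures >= MAX_CONSECUTIVE_FAILURES:
            self.consecutive_failures = 0  # start fresh after the long pause
            return now + PAUSE_DURATION
        return now + RETRY_INTERVAL
```

A success at any point resets the failure count, so only an unbroken run of 10 failures triggers the 24-hour pause.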
Pipelines
- Handling of Error Messages in Pipelines
  Fixed an issue where users other than the authorized user within the same team encountered an error while changing the Pipeline schedule, even though the scheduling was successful. With this fix, Hevo no longer verifies the authorized user when the Pipeline schedule is updated, eliminating unnecessary error messages.
Sources
- Handling Data Mismatch Issues in Facebook Ads
  Fixed an issue where the API call to fetch the refresher data returned incorrect data when users were in a time zone other than UTC. This issue occurred because the refresher job compared UTC time with the user's local time to calculate the offset, leading to data being retrieved for only one day instead of 30 days. With this fix, the refresher job now accurately calculates the offset, ensuring correct data in the Destination.
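This class of bug can be sketched as follows; the function name is hypothetical and the code only illustrates computing the lookback window entirely in UTC, not Hevo's implementation:

```python
# Illustration of the bug class described above: mixing UTC with local
# wall-clock time skews a lookback window by the user's UTC offset.
# Computing both ends of the window in UTC avoids the problem.
from datetime import datetime, timedelta, timezone

LOOKBACK = timedelta(days=30)

def refresh_window_utc(now_utc: datetime):
    """Correct approach: compute the 30-day window entirely in UTC."""
    return now_utc - LOOKBACK, now_utc

now = datetime(2024, 11, 1, 12, 0, tzinfo=timezone.utc)
start, end = refresh_window_utc(now)
assert end - start == LOOKBACK  # exactly 30 days, regardless of time zone
```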
- Handling of Data Ingestion Issues in HelpScout Source
  Fixed an issue where data ingestion was getting stuck, preventing new data from being retrieved. This issue occurred when all records on a page had the same timestamp and the page had reached its size limit, preventing the page number from incrementing. With this fix, the page number is now correctly incremented based on the page size, ensuring all records are processed.
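The fix can be illustrated with a page-number cursor. Here, `fetch_page` and the page size are hypothetical stand-ins for the HelpScout API, not Hevo's actual connector code:

```python
# Sketch: advancing by page number rather than by record timestamp. If every
# record on a full page shares one timestamp, a timestamp cursor never moves;
# a page counter always does. fetch_page is a hypothetical API stand-in.
def fetch_all(fetch_page, page_size=100):
    """fetch_page(page) -> list of records for that 1-indexed page."""
    page, records = 1, []
    while True:
        batch = fetch_page(page)
        records.extend(batch)
        if len(batch) < page_size:  # short or empty page means we are done
            return records
        page += 1  # increment by page, independent of record timestamps

# Simulated source: 250 records that all share the same timestamp.
data = [{"id": i, "updated_at": "2024-01-01T00:00:00Z"} for i in range(250)]
pages = lambda p: data[(p - 1) * 100 : p * 100]
assert len(fetch_all(pages)) == 250
```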
- Handling Tasks Object Failure in Onfleet Source
  Fixed an issue where Hevo was unable to ingest data from the Tasks object due to a missing API endpoint in Onfleet's system. With this fix, Hevo has updated the API endpoint to ensure seamless data ingestion. Additionally, pagination has been implemented to optimize the data retrieval process.
  This fix applies to all new Pipelines created after Release 2.28.1, only for teams in the US and US2 (United States) regions, to monitor its impact. Based on the results, it will be deployed to teams across all regions in a phased manner and does not require any action from you for new Pipelines. To enable this fix for existing Pipelines, contact Hevo Support.