Release Version 2.30.1

Last updated on Nov 25, 2024

This is a minor release and will be integrated into the next major release. At that time, this release note will be merged into the next main release note.

In this release, we have addressed the following issues to improve the usage and functionality of our product features. To see the list of features and integrations we are working on next, read our Upcoming Features page!

The content on this site may have changed or moved since you last viewed it. As a result, some of your bookmarks may become obsolete. Therefore, we recommend accessing the latest content via the Hevo Docs website.

In this Release


New and Changed Features

User Experience

  • Enhanced User Interface for Configuring Databricks as a Destination

    • Updated the user interface for configuring Databricks as a Destination to improve the user experience. Previously, if you used a catalog other than hive_metastore, you had to provide the schema name along with the catalog name in the Schema Name field on the Configuration page. With this update, a separate Catalog Name field has been introduced, allowing you to specify the catalog and schema names independently. This change makes the setup process more intuitive.

      (Screenshot: the Catalog Name field on the Databricks Destination Configuration page)
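      For reference, here is a minimal sketch of how the two values combine into a Databricks three-level identifier. This is illustrative only, not Hevo's implementation; the names sales_catalog, analytics, and orders are hypothetical.

      ```python
      # Illustrative only: how the Catalog Name and Schema Name values combine
      # into a Unity Catalog three-level identifier (catalog.schema.table).
      # "sales_catalog", "analytics", and "orders" are hypothetical names.

      def qualified_table_name(catalog: str, schema: str, table: str) -> str:
          """Build the fully qualified Databricks table name."""
          return f"{catalog}.{schema}.{table}"

      # Previously, a non-hive_metastore catalog had to be typed into the
      # Schema Name field together with the schema; now each has its own field.
      print(qualified_table_name("sales_catalog", "analytics", "orders"))
      # sales_catalog.analytics.orders
      ```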

Fixes and Improvements

Performance

  • Handling Data Loading Issue for Snowflake Destinations

    • Fixed an issue that affected data loading in Snowflake Destinations when frequent schema changes were made to the Destination table, causing delayed data availability and reduced throughput. Because schema changes are not propagated immediately, the ingested data is segregated into files based on the changed and unchanged schemas. The throughput issue occurred because the load job processed these files individually and in sequence, starting with the oldest file.

      After the fix, whenever a load is scheduled, all available files are picked and grouped by their schema. Each group is then loaded to your Snowflake Destination in a separate transaction, thereby increasing the throughput for the Destination table, as sketched below.
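      The following is a minimal sketch of this grouping strategy, not Hevo's actual load job; the helper names and the (schema_version, file_path) representation are assumptions for illustration.

      ```python
      from collections import defaultdict

      def group_files_by_schema(staged_files):
          """Group staged files by the schema version they were written with.

          staged_files: iterable of (schema_version, file_path) pairs.
          """
          groups = defaultdict(list)
          for schema_version, path in staged_files:
              groups[schema_version].append(path)
          return groups

      def run_load(staged_files):
          # One transaction per schema group drains the whole backlog in a few
          # bulk loads, instead of processing each file individually in sequence.
          for schema_version, paths in group_files_by_schema(staged_files).items():
              print(f"BEGIN; COPY {len(paths)} file(s) written with schema "
                    f"{schema_version}; COMMIT;")

      run_load([("v1", "part-0001.csv"), ("v2", "part-0002.csv"),
                ("v1", "part-0003.csv")])
      ```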

Sources

  • Handling Incorrect Offset Management in BigCommerce

    • Fixed an issue in the BigCommerce integration where Hevo was unable to ingest data from the Orders and Products objects during incremental loads. This problem occurred because the offset for these objects was incorrectly set to null if no records were ingested during a poll. As a result, the next poll fetched only records with a timestamp at or later than the current time, skipping records with earlier timestamps. If no records existed at the current time, the offset remained null.

      After the fix, the system correctly sets the offset to the timestamp of the last record retrieved in the previous poll. If no records were retrieved, the offset is set to the timestamp at which the previous poll started. This ensures that no records are skipped and all incremental data is ingested, as sketched below.
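      A minimal sketch of the corrected offset logic follows, assuming records carry a date_modified timestamp and are returned in ascending order; the function name next_offset is hypothetical.

      ```python
      from datetime import datetime, timezone

      def next_offset(records, poll_started_at):
          """Return the offset to persist after a poll.

          records: records retrieved in this poll, in ascending modified order.
          poll_started_at: the timestamp at which this poll began.
          """
          if records:
              # Advance the offset to the last record retrieved.
              return records[-1]["date_modified"]
          # No records retrieved: fall back to the poll start time rather than
          # null, so the next poll never skips earlier-timestamped records.
          return poll_started_at

      poll_start = datetime(2024, 11, 25, 10, 0, tzinfo=timezone.utc)
      print(next_offset([], poll_start))  # offset falls back to the poll start
      ```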

  • Handling Incremental Data Mismatch Issue in AppsFlyer

    • Fixed an issue in the AppsFlyer integration where the API call did not fetch all incremental data for the Daily Reports object. AppsFlyer updates this object every 24 hours, so records may be modified after Hevo has moved the object’s offset. However, in the next Pipeline run, Hevo starts fetching data from the current offset. As a result, any record updated before this offset was missed, leading to a data mismatch between the Source and the Destination.

      After the fix, the Daily Reports object is polled for records updated in the past 48 hours, as sketched below. This may increase the Event consumption of your Pipeline and affect billing. If you observe a mismatch between the Source and Destination data for the Daily Reports object, contact Hevo Support to enable this fix for your new and existing Pipelines.
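      A minimal sketch of the 48-hour lookback window follows; the helper report_window and the window computation are assumptions for illustration.

      ```python
      from datetime import datetime, timedelta, timezone

      LOOKBACK = timedelta(hours=48)  # Daily Reports records may be updated late

      def report_window(now=None):
          """Return the (start, end) window polled for the Daily Reports object.

          Re-fetching the past 48 hours captures records that AppsFlyer updated
          after the offset had already moved past them.
          """
          now = now or datetime.now(timezone.utc)
          return now - LOOKBACK, now

      start, end = report_window()
      print(f"Polling Daily Reports updated between {start:%Y-%m-%d %H:%M} "
            f"and {end:%Y-%m-%d %H:%M} UTC")
      ```

      Re-fetching a fixed window trades some duplicate ingestion (hence the billing note above) for completeness, which is why the fix is enabled on request rather than by default.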
