Reasons for Event Failures
On the Pipeline Overview page, you can view the list of failed Event Types for an object, the failure reason, and the number of Events failing for that reason.
The Event failures can be classified as:
Failures that you must resolve and replay manually. For example, if there are errors in your transformation code, you must fix the code, click DONE next to the failure message, and then click the More icon and click Replay.
Failures that you must resolve but that are auto-replayed by Hevo. For example, as soon as you correct the schema mapping error for an Event, Hevo immediately queues it up for replay. For some of these errors, either you or Hevo may replay the Event. For example, if an Event fails due to insufficient disk space, Hevo automatically replays it every three hours, assuming that you have fixed the issue. However, you can manually increase the disk space allocation and immediately replay the Event without waiting for Hevo.
Transient failures that are automatically resolved by Hevo. For example, Hevo may park some Events as Failed Events to reduce the data transfer load when some internal thresholds are reached. Similarly, an Event may fail and an error code may be generated for it, but if Auto Mapping is enabled, Hevo automatically fixes the issue and dismisses the error. These failures are not displayed in the Pipeline Activity page.
Unanticipated failures. These are individually investigated and resolved by Hevo.
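The classification above hinges on whether a given failure is auto-replayed. A minimal Python sketch of that decision, assuming a lookup keyed by failure code (the dictionary and function names are hypothetical, not Hevo's API; the behaviors mirror the failure reasons documented on this page):

```python
# Hypothetical sketch: deciding replay behavior from a failure code.
# The True/False values mirror the documented "Auto-replayed?" column.
AUTO_REPLAYED = {
    "CODE_SHORT_VARCHAR_COLUMN": True,
    "CODE_UNMAPPED_FIELD": True,
    "CODE_DEST_FIELD_NOT_FOUND": True,
    "CODE_DESTINATION_FAILED": True,
    "CODE_DESTINATION_LAGGING": True,
    "CODE_DESTINATION_PAUSED": True,
    "CODE_DATATYPE_MISMATCH": False,
    "CODE_INVALID_SCRIPT": False,
    "CODE_NO_PARTITION_KEY": False,
}

def needs_manual_replay(code: str) -> bool:
    """Return True when the user must fix and replay the Event manually."""
    # Unknown codes default to manual handling so nothing is silently dropped.
    return not AUTO_REPLAYED.get(code, False)
```

Even for auto-replayed codes, the underlying issue may still need your intervention first (for example, resizing a column) before the replay can succeed.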
The following table provides an exhaustive list of the different Event failure reasons in Hevo. Read Resolving Event Failures for the methods you can use to resolve these failures.
| Failure Reason | Description | Error Message | Auto-replayed? |
| --- | --- | --- | --- |
| CODE_DATATYPE_MISMATCH | The expected and actual data types of one or more fields do not match, or the data type is different from what was received in the past.<br>Example: A 64-bit Integer value is received for a 32-bit Integer field, or a String value is found in a Boolean field. | Data type mismatch for fields. Please correct the field value via a transformation or update the column type in Destination. | No |
| CODE_DATA_VALIDATION_FAILED | A value has failed validation as per a field's constraints.<br>Example: A null value in a non-nullable field, or a String with 256 characters in a Char(128) field. | Data validation failed for fields. Please correct the field value via transformation or update the column type in Destination. | No |
| CODE_SHORT_VARCHAR_COLUMN | The column size originally set is too short. In the case of Redshift, Hevo auto-corrects this.<br>For example, varChar fields of 64 characters were received earlier, while newer Events have fields of 128 characters. If the varChar column is not expanded to accommodate these values, an error is thrown. | Please resize the column in the Destination table. | Yes |
| CODE_SHORT_VARCHAR_KEY_COLUMN | The column size originally set is too short. This error occurs in cases where Hevo does not auto-resize the column, even for Destinations where auto-resizing is supported. This applies to specific columns that are keys, such as primary keys and dist or sort keys. | Please expand the column in the Destination table.<br>For databases, lock the table and expand the column.<br>For data warehouses:<br>- Copy the column to a new column of the required length.<br>- Drop the old column.<br>- Rename the new column to the original column name. | |
| CODE_UNIDENTIFIED_TYPE | Hevo is unable to identify the data type of the field. This should rarely happen. | Could not determine data type for fields. Please correct or drop the field via transformations. | No |
| CODE_INVALID_SCRIPT | The transformation code has thrown an error. | Encountered errors in transformation code. Please correct the transformation script. | No |
| CODE_FAILED_TRANSFORM_VALUE | There is an error in the transformation script. | Failed to transform values for fields. Please correct the transformation script. | No |
| CODE_UNMAPPED_FIELD | A field in an Event Type is not mapped to a Destination table column.<br>To fix this error, deselect such fields (to skip them) or map them to columns through the Schema Mapper. | Fields not mapped to Destination columns: <column_name>. | Yes |
| CODE_DEST_FIELD_NOT_FOUND | One or more Destination columns specified in the Schema Mapper are not present in the Destination table.<br>To fix this error, create these columns in your Destination table or un-map the fields in the Schema Mapper. If Auto Mapping is enabled, Hevo creates the columns in the Destination table. | Columns: <column_name> not found in Destination table. Please add the columns in the Destination table or ignore them via schema mapper. | Yes |
| CODE_INVALID_TRANSFORMED_VALUES | If Auto Mapping is enabled, this error is handled by Hevo. If you are seeing this error, you need to resolve the transformation errors. | Invalid values found in transformation for fields: <field_name>. Contact support. | No |
| CODE_UNMAPPED_SCHEMA | Evaluated whether Auto Mapping is on or off for the schema. This error occurs when:<br>- The Event Type is not mapped to a Destination table. For example, the Pipeline is created and Events have started coming in, but the process that creates the Auto Mapping has not yet completed. Post-mapping, a replay of the Events is triggered automatically.<br>- A Destination table specified in the Schema Mapper is not physically present in the Destination. To fix this error, create this table in your Destination or un-map the table in the Schema Mapper.<br>If Auto Mapping is on, the Events are not shown to users as failed Events, as Hevo auto-replays them. | Event Type not mapped to a Destination table. | No |
| CODE_OVER_MAX_FIELDS | If the number of fields in the Source schema of an individual Event, or in the aggregated schema of all the Events ingested so far, exceeds a threshold, the Events are deferred. This is typically done because either the Destination does not support that many fields, or Hevo preemptively treats this as a sign of incorrect fan-out. The threshold depends on the Destination:<br>- Redshift: 1594<br>- Snowflake: 4090<br>- BigQuery: 4090<br>- Postgres: 1594<br>- MySQL: 4090<br>- MSSQL: 1018 | Source Event Type has more than <%d> fields. Please remove the fields that are not required via transformations and reset the Event Type from Schema Mapper. | No |
| CODE_NOT_NULL_VALUES_MISSING | Hevo has encountered null values for non-null columns in a Destination table. | Values for columns <column_name> in table: <table_name> are incompatible. Please update the values via transformations or update the data type of the destination column. | No |
| CODE_NO_PARTITION_KEY | No partition key(s) are configured for an Event Type in file-based Destinations, such as S3. | No partition key found. Please add a partition key in Schema Mapper. | No |
| CODE_DESTINATION_FAILED | Hevo periodically evaluates whether the Destination is reachable and the credentials provided are still valid. If either check fails, the Destination is marked as Failed and Events cannot be written to it. | Not able to connect to Destination. | Yes |
| CODE_DESTINATION_MAX_FIELDS | Same as CODE_OVER_MAX_FIELDS (detection happens at the Destination). | Destination doesn't allow more than <%d> columns in a table. Please remove the columns that are not required via transformations or ignore them in Schema Mapper. | No |
| CODE_DESTINATION_LAGGING | Applies to JDBC-based Destinations, where Write actions are slower than Read actions, causing a large number of Events to accumulate in Kafka. To avoid loss of data once data limits are reached, a number of Events are sidelined; as the lag clears and space becomes available, these Events are included again for processing. | There is a large amount of data pending at the Destination. Estimated processing time is: <%d> minutes. Events will be played automatically in some time. | Yes |
| CODE_DESTINATION_EVENT_TOO_LARGE | The size of the Event being written to the Destination is too large (while the fields are within limits). | Event is too large for the Destination. Skip a few large columns and contact Support to fix this. | No |
| CODE_INSUFFICIENT_PRIVILEGES | You do not have Write privileges on the Destination or Destination tables. | Insufficient privileges while writing to table <table_name>. Encountered error: <error_text>. | No |
| CODE_CONSTRAINT_VIOLATION | There are constraints on the Destination table because of which data cannot be written.<br>For example, a Unique constraint is present on the Destination table and Hevo is trying to write data with the same keys again. | Constraint violation while writing to table <table_name>. Encountered error: <error_text>. | No |
| CODE_DATA_TOO_LARGE_FOR_COLUMN | The size of the data exceeds the column size. | <error_text>. Please create a column of larger capacity type and map it in Schema Mapper. | No |
| CODE_INCORRECT_DESTINATION_CONFIG | There is something incorrect in the Destination setup.<br>For example, the GCS bucket and BigQuery locations may be incompatible, or there may be insufficient disk space left in the warehouse. | Destination configuration is incorrect: <error_text>. Please update the Destination configuration. | No |
| CODE_DESTINATION_SINK_FILE_NOT_FOUND | This error is displayed if Hevo does not have permissions to read from a file store bucket (needed for copying to the Destination), or you have deleted the files from the bucket before Hevo could copy them (leaving dangling record references with Hevo).<br>This error is encountered very infrequently. | One or more files to be loaded to the Destination were not found in the staging bucket. The files were either removed from the bucket or your Destination doesn't have access to read them. Contact Support for more details. | No |
| CODE_DESTINATION_PAUSED | The Destination is marked as Paused, so all Events are on hold. Once the Destination is marked as Active, Events are processed normally. Hevo checks this every 5 minutes. | Destination has been paused. Contact Support for more details. | Yes |
| CODE_TOO_MANY_PARTITIONS | In the case of S3 Destinations, if the partition strategy you chose is too granular, too many partitions may be created too quickly, and as a result, Events fail and are deferred. Currently, the upper limit is 1K partitions per 100K Events. | Too many partitions created. Change the partition key from Schema Mapper. | No |
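The CODE_OVER_MAX_FIELDS check described above can be sketched as a simple threshold comparison. The per-Destination limits below come from the table; the function and dictionary names are hypothetical, not Hevo internals:

```python
# Illustrative sketch of the CODE_OVER_MAX_FIELDS check.
# Values are the documented per-Destination field limits.
MAX_FIELDS = {
    "redshift": 1594,
    "snowflake": 4090,
    "bigquery": 4090,
    "postgres": 1594,
    "mysql": 4090,
    "mssql": 1018,
}

def over_max_fields(destination: str, field_count: int) -> bool:
    """Return True when an Event's schema exceeds the Destination's field limit.

    field_count is the field count of an individual Event's schema, or of
    the aggregated schema of all Events ingested so far.
    """
    return field_count > MAX_FIELDS[destination]
```

Note that the aggregated schema grows over time, so a Pipeline can trip this threshold long after creation even if each individual Event stays small.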
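The copy/drop/rename workflow suggested for CODE_SHORT_VARCHAR_KEY_COLUMN in data warehouses can be sketched as generated DDL. This is a hypothetical helper; the SQL uses generic warehouse syntax and will need adapting to your specific Destination:

```python
def expand_varchar_column(table: str, column: str, new_length: int) -> list[str]:
    """Generate generic DDL for the copy/drop/rename column-expansion workflow.

    Hypothetical helper for illustration only; exact ALTER TABLE syntax
    varies by Destination (Redshift, Snowflake, BigQuery, etc.).
    """
    tmp = f"{column}_tmp"
    return [
        # 1. Copy the column to a new column of the required length.
        f"ALTER TABLE {table} ADD COLUMN {tmp} VARCHAR({new_length});",
        f"UPDATE {table} SET {tmp} = {column};",
        # 2. Drop the old column.
        f"ALTER TABLE {table} DROP COLUMN {column};",
        # 3. Rename the new column to the original column name.
        f"ALTER TABLE {table} RENAME COLUMN {tmp} TO {column};",
    ]
```

Because the column is a key (primary, dist, or sort key), check your warehouse's documentation for whether key properties must be re-declared after the rename.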
Refer to the following table for the list of key updates made to this page:

| Date | Release | Description of Change |
| --- | --- | --- |
| Sep-21-2022 | NA | Updated the page overview to organize content for better clarity. |