Salesforce

Last updated on Mar 05, 2024

Salesforce is a cloud computing Software as a Service (SaaS) company that allows you to use cloud technology to connect more effectively with customers, partners, and potential customers.

Hevo uses Salesforce’s Bulk API to replicate the data from your Salesforce applications to the Destination database or data warehouse. To enable this, you need to authorize Hevo to access data from the relevant Salesforce environment.

Salesforce Environments

Salesforce allows businesses to create accounts in multiple environments, such as:

  • Production: This environment holds live customer data and is used to actively run your business. A production organization is identified by URLs starting with https://login.salesforce.com.

  • Sandbox: This is a copy of your production organization. You can create multiple sandbox environments for different purposes, such as one for development and another for testing. Working in the sandbox eliminates the risk of compromising your production data and applications. A sandbox is identified by URLs starting with https://test.salesforce.com.


Source Considerations

  • Derived fields (or calculated fields) are fields that derive their value from other fields or formulas. Derived fields are not updated in the Destination during incremental loads even if their value changes due to a change in the formula or the original field.

    In Salesforce, whenever any change occurs in an object, its SystemModStamp timestamp field is updated. Hevo uses this SystemModStamp field to identify Events for incremental ingestion. In the case of derived fields, a change in the formula or the original field does not affect the object’s SystemModStamp value, even though the derived field’s value may change. As a result, such objects are not picked up in the incremental load. However, if another field in the object is updated at the same time, the incremental load also picks up the derived field updates.

    As a workaround, Hevo runs the historical load every 20 days by default for objects containing derived fields. You can contact Hevo Support to change this historical load frequency. You can also restart the historical load for the object manually. If the object was created after Pipeline creation, you need to restart the historical load at the Pipeline level.
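The behavior described above can be illustrated with a minimal Python sketch. The records, field values, and the selection function are all hypothetical; this is not Hevo's implementation, only a model of incremental selection keyed on SystemModstamp:

```python
from datetime import datetime

def incremental_events(records, last_run):
    """Pick records whose SystemModstamp changed since the last run,
    mimicking how Events are identified for incremental ingestion."""
    return [r for r in records if r["SystemModstamp"] > last_run]

# Hypothetical records: record 002's derived field was recalculated by a
# formula change, but Salesforce did NOT update its SystemModstamp.
last_run = datetime(2024, 3, 1)
records = [
    {"Id": "001", "SystemModstamp": datetime(2024, 3, 2)},   # real field edit
    {"Id": "002", "SystemModstamp": datetime(2024, 2, 20)},  # derived-field change only
]

picked = incremental_events(records, last_run)
# Only record 001 is picked up; record 002's derived-field change waits
# for the next historical load.
```

This is why a periodic historical load is needed as a catch-all for derived-field-only changes.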

  • Hevo cannot ingest incremental data for back-dated records, since Salesforce does not update the SystemModStamp column for these records. Hevo uses this column to identify Events for incremental ingestion. As a workaround, you can restart the historical load for the object.

    Back-dated records have a created timestamp that is earlier than the current date. The created timestamp is an audit field, which Salesforce allows customers to edit once. For example, on August 15, you can create a record in Salesforce with a created_ts of July 31.

  • There may be a data count mismatch at the Destination due to records deleted in your Salesforce account. When a record from a replicable object is deleted in Salesforce, its IsDeleted column is set to True, and the record is moved to the Salesforce Recycle Bin, where it is no longer displayed in the Salesforce dashboard. When Hevo replicates data from your Source using either the Bulk APIs or REST APIs, it also replicates records from the Salesforce Recycle Bin to your Destination. As a result, you might see more Events in your Destination than in the Source.
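The count mismatch can be sketched in a few lines of Python. The records and helper functions are hypothetical; the sketch only models the idea that replication reads soft-deleted rows while the dashboard hides them:

```python
def destination_event_count(records):
    """Replication reads every record, including soft-deleted ones
    surfaced from the Recycle Bin."""
    return len(records)

def dashboard_record_count(records):
    """The Salesforce dashboard hides records whose IsDeleted is True."""
    return sum(1 for r in records if not r["IsDeleted"])

# Hypothetical object with one record in the Recycle Bin
records = [
    {"Id": "001", "IsDeleted": False},
    {"Id": "002", "IsDeleted": True},   # deleted; lives in the Recycle Bin
    {"Id": "003", "IsDeleted": False},
]
# The Destination receives 3 Events while the dashboard shows only 2,
# which accounts for the apparent mismatch.
```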

  • If you pause a Pipeline for more than 15 days, Hevo cannot replicate the deleted data, if any, to your Destination. This is because Salesforce retains deleted data in its Recycle Bin for 15 days. Also, Salesforce purges the oldest records in the Recycle Bin every two hours if their count exceeds the limit for your organization. The record limit is 25 times your organization’s storage capacity. Therefore, to correctly capture the deleted data, you must run the Pipeline within two hours of deleting the data in Salesforce.

  • The maximum number of Events that can be ingested per day is calculated based on your organization’s quota of batches.

    Suppose your organization is allocated a quota of 15000 batches per 24 hours, and each batch can contain a maximum of 10000 Events.

    Then, the daily Event consumption is calculated as follows:

    • The number of batches created per Object (X) = Number of Events for the Object/10000.

      Note: This value, X, is rounded up to the next integer.

    • The total number of batches created across all Objects in the Pipeline (Y) = Sum of the number of batches created for each Object (ΣX).

      This number, Y, is the number of batches that are submitted in one run of the Pipeline and may vary across runs. The maximum number of batches allowed in one run is calculated as follows:

      The number of Pipeline runs in a day (Z) = 24/Ingestion frequency (in hours).

      The number of batches that can be submitted in a day = 15000

      Therefore,

      The maximum number of batches that can be submitted in one run of the Pipeline = 15000/Z.

    Example:

    Suppose you have two Objects containing 55800 and 25000 Events respectively, and the ingestion frequency is 12 hours. Then,

    The number of batches created for Object 1 (X1) = 55800/10000 = 5.58.

    Therefore, six batches are created; five with 10000 Events each and the sixth with 5800 Events.

    The number of batches created for Object 2 (X2) = 25000/10000 = 2.5.

    Therefore, three batches are created; two with 10000 Events each and the third with 5000 Events.

    The total number of batches created across all Objects in the Pipeline (Y) = X1 + X2 = 6 + 3 = 9.

    These nine batches are submitted in one run of the Pipeline.

    Now, as the Ingestion frequency is 12 hours,

    The total number of Pipeline runs in 24 hours (Z) = 24/12 = 2.

    And,

    The maximum number of batches that can be submitted in one run of the Pipeline = 15000/2 = 7500.

    Here, against the available limit of 7500 batches per Pipeline run, only 9 batches are being submitted.

    Therefore, as long as Z × Y ≤ 15000, you are within the prescribed daily quota.
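The calculation above can be sketched in Python. The batch size of 10000 Events and the quota of 15000 batches are the example values used here; your organization's actual quota may differ:

```python
import math

BATCH_SIZE = 10000         # maximum Events per batch (example value)
DAILY_BATCH_QUOTA = 15000  # example quota of batches per 24 hours

def batches_for_object(event_count):
    # X: rounded up to the next integer
    return math.ceil(event_count / BATCH_SIZE)

def quota_usage(event_counts, ingestion_frequency_hours):
    y = sum(batches_for_object(n) for n in event_counts)  # batches per run
    z = 24 // ingestion_frequency_hours                   # runs per day
    max_per_run = DAILY_BATCH_QUOTA // z
    within_quota = z * y <= DAILY_BATCH_QUOTA
    return y, z, max_per_run, within_quota

# Worked example above: Objects with 55800 and 25000 Events, 12-hour frequency
y, z, max_per_run, ok = quota_usage([55800, 25000], 12)
# y = 6 + 3 = 9 batches per run, z = 2 runs per day,
# limit of 7500 batches per run, well within the daily quota
```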


Limitations

  • Hevo does not fetch columns of the Compound data type.

  • It is not possible to avoid loading the deleted data. Hevo loads the new, updated, and deleted data from your Salesforce account.



Revision History

Refer to the following table for the list of key updates made to this page:

Date Release Description of Change
Mar-05-2024 2.21 Updated the ingestion frequency table in the Data Replication section.
Apr-04-2023 NA Updated section, Configuring Salesforce as a Source to update the information about historical sync duration.
Mar-23-2023 2.10 Updated section, Data Replication to add information about Hevo being able to ingest only the columns that you specify for an object.
Dec-14-2022 NA Updated section, Limitations to inform users about Hevo loading deleted data.
Nov-23-2022 2.02 Updated section, Source Considerations to add information about Hevo automatically restarting historical load for calculated fields.
Oct-17-2022 1.99 Updated section, Configuring Salesforce as a Source to add information about the Include New objects in the Pipeline feature.
Oct-13-2022 NA Updated section, Source Considerations to inform users about Hevo not ingesting back-dated data for objects during incremental ingestion.
Oct-10-2022 NA Updated section, Configuring Salesforce as a Source to add information about calculated fields.
Aug-24-2022 NA - Updated section, Data Replication to restructure the content for better understanding and coherence.
- Updated section, Configuring Salesforce as a Source to reflect the latest UI changes.
May-11-2022 NA Added a Source consideration about derived fields not getting picked up during incremental loads and the workaround to ingest the associated Events.
Mar-07-2022 NA - Updated and organized the content in the section, Source Considerations.
- Removed the bulk APIs limitation as REST APIs are also supported now.
Jan-07-2022 1.79 Added information about configurable historical sync duration in the Data Replication section.
Jan-03-2022 1.79 Added information about reverse historical load in the Data Replication section.
Oct-25-2021 NA Added the Pipeline frequency information in the Data Replication section.
Oct-04-2021 1.73 Updated the Source Considerations section with an example of calculating quota usage.
Sep-09-2021 NA Updated the Limitations section to remove the limitations around ingestion of the attachment object. Also removed the limitation around ingestion of REST API objects, as this is now supported by Hevo.
Aug-8-2021 NA Added a note in the Source Considerations section about Hevo deferring data ingestion in Pipelines created with this Source.
Jul-26-2021 NA Added a note in the Overview section about Hevo providing a fully-managed Google BigQuery Destination for Pipelines created with this Source.
Feb-22-2021 1.57 Included the setup guide on the Salesforce Source configuration UI.
