Marketo is a marketing automation platform used by B2B and B2C companies to manage and deliver personalized multi-channel programs and campaigns to prospects and customers. Marketo enables companies to curate their raw user data and create programs and focused campaigns for different marketing activities, from lead generation to marketing ROI measurement, across multiple channels.
With the help of Pipelines in Hevo, you can synchronize Marketo with a database or data warehouse Destination to always have access to the latest data, which you can feed into your enterprise BI solution for custom reporting and analysis. Hevo Pipelines use Marketo’s bulk (preferred) and REST APIs to fetch both historical and changed data, which you can replicate to the Destination after performing any necessary transformations on it.
Note: For Pipelines created with this Source, Hevo provides you a fully-managed BigQuery data warehouse Destination if you do not already have one set up. You are only charged the cost that Hevo incurs for your project in Google BigQuery. The invoice is generated at the end of each month and payment is recovered as per the payment instrument you have set up. You can now create your Pipeline and directly start analyzing your Source data. Read Hevo Managed Google BigQuery.
Configuring Marketo as a Source
To configure Marketo as a Source:
Obtain authenticated access credentials for your Marketo instance.
Note: While creating a new role, you must either individually select all Read-Only privileges or provide all Access-API privileges.
Click PIPELINES in the Asset Palette.
Click + CREATE in the Pipelines List View.
In the Select Source Type page, select Marketo.
In the Configure your Marketo Source page, specify the following:
Pipeline Name: A unique name for your Pipeline.
Client ID: Available at the newly created service.
Client Secret: Available at the newly created service.
Endpoint: The base URL used to make all the API calls.
Identity Endpoint: The endpoint used to retrieve access tokens using the Client ID and Client Secret.
Click TEST & CONTINUE.
Proceed to configuring the data ingestion and setting up the Destination.
Note: If IP restriction is enabled in your Marketo account, you must either whitelist Hevo IPs or disable IP restriction to allow Hevo to make API calls.
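Once you have the Client ID, Client Secret, and Identity Endpoint, Marketo issues access tokens via an OAuth `client_credentials` grant against `{Identity Endpoint}/oauth/token`. The following is a minimal sketch of building that token request URL; the endpoint and credential values shown are placeholders, not real ones:

```python
from urllib.parse import urlencode

def build_token_url(identity_endpoint: str, client_id: str, client_secret: str) -> str:
    """Build the Marketo OAuth token request URL (grant_type=client_credentials)."""
    params = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return f"{identity_endpoint.rstrip('/')}/oauth/token?{params}"

# Hypothetical instance and credentials, for illustration only
url = build_token_url(
    "https://012-ABC-345.mktorest.com/identity",
    "my-client-id",
    "my-client-secret",
)
```

A GET request to this URL returns a JSON body containing an `access_token`, which is then passed on subsequent REST and bulk API calls.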
Marketo API Limits
Marketo imposes strict limits on the API calls you can make within a given time frame to retrieve the data. Some of these include:
- Rate Limit: Maximum of 100 API calls per 20 seconds per instance.
- Concurrency Limit: Maximum of 10 concurrent API calls.
- Daily Quota: Up to 50,000 API calls per day, with a maximum export of 500 MB for bulk jobs on paid subscriptions (the quota resets daily at 12:00 midnight CST).
Read more about Marketo API limits.
Hevo Pipelines overcome these limitations by using Bulk APIs to fetch the data.
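A client that calls the Marketo REST API directly must stay under the 100-calls-per-20-seconds rate limit. One common approach, sketched below as an assumption rather than Hevo's actual implementation, is a sliding-window limiter that tracks recent call timestamps and reports how long to wait before the next call:

```python
import time
from collections import deque
from typing import Optional

class SlidingWindowLimiter:
    """Allow at most `max_calls` within any `window_s`-second window."""

    def __init__(self, max_calls: int = 100, window_s: float = 20.0):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls = deque()  # timestamps of recent calls

    def acquire(self, now: Optional[float] = None) -> float:
        """Record a call; return seconds to wait before making it (0 if free)."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window_s:
            self.calls.popleft()
        wait = 0.0
        if len(self.calls) >= self.max_calls:
            # Window is full: wait until the oldest call expires.
            wait = self.window_s - (now - self.calls[0])
        self.calls.append(now + wait)
        return wait
```

Before each API call, the caller would sleep for whatever `acquire()` returns; the concurrency and daily-quota limits would need separate accounting.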
Data Ingestion using Bulk APIs
The Hevo connector uses Bulk APIs by default for all Marketo objects that allow this, namely, Program Members, Activities, and Leads. As a result, Hevo can minimize the number of API calls while maximizing the number of Events fetched per API call, and thereby help you remain within the imposed limits to the extent possible. This becomes especially useful while retrieving historical data.
Bulk APIs in Marketo use the same permissions as the REST APIs; therefore, the job or API type being run is transparent to you except for the API endpoint.
Compared to a REST API, for a bulk extract:
The Hevo connector submits the job for the data you need, with the required metadata, to Marketo.
Marketo queues and runs the job.
Hevo queries for the status intermittently and, when the job completes, makes a single call to fetch the data, extract the input stream, and process the records in order.
See Appendix 1 - Destination Tables for the list of Marketo objects that allow bulk operations.
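The submit-poll-fetch flow above can be sketched as follows. This is an illustrative skeleton, not Hevo's code: the three callables stand in for the Marketo bulk-export API calls (create export job, poll job status, retrieve the result file), and the status strings are assumptions about the job lifecycle:

```python
import time
from typing import Callable

def run_bulk_extract(create_job: Callable[[], str],
                     get_status: Callable[[str], str],
                     fetch_file: Callable[[str], bytes],
                     poll_interval_s: float = 0.0,
                     max_polls: int = 100) -> bytes:
    """Submit a bulk export job, poll until it completes, then fetch the file once."""
    export_id = create_job()                      # submit job with required metadata
    for _ in range(max_polls):
        status = get_status(export_id)
        if status == "Completed":
            return fetch_file(export_id)          # one call retrieves all records
        if status == "Failed":
            raise RuntimeError(f"export {export_id} failed")
        time.sleep(poll_interval_s)               # still queued/processing: wait, retry
    raise TimeoutError(f"export {export_id} did not complete in time")
```

Because the records arrive in one file rather than page by page, the number of API calls stays small regardless of the volume of data extracted.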
Hevo ingests the data from Marketo as follows:
Historical Data: The historical data is ingested starting from 12:00 a.m. UTC (midnight) of the current day, going back one year, a month at a time, in reverse chronological order.
Incremental Data: The incremental data is ingested at the scheduled interval of the Pipeline to fetch new Events.
Data Refresh: The data for the past three months is ingested on every run to ensure that your data is up to date and any data freshness issues are overcome.
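The month-at-a-time, reverse-chronological windowing described for historical loads can be illustrated with a small generator. This is a simplified sketch (it approximates a month as 30 days, which the actual connector may not do):

```python
from datetime import datetime, timedelta, timezone
from typing import Iterator, Tuple

def monthly_windows(months_back: int = 12) -> Iterator[Tuple[datetime, datetime]]:
    """Yield (start, end) UTC windows, newest first, starting at midnight UTC
    of the current day and walking back `months_back` windows."""
    end = datetime.now(timezone.utc).replace(hour=0, minute=0,
                                             second=0, microsecond=0)
    for _ in range(months_back):
        start = end - timedelta(days=30)  # approximate one calendar month
        yield start, end
        end = start                        # next window ends where this one began
```

Each yielded window would map to one bulk-extract job, so the most recent data lands in the Destination first.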
Refer to the table below for the type of data that is fetched for each object.
| Data Type | Object Names | Schedule | Additional Information |
| --- | --- | --- | --- |
| Historical | Activities | During the first run of the Pipeline | |
| Incremental | Activities, Activity Types, Campaigns, Leads, Programs | During each run of the Pipeline | For the Campaigns object, data for the past 12 months is fetched on the first run. |
| Refresh Data | Leads | Every 24 hours post-completion of ingestion | Ensures that leads, opportunity, opportunity roles, and salespersons data is up to date. |
Note: The time taken for the Historical data load is determined by the amount of data and processing time for the ingestion from Marketo.
Schema and Primary Keys
Hevo uses the following schema to upload the records in the Destination:
The following is the list of tables (objects) and their primary keys that are created at the Destination when you run the Pipeline.
If you have selected AutoMapping, Hevo creates the tables in the Destination automatically. Otherwise, you must manually create and map the tables.
Note: The table names are written in lowercase, except for the Snowflake data warehouse tables, which are written in uppercase.
| Table | Primary Key | Parent Object | Bulk API |
| --- | --- | --- | --- |
| lead_activities | marketoGUID | NA | Bulk activity extract |
| leads | leadId | NA | Bulk lead extract |
| program_members | ID | program | Bulk program member extract |

Note: Custom activity types as well as Marketo-provided activity types are loaded into the same table.
Linked objects: Wherever there is a dependency between objects, a job is created only for the higher-level object. The same job also fetches the data for the linked object. However, you can see a separate table in the Schema Mapper for each such linked object. For example, when you fetch Campaigns data, Smart List data is fetched automatically. So, while you see the Smart List table in the Schema Mapper, no job is created by this name.