For Pipelines created using this Source, Hevo provides a fully managed BigQuery data warehouse as a possible Destination. This option remains available until you set up your first BigQuery Destination, irrespective of any other Destinations you may have. With the managed warehouse, you are charged only the cost that Hevo incurs for your project in Google BigQuery. The invoice is generated at the end of each month, and payment is collected through the payment instrument you have set up. You can create your Pipeline and directly start analyzing your Source data. Read Hevo Managed Google BigQuery.
Pardot uses the concept of domains and business units to host the data for all your projects.
- Pardot Domain: Pardot provides the following domains for hosting your different Pardot environments:
  - pi.demo.pardot.com: This domain hosts demo, developer org, and sandbox environments.
  - pi.pardot.com: This domain hosts training and production environments.
- Pardot Business Unit: Business units are separate databases within a Pardot account that hold the data for your prospects, campaigns, and assets. Business units allow you to partition your data by region, product, or service. For example, if your website serves multiple regions, you can set up one business unit for each region.
You can replicate data from any Pardot domain and business unit to the Destination system using Hevo Pipelines. You must create one Pipeline for each business unit you want to replicate data from.
- Click PIPELINES in the Asset Palette.
- Click + CREATE in the Pipelines List View.
- In the Select Source Type page, select Pardot.
- In the Configure your Salesforce Account linked to Pardot page, click + ADD SALESFORCE ACCOUNT LINKED TO PARDOT.

- Select the environment associated with your Salesforce login and click CONTINUE.

- Log in to your Salesforce account.
- Click Allow to authorize Hevo to access your Pardot environment.

You are then redirected to the Configure your Pardot Source page.
- In the Configure your Pardot Source page, specify the following:
  - Pipeline Name: A unique name for your Pipeline.
  - Pardot Domain Name: Select the domain name where your Pardot data is hosted.
  - Pardot Business Unit ID: The unique identifier of the Pardot business unit whose data you want to replicate. It is 18 characters long and begins with the value 0Uv.
  - Historical Sync Duration: The duration for which past data must be ingested. By default, this is All Available Data.
- Click TEST & CONTINUE.
- Proceed to configuring the data ingestion and setting up the Destination.
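Before configuring the Pipeline, you can sanity-check the Business Unit ID described above against its documented shape. The sketch below is a hypothetical helper for illustration only, not part of Hevo; it assumes the 15 characters after the 0Uv prefix are alphanumeric, as with other Salesforce-style IDs:

```python
import re

# Per the field description above: 18 characters, beginning with "0Uv".
# The alphanumeric tail is an assumption based on Salesforce ID conventions.
BUSINESS_UNIT_ID_RE = re.compile(r"^0Uv[A-Za-z0-9]{15}$")

def is_valid_business_unit_id(unit_id: str) -> bool:
    """Return True if unit_id looks like a Pardot Business Unit ID."""
    return bool(BUSINESS_UNIT_ID_RE.fullmatch(unit_id))
```

A quick check like this catches copy-paste truncation before the TEST & CONTINUE step fails.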
Enabling Single Sign-on for the Salesforce User
You must enable Salesforce Single Sign-On (SSO) in Pardot to allow Hevo to access your Pardot data. To do this:
- Log in to your Salesforce account linked to Pardot.
- Click the menu icon in the top-left, then search for and click Pardot to launch the Pardot Lightning App.

Note: If you do not have Pardot Lightning App enabled, then read this.
- Click the Pardot Settings tab.
- In the left navigation pane, under System Emails, click the User Management drop-down, and then click Users.

- Select the user for whom you want to enable SSO.
- Click the gear icon next to the selected user, and in the drop-down, click Edit.

- Scroll down and, from the CRM Username drop-down, select the Salesforce CRM Username of the user for whom you want to enable SSO.

Note: A user's username and email address need not be the same, but the username must be in the form of an email address.
- Click Save user.
Data Replication
| Default Pipeline Frequency | Minimum Pipeline Frequency | Maximum Pipeline Frequency | Custom Frequency Range (Hrs) |
|---|---|---|---|
| 1 Hr | 1 Hr | 24 Hrs | 1-48 |
Note: The custom frequency must be set in hours, as an integer value. For example, 1, 2, or 3, but not 1.5 or 1.75.
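The constraint in the note above can be expressed as a small check. This is a hypothetical helper for illustration, not a Hevo API:

```python
def is_valid_custom_frequency(hours) -> bool:
    """Check a custom Pipeline frequency against the documented rule:
    whole hours only (1, 2, 3, ... not 1.5), in the 1-48 range."""
    # bool is excluded explicitly because it subclasses int in Python.
    return isinstance(hours, int) and not isinstance(hours, bool) and 1 <= hours <= 48
```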
- Historical Data: Once you create the Pipeline, all data associated with the Pardot account is ingested by Hevo and loaded into your Destination.
- Incremental Data: Once the historical load is complete, each subsequent run of the Pipeline ingests only the new and updated data.
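Incremental runs of this kind are typically driven by an updated-since filter. As a rough sketch only: the snippet below builds a query URL using the public Pardot API v4 query endpoint and its updated_after parameter; it is an assumption about how such a fetch could look, not Hevo's actual implementation (and it omits authentication):

```python
from urllib.parse import urlencode

def build_prospect_query_url(domain: str, updated_after: str) -> str:
    """Build a Pardot API v4 query URL that requests only prospects
    updated after the given timestamp (an incremental-style fetch).

    domain is one of the Pardot domains listed above, e.g. pi.pardot.com;
    updated_after is a timestamp string such as "2021-10-01 00:00:00".
    """
    params = {"updated_after": updated_after, "format": "json"}
    return f"https://{domain}/api/prospect/version/4/do/query?{urlencode(params)}"

url = build_prospect_query_url("pi.pardot.com", "2021-10-01 00:00:00")
```

Each run would persist the timestamp of the last successful sync and pass it as updated_after on the next run.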
Source Considerations
- The earliest date from which the data is fetched is 1st Jan, 2007.
Schema and Primary Keys
Hevo uses the following schema to load the records into the Destination:
Data Model
Hevo uses the following data model to ingest data from your Pardot account:
| Object | Description |
|---|---|
| campaign | Contains details of the Salesforce campaigns linked with Pardot. |
| email_click | Records the event details when a link in an email is clicked. |
| list | Contains the list of emails sent. |
| list_membership | Contains information on all the prospects within a list. |
| opportunity | Tracks pending or actual revenue from a sale. |
| opportunity_prospect | A custom table generated by Hevo to maintain data sanity between the opportunity and the prospect objects. |
| prospect | Contains a list of prospects and their details. Prospects are visitors with an associated email address in Pardot. |
| prospect_account | Contains a list of companies and their details. A prospect account allows you to group prospects that work for the same company. |
| tag | Contains a list of tags. Tags are taxonomy terms defined to help with reporting. |
| tag_object | Contains information about the locations where tags are assigned. |
| user | Contains a list of Pardot users. |
| visit | Contains details of each visit made by a visitor. |
| visitor | Contains a list of visitors. Visitors are anonymous users who visit your website. |
| visitor_activity | Contains visitors' activity data. |
| visitor_page_view | Contains details of the pages visited by a visitor during each visit. |
Refer to the following table for the list of key updates made to this page:
| Date | Release | Description of Change |
|---|---|---|
| Oct-25-2021 | NA | Added the Pipeline frequency information in the Data Replication section. |
| Jul-26-2021 | NA | Added a note in the Overview section about Hevo offering a fully managed Google BigQuery data warehouse Destination for Pipelines created with this Source. |