Hevo can load data from any of your pipelines into a BigQuery data warehouse. This document walks through the steps to add BigQuery as a destination.
1. Add Destination
A destination can be added either while creating a pipeline, or from the DESTINATIONS tab in the left navigation by clicking the ADD DESTINATION button.
2. Select destination type
In the right pane, click the Select Destination Type drop-down and select Google BigQuery.
3. Authorise Hevo to read and write data
Click the Authorise button to grant Hevo permission to read and write data to your BigQuery datasets.
4. Fill in connection details
- Destination Name: A unique name for this destination.
- Project ID: The Project ID of your BigQuery instance.
- Dataset Name: Name of the dataset to which you want to sync your data.
- GCS Bucket: Bucket where files will be staged before being uploaded to BigQuery.
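These fields follow Google Cloud's naming rules, so a quick local pre-check can catch typos before you run the connection test. The sketch below is illustrative only (the helper name and simplified regexes are assumptions, not part of Hevo or Google's APIs), and the rules shown are slightly simplified versions of Google Cloud's documented constraints.

```python
import re

# Simplified Google Cloud naming rules (assumptions for illustration):
# - Project IDs: 6-30 chars; lowercase letters, digits, hyphens;
#   must start with a letter and must not end with a hyphen.
# - BigQuery dataset names: letters, digits, underscores; up to 1024 chars.
# - GCS bucket names: 3-63 chars; lowercase letters, digits, dots,
#   hyphens, underscores; must start and end with a letter or digit.

PROJECT_ID_RE = re.compile(r"^[a-z][a-z0-9-]{4,28}[a-z0-9]$")
DATASET_RE = re.compile(r"^[A-Za-z0-9_]{1,1024}$")
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$")


def validate_destination(project_id: str, dataset: str, bucket: str) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    if not PROJECT_ID_RE.match(project_id):
        problems.append(f"invalid project ID: {project_id!r}")
    if not DATASET_RE.match(dataset):
        problems.append(f"invalid dataset name: {dataset!r}")
    if not BUCKET_RE.match(bucket):
        problems.append(f"invalid GCS bucket name: {bucket!r}")
    return problems
```

For example, `validate_destination("my-project-123", "analytics", "hevo-staging")` returns an empty list, while a dataset name containing a hyphen would be flagged.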
5. Test connection
After filling in the details, click the Test Connection button to verify connectivity with the BigQuery instance.
6. Save destination
Once the test succeeds, save the destination by clicking Save Destination.