Hevo can load data from any of your pipelines into an Amazon Redshift data warehouse. This document walks through the steps to add Redshift as a Destination.
- The database user must have CREATE and USAGE privileges on the Redshift schema you wish to connect to.
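The required privileges can be granted by an administrator. A minimal sketch of the statements involved, with `hevo_user` and `my_schema` as placeholder names (not from the Hevo docs):

```python
# Render the GRANT statements the Hevo database user needs on the target
# schema. "hevo_user" and "my_schema" below are placeholder identifiers.
def redshift_grants(user: str, schema: str) -> list[str]:
    return [
        f"GRANT CREATE ON SCHEMA {schema} TO {user}",
        f"GRANT USAGE ON SCHEMA {schema} TO {user}",
    ]

for stmt in redshift_grants("hevo_user", "my_schema"):
    print(stmt + ";")
```

Run these statements in the Redshift console (or any SQL client) as a superuser before configuring the Destination.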
1. Add Destination
A Destination can either be added while creating a pipeline, or by heading to the DESTINATIONS tab in the left navigation and clicking the ADD DESTINATION button.
2. Select Destination type
In the right pane, click the Select Destination Type drop-down and select Amazon Redshift.
3. Fill in connection details
- Destination Name: A unique name for this Destination.
- Database Host: The IP address or DNS name of your Redshift host.
- Database Port: The port on which your Redshift server is listening for connections (default is 5439).
- Database User: A non-administrative user of the Redshift database.
- Database Password: The password of the database user.
- Database Name: Name of the Destination database into which the data is to be loaded.
- Database Schema: Name of the Destination database schema (default: public).
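The fields above map directly onto a standard PostgreSQL-style connection string, since Redshift speaks the PostgreSQL wire protocol. A sketch, assuming a driver such as psycopg2 is available; the host, user, and database names are placeholders:

```python
# Build a libpq-style DSN from the connection details described above.
# All values shown are placeholders, not real credentials.
def redshift_dsn(host: str, user: str, password: str,
                 dbname: str, port: int = 5439) -> str:
    return (f"host={host} port={port} dbname={dbname} "
            f"user={user} password={password}")

dsn = redshift_dsn(
    "examplecluster.abc123.us-east-1.redshift.amazonaws.com",
    "hevo_user", "secret", "dev",
)
# import psycopg2
# conn = psycopg2.connect(dsn)  # verifies the same details Hevo will use
print(dsn)
```

Verifying these details with a direct client connection first can save a round of failed Test Connection attempts in Hevo.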
4. Specify additional settings
- Sanitize Table/Column Names?: Enable this option to replace all non-alphanumeric characters and spaces in table and column names with underscores (_). Read Name Sanitization.
- Connect through SSH: Enable this option to have Hevo connect to your Redshift server through an SSH tunnel. Read Connecting Through SSH. Otherwise, whitelist Hevo's IP addresses provided here.
5. Configure advanced settings
- Populate Loaded Timestamp: Enable this option to append the __hevo_loaded_at column to the Destination table to indicate the time when the Event was loaded to the Destination. See Loading Data to a Data Warehouse for more information.
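When this option is enabled, the `__hevo_loaded_at` column can be queried like any other column, for example to inspect recently loaded rows. A sketch, where `orders` is a placeholder table name:

```python
# Example query against the __hevo_loaded_at column that Hevo appends.
# "orders" is a placeholder Destination table name.
query = """
SELECT *
FROM orders
WHERE __hevo_loaded_at > DATEADD(hour, -1, GETDATE())
"""
print(query)
```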
6. Test connection
After filling in the details, click the Test Connection button to test connectivity with the Destination Redshift server.
Once the test is successful, click Save Destination to save the Destination.
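If Test Connection fails, a quick way to isolate network issues from credential issues is to check whether the Redshift port is reachable at all. A sketch using only the standard library; the host name is a placeholder, and this checks reachability only, not credentials:

```python
import socket

# Return True if a TCP connection to host:port succeeds within the
# timeout. A False result points to networking (security groups,
# whitelisting) rather than the database credentials.
def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoint; substitute your cluster's host and port:
# port_open("examplecluster.abc123.us-east-1.redshift.amazonaws.com", 5439)
```

If this returns False, revisit the SSH/IP-whitelisting settings from step 4 before re-testing in Hevo.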