Hevo can load data from any of your pipelines into an Amazon Redshift data warehouse. In this document, we walk through the steps to add Redshift as a destination.
- The database user must have CREATE and USAGE privileges on the destination schema.
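For example, assuming a user named `hevo_user` and the default `public` schema (both names are illustrative, not prescribed by Hevo), these privileges can be granted in Redshift with:

```sql
-- Illustrative only: replace hevo_user and public with your own
-- database user and destination schema.
GRANT USAGE ON SCHEMA public TO hevo_user;
GRANT CREATE ON SCHEMA public TO hevo_user;
```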
1. Add Destination
A destination can be added either while creating a pipeline, or by heading to the DESTINATIONS tab in the left navigation and clicking the ADD DESTINATION button.
2. Select destination type
In the right pane, click the Select Destination Type drop-down and select Amazon Redshift.
3. Fill in connection details
- Destination Name: A unique name for this destination.
- Database Host: The IP address or DNS name of your Redshift host.
- Database Port: The port on which your Redshift server is listening for connections (default is 5439).
- Database User: A user with a non-administrative role in the Redshift database.
- Database Password: The password for the database user.
- Database Name: Name of the destination database where the data is to be loaded.
- Database Schema: Name of the destination database schema (default is public).
- If you want Hevo to connect to the destination through an SSH server, see How to Connect through SSH. Otherwise, you must whitelist Hevo's IP addresses for the database port; these are highlighted on the screen. For example, in this case you would whitelist the following IP addresses:
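As a reference, the fields above might be filled in as follows. Every value here is a hypothetical example, not a real endpoint or credential:

```python
# Hypothetical connection details for illustration only; substitute
# your own host, credentials, database, and schema.
connection = {
    "destination_name": "Redshift Warehouse",  # any unique name
    "host": "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
    "port": 5439,           # Redshift's default listener port
    "user": "hevo_user",    # a non-administrative database user
    "password": "<password>",
    "database": "dev",      # destination database
    "schema": "public",     # destination schema (public by default)
}
```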
4. Test connection
After filling in the details, click the Test Connection button to test connectivity with the destination Redshift server.
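If the connection test fails, a quick way to confirm that the Redshift endpoint is reachable from a given network (for example, to verify your IP whitelisting) is a plain TCP probe. This is an independent sketch using Python's standard library; `can_reach` is a name chosen here, not part of Hevo or Redshift:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the hostname and opens a TCP socket.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, timeouts, and refused connections.
        return False

# Example (values are placeholders):
# can_reach("examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com", 5439)
```

A False result points to a network-level problem (DNS, security group, or whitelisting) rather than bad credentials.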
5. Save destination
Once the test is successful, click Save Destination to save the destination.