Amazon Redshift

Hevo can load data from any of your pipelines into a Redshift data warehouse. In this document, we will walk through the steps to add Redshift as a destination.

Prerequisites

  • The database user must have CREATE and USAGE privileges on the Redshift schema you wish to connect to.
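
If you need to grant these privileges, the sketch below shows one way to do it. It is a minimal example, assuming the psycopg2 driver, an administrative session, and placeholder names (hevo_user for the database user, public for the schema, and an example cluster endpoint); substitute your own values.

```python
import psycopg2

# Placeholder connection details for an administrative session; replace them
# with your own cluster endpoint, database, and credentials.
admin_conn = psycopg2.connect(
    host="your-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="warehouse",
    user="admin_user",
    password="admin_password",
)
admin_conn.autocommit = True

with admin_conn.cursor() as cur:
    # CREATE lets the user create objects in the schema; USAGE lets it
    # access objects in the schema. Both are listed in the prerequisites.
    cur.execute("GRANT CREATE ON SCHEMA public TO hevo_user")
    cur.execute("GRANT USAGE ON SCHEMA public TO hevo_user")

admin_conn.close()
```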

Steps

1. Add Destination

A destination can be added either while creating a pipeline or by heading to the DESTINATIONS tab in the left navigation and clicking the ADD DESTINATION button.

2. Select destination type

In the right pane, click the Select Destination Type drop-down and select Amazon Redshift.

3. Fill in connection details

  • Destination Name: A unique name for this destination.
  • Database Host: The IP address or DNS name of the Redshift host.
  • Database Port: The port on which your Redshift server is listening for connections (default is 5439).
  • Database User: A non-administrative user of the Redshift database.
  • Database Password: Password of the user.
  • Database Name: Name of the destination database where the data will be loaded.
  • Database Schema: Name of the destination database schema (default: public).
  • If you want to connect to Hevo using an SSH tunnel, see How to Connect through SSH. Otherwise, you must whitelist Hevo’s IP addresses as listed here.
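
Before entering these values in Hevo, you can optionally sanity-check them from a machine that can reach the cluster. The sketch below is a minimal example, assuming the psycopg2 driver and placeholder values for the host, database, schema, user, and password; it connects with the same details and confirms the CREATE and USAGE privileges from the prerequisites.

```python
import psycopg2

# Placeholder values; replace them with the details you plan to enter in Hevo.
conn = psycopg2.connect(
    host="your-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="warehouse",
    user="hevo_user",
    password="your_password",
    connect_timeout=10,
)

with conn.cursor() as cur:
    # Check the connecting user's privileges on the destination schema.
    cur.execute(
        "SELECT has_schema_privilege('public', 'CREATE'), "
        "has_schema_privilege('public', 'USAGE')"
    )
    print(cur.fetchone())  # expected: (True, True)

conn.close()
```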

4. Test connection

After filling in the details, click the Test Connection button to test connectivity with the destination Redshift server.
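
If the test fails, a quick way to separate network issues (for example, Hevo’s IP addresses not being whitelisted or a restrictive security group) from credential issues is to check basic reachability of the host and port. The sketch below uses only the Python standard library and a placeholder host name; note that it tests reachability from your own machine, while Hevo connects from its own IP addresses, so whitelisting may still be required even if this check succeeds.

```python
import socket

host = "your-cluster.abc123.us-east-1.redshift.amazonaws.com"  # placeholder
port = 5439  # default Redshift port

try:
    # Attempt a plain TCP connection to the Redshift endpoint.
    with socket.create_connection((host, port), timeout=10):
        print("Host and port are reachable; re-check credentials and privileges.")
except OSError as exc:
    print(f"Could not reach the host: {exc}")
```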

5. Save destination

Once the test is successful, save the destination by clicking Save Destination.