Amazon S3

Hevo can load data from any of your pipelines into an S3 location. In this document, we will look at the steps to add S3 as a destination.

Prerequisites

The AWS key you provide must have the PutObject permission on both the S3 bucket and the location prefix that you specify in the steps below.
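Before adding the destination, you can sanity-check that the key can write under the bucket and prefix. The sketch below is a minimal check using Python and boto3; the access key, bucket name, prefix, and region shown are placeholders, not values from this document.

```python
# Minimal permission check: attempt a PutObject under the prefix with boto3.
# All identifiers below (key, bucket, prefix, region) are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",           # the Access Key ID you will give Hevo
    aws_secret_access_key="<secret-key>",  # the corresponding Secret Access Key
    region_name="us-east-1",               # your bucket's region
)

# If the key lacks PutObject on the bucket/prefix, this raises an AccessDenied error.
s3.put_object(
    Bucket="my-hevo-bucket",
    Key="hevo/permission-check.txt",
    Body=b"permission check",
)
print("PutObject succeeded")
```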

Steps

1. Add Destination

A destination can be added either while creating a pipeline or by going to the DESTINATIONS tab in the left navigation and clicking the ADD DESTINATION button.

2. Select destination type

In the right pane, click the Select Destination Type drop-down and select S3.

3. Fill in connection details

  • Destination Name: A unique name for this destination.
  • Access Key ID: AWS access key.
  • Secret Access Key: The Secret key corresponding to the access key.
  • Bucket Name: The name of the S3 bucket you want to load your data into.
  • Prefix: The location prefix under which your data should be written.
  • Bucket Region: The AWS region where your bucket is located.
  • File Format: The format in which you want your data to be written. The options are as follows:
    • JSON: Hevo will write each event as one JSON object per line (JSON Lines format); see the example after this list. For more information, refer to http://jsonlines.org/.
    • ORC: Hevo will write the data in the ORC file format so that you can plug it into your Hadoop or Hive workloads. For more information, refer to https://orc.apache.org/.
  • Should files be GZipped?: If this option is enabled, Hevo will GZip the files before writing them to S3.
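If you choose the JSON format with GZip enabled, each file Hevo writes is a gzipped stream of newline-delimited JSON objects. The following sketch, assuming Python with boto3 and a hypothetical bucket name and object key, shows how such a file can be read back for verification.

```python
# Read back a gzipped JSON Lines file written to S3 (JSON format + GZip enabled).
# The bucket name and object key below are placeholders for illustration.
import gzip
import json

import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-hevo-bucket", Key="hevo/events/part-0000.jsonl.gz")

# Decompress the object body, then parse one JSON object per line.
for line in gzip.decompress(obj["Body"].read()).decode("utf-8").splitlines():
    event = json.loads(line)
    print(event)
```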

4. Test connection

After filling in the details, click the Test Connection button to test connectivity and permissions for your S3 destination.

5. Save destination

Once the test is successful, save the destination by clicking on Save Destination.

Note

Since S3 is a file-based destination, Hevo gives you the option to partition your data. To understand how data partitioning works for S3, please refer to this document.