MongoDB Atlas

After you have selected MongoDB Atlas as the Source for creating the Pipeline, provide the connection settings and data replication details listed here on the Configure Your Source page. You can fetch the database settings from your MongoDB Atlas account.


Perform the following steps to configure your MongoDB Atlas Source:

Retrieve Database Settings

Perform the following steps to retrieve your MongoDB Atlas database settings:

  1. Access the MongoDB Atlas console and select the project whose data you want to replicate.

  2. Click Clusters in the left navigation bar.


  3. Click CONNECT.

  4. Click Connect with the mongo shell.

  5. Ensure that the mongo shell version is greater than 3.6, and click COPY to copy the connection string.

  6. Copy the database host name and the user name.

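If you prefer, the user name and host can also be extracted programmatically from the copied connection string. A minimal sketch in Python, using only the standard library (the cluster hostname and user name below are hypothetical placeholders, not real credentials):

```python
from urllib.parse import urlparse

def parse_connection_string(uri: str):
    """Extract the user name and host from a mongodb:// or mongodb+srv:// URI."""
    parsed = urlparse(uri)
    return parsed.username, parsed.hostname

# Hypothetical connection string copied from the Atlas CONNECT dialog.
uri = "mongodb+srv://hevo_user@cluster0.ab1cd.mongodb.net/test"
user, host = parse_connection_string(uri)
print(user)  # hevo_user
print(host)  # cluster0.ab1cd.mongodb.net
```

These are the two values you will later enter as Database User and Database Host in Hevo.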

Set up Permissions to Read MongoDB Atlas Databases

Regardless of whether you have selected OpLog or Change Streams as the Pipeline mode, you must grant the Hevo user the permissions required to read from the relevant databases, as follows:

  1. In your MongoDB Atlas console, click Database Access, and then click Add New Database User.

  2. Select the Authentication Method as Password.

  3. In the Password Authentication section, provide a username and password for the new user. You can skip this if you are editing an existing user.

  4. In the Database User Privileges drop-down, select Grant Specific Privileges from the Advanced Privileges list.

  5. Grant the following access privileges. Click + Add Another Role to set up access for multiple databases.

    • read access to the local database.

    • read access to the databases you want to replicate.

    • readAnyDatabase access to the admin database if you want to load all databases.

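The privileges granted in step 5 correspond to a list of MongoDB role assignments. The sketch below builds that list as data, mirroring MongoDB's role-document format; the database names `orders` and `inventory` are hypothetical examples:

```python
def build_roles(databases, load_all=False):
    """Build the role list matching the privileges described above."""
    # read access to the local database is always required.
    roles = [{"role": "read", "db": "local"}]
    if load_all:
        # readAnyDatabase on admin, if you want to load all databases.
        roles.append({"role": "readAnyDatabase", "db": "admin"})
    else:
        # read access to each database you want to replicate.
        roles.extend({"role": "read", "db": db} for db in databases)
    return roles

print(build_roles(["orders", "inventory"]))
# [{'role': 'read', 'db': 'local'}, {'role': 'read', 'db': 'orders'}, {'role': 'read', 'db': 'inventory'}]
```

In the Atlas UI, each entry in this list corresponds to one row added via + Add Another Role.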

Whitelist Hevo’s IP Addresses

Access your MongoDB Atlas console and:

  1. Click Network Access.

  2. In the IP Whitelist tab, click Add IP Address.


  3. Provide the Hevo IP address you want to whitelist.


    Note: To provide all IPs with access, enter 0.0.0.0/0.

  4. Click Confirm.
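To sanity-check whether a given IP address is covered by your whitelist entries, you can test the CIDR ranges with Python's standard `ipaddress` module. The addresses below are documentation placeholders, not Hevo's actual IPs:

```python
import ipaddress

def is_whitelisted(ip: str, entries):
    """Return True if the IP falls inside any whitelisted CIDR entry."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(entry) for entry in entries)

# 0.0.0.0/0 matches every IPv4 address, i.e. access from anywhere.
print(is_whitelisted("203.0.113.7", ["203.0.113.0/24"]))  # True
print(is_whitelisted("198.51.100.1", ["203.0.113.0/24"]))  # False
print(is_whitelisted("198.51.100.1", ["0.0.0.0/0"]))       # True
```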

Configure MongoDB Atlas Connection Settings

Perform the following steps to configure MongoDB Atlas as a Source in Hevo:


  • Pipeline Name: A unique name for the Pipeline.

  • Provide the following database settings fetched from the MongoDB Atlas account:

    • Database Host: The DNS name of your MongoDB cluster, fetched from your MongoDB Atlas account. No additional information needs to be provided in this field, irrespective of your MongoDB configuration.

    • Database User: The authenticated user that can read the collections in your database. Read Setting up Permissions to Read MongoDB Atlas Databases.

    Note: It is recommended that only read-only permissions be provided to the user.

    • Database Password: The password for the database user.

  • Connection Settings:

    • Connect through SSH: Enable this toggle option to connect to Hevo using an SSH tunnel, instead of directly connecting your MongoDB Atlas database host to Hevo. This provides an additional level of security by not exposing your MongoDB setup to the public. Read Connecting Through SSH.

    If this option is disabled, you must whitelist Hevo’s IP addresses to allow Hevo to connect to your MongoDB host.

  • Advanced Settings:

    • Load All Databases: If enabled, Hevo fetches data from all your databases on the selected host. If disabled, provide the Database Name to fetch data from.

      Note: You can separate multiple database names with a comma.

    • Merge Collections: If enabled, collections with the same name across different databases are merged into a single Destination table. If disabled, separate tables are created, prefixed with the respective database name. See Example - Merge Collections Feature.

    • Load Historical Data: If disabled, Hevo loads only the data written to your database after the Pipeline is created. If enabled, the entire table data is fetched during the first run of the Pipeline.
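The effect of the Merge Collections setting on Destination table names can be illustrated with a small sketch. The database and collection names are hypothetical, and the underscore separator used for the prefix is an assumption for illustration:

```python
def destination_table(database: str, collection: str, merge_collections: bool) -> str:
    """Compute the Destination table name for a Source collection."""
    if merge_collections:
        # Same-named collections across databases land in one merged table.
        return collection
    # Otherwise, a separate table is created, prefixed with the database name.
    return f"{database}_{collection}"

print(destination_table("store_us", "orders", merge_collections=True))   # orders
print(destination_table("store_eu", "orders", merge_collections=True))   # orders (same table)
print(destination_table("store_us", "orders", merge_collections=False))  # store_us_orders
```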


Last updated on 09 Mar 2021