Frequently Asked Questions

Why do I keep seeing AtlasAuthenticationError: 401: Unauthorized. errors while trying to use astrolabe?

Applications can be granted programmatic access to MongoDB Atlas only by using an API key. If you are seeing 401: Unauthorized errors from MongoDB Atlas, it means that you have either not provided an API key, or that the API key you provided has expired. See the MongoDB Atlas API documentation for instructions on creating programmatic API keys.

You also need a second set of API keys with Atlas global operator permissions, referred to as admin credentials.

astrolabe can be configured to use these API keys in one of two ways:

  • Using the -u/--username and -p/--password command options:

    $ astrolabe -u <publicKey> -p <privateKey> --atlas-admin-api-username <publicKey> --atlas-admin-api-password <privateKey> check-connection
    
  • Using the ATLAS_API_USERNAME, ATLAS_API_PASSWORD, ATLAS_ADMIN_API_USERNAME, ATLAS_ADMIN_API_PASSWORD environment variables:

    $ ATLAS_API_USERNAME=<publicKey> ATLAS_API_PASSWORD=<privateKey> ATLAS_ADMIN_API_USERNAME=<publicKey> ATLAS_ADMIN_API_PASSWORD=<privateKey> astrolabe check-connection
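
    If you run astrolabe repeatedly from the same shell, exporting the variables once avoids repeating them on every invocation; a minimal sketch using the same placeholder keys:

    $ export ATLAS_API_USERNAME=<publicKey>
    $ export ATLAS_API_PASSWORD=<privateKey>
    $ export ATLAS_ADMIN_API_USERNAME=<publicKey>
    $ export ATLAS_ADMIN_API_PASSWORD=<privateKey>
    $ astrolabe check-connection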
    

Why should we use the custom ubuntu1804-drivers-atlas-testing distro?

MongoDB Atlas restricts the number of clusters in an Atlas Project to 25. Since this project runs its entire build matrix in its Evergreen configuration under a single Atlas project, it often runs into this limit, which causes hard-to-diagnose test failures (see #45, #46, and #48). To mitigate this issue, we need to limit the number of concurrent builds in the drivers-atlas-testing Evergreen project to less than 25. Evergreen does not currently have a way to enforce such a limit directly, so we have instead created this custom distro, which is limited to 25 hosts. While not foolproof, this workaround avoids the aforementioned failures in most usage scenarios.
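
For reference, a build variant is pinned to a distro in Evergreen via the run_on field of the project configuration; a minimal sketch, with a hypothetical variant name:

    buildvariants:
      - name: my-build-variant    # hypothetical variant name
        run_on:
          - ubuntu1804-drivers-atlas-testing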

What is the purpose of the ASTROLABE_EXECUTOR_STARTUP_TIME environment variable?

This test framework’s architecture (see Architecture Overview) delegates to astrolabe the responsibilities of invoking the workload executor and starting the planned Atlas maintenance. Since the workload executor runs in a subprocess, there is no straightforward way for astrolabe to ascertain whether a workload executor is still initializing or is, in fact, already running operations against the test cluster. This limitation, combined with a workload executor that is particularly slow to start, can result in failures such as:

  • The maintenance completes and astrolabe terminates the workload executor before it can run any operations.

  • The workload executor does not start running operations until cluster maintenance is already underway.

To avoid this situation, users can set the ASTROLABE_EXECUTOR_STARTUP_TIME environment variable to a value greater than the number of seconds their workload executor takes to start. When this value is set, astrolabe waits ASTROLABE_EXECUTOR_STARTUP_TIME seconds before starting the maintenance plan, thereby avoiding the failure modes described above.
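
For example, if your workload executor takes up to ten seconds to initialize, you could export the variable before invoking astrolabe (the value here is illustrative):

    $ export ASTROLABE_EXECUTOR_STARTUP_TIME=10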