
Guide to Migrating to Atlas

  • Migration Prerequisites
  • Pre-Migration Setup
    • A. Create your Atlas organization and project(s)
    • B. Connect your Atlas organization to the source mLab account
    • C. Configure payment method (not required for free databases)
  • Migrating a Specific Deployment
    • E. Initiate the migration process for a specific deployment
    • F. Complete the tasks in the migration wizard
  • Post-Migration Atlas Configuration Review
  • Reference
    • Sizing the Target Atlas Cluster
    • Operational Limitations of the Atlas M0, M2, and M5 tiers
    • Migration Process
      • mongodump/mongorestore procedure
      • Live Migration process

The mLab service has been fully shut down since January 12, 2021. All mLab users have already migrated to MongoDB Atlas. As such, this documentation is no longer applicable.

This is a general guide to migrating from mLab to Atlas. You will review migration prerequisites, create an Atlas organization, and then connect it to your mLab account in order to access the mLab->Atlas migration tool.

We also have a Migration FAQ as well as demo videos and an abbreviated guide to migrating a Sandbox Heroku add-on.

Migration Prerequisites

Ensure minimum required versions:

  • Driver version
    • Your application should be using a driver that's compatible with at least MongoDB version 3.6 (driver compatibility reference).
      • If you are using Parse Server, versions older than 2.7.2 should be upgraded to at least Parse Server 2.8.4.
      • If you are using Meteor, versions older than 1.6.2 should be upgraded to at least Meteor 1.8.3.
      • If you are using Sitecore XP, you should ensure that you're on at least Sitecore XP 8.2.7 (learn more).
    • If the target Atlas cluster will be on shared resources (M0, M2, or M5) we recommend that your driver be compatible with an even higher version, MongoDB version 4.2, although this is not a requirement since 3.6-series drivers have been tested against MongoDB 4.2.
  • Database version
    • Your mLab deployment must be running on MongoDB version 3.6. This should already be the case since MongoDB version 3.4 was de-supported in January 2020 (a quick shell check follows this list).
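
If you'd like to double-check the database version yourself, a quick check from the mongo shell connected to your mLab deployment is sketched below; the exact patch version shown will differ.

    db.version()                          // e.g. "3.6.23" -- should report a 3.6.x release
    db.serverBuildInfo().versionArray     // e.g. [ 3, 6, 23, 0 ]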

Review key differences between mLab and Atlas:

  • Atlas prices differently than mLab in that clusters, backups, data transfer, and support are priced separately.
  • The Atlas M0 (free) tier does not support Atlas backups.
  • Upgrades from the Atlas M2 and M5 (shared) tiers to the Atlas M10+ (dedicated) tiers require 7-9 minutes of downtime (no change in connection string).
  • The Atlas M2 and M5 (shared) tiers are running MongoDB 4.2, which removes support for some commands and methods. Commands like group and geoNear must be replaced with the $group and $geoNear aggregation stages (see the sketch after this list).
  • Atlas does not have a Data API. You must switch to a self-hosted mLab Data API before migrating to Atlas (see FAQ).
  • Atlas currently does not support archiving backups to custom S3 buckets but it does have an API for programmatically accessing backups (see FAQ). Note that Atlas backups are in the form of database files and not mongodumps.
  • Atlas does not support sharing of EBS Snapshots but it does have an API for programmatically accessing backups (see FAQ).
  • Atlas restricts the ability to use certain database commands, some of which were allowed on mLab.
  • Atlas supports Analytics Nodes using replica set tags instead of using hidden nodes.
  • Atlas servers always run with requireSSL and only accept TLS/SSL encrypted connections.
  • Atlas supports (and defaults to) the mongodb+srv:// connection string format (see FAQ).
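
As a hedged illustration of the group/geoNear change mentioned above, the sketch below rewrites both deprecated commands as aggregation stages; the collection and field names (orders, places, status, amount, location) are hypothetical.

    // Deprecated "group" command expressed as a $group aggregation stage
    db.orders.aggregate([
      { $group: { _id: "$status", total: { $sum: "$amount" } } }
    ])

    // Deprecated "geoNear" command expressed as a $geoNear aggregation stage
    // (requires a 2dsphere index on "location"; $geoNear must be the first stage)
    db.places.aggregate([
      { $geoNear: {
          near: { type: "Point", coordinates: [ -73.99, 40.72 ] },
          distanceField: "distance",
          spherical: true
      } }
    ])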

Ensure that all recurring query patterns are well-indexed and that your deployment is healthy on mLab:

  • If your deployment is on a Shared or Dedicated plan, visit the "Slow Queries" tab to view and build the indexes recommended by mLab's Slow Queries Analyzer. We strongly recommend building indexes for all recurring query patterns (a minimal example follows this list). Email support@mlab.com if you have any questions about the recommendations or recommendation notes.
  • After building indexes, wait 6-12 hours to get a fresh "Slow Queries" report.
  • If all recurring query patterns are well-indexed and you see a very large number of slow operations reported on the "Slow Queries" tab, email support@mlab.com for advice.
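
For reference, a minimal mongo shell sketch of building an index for a recurring query pattern and confirming the query uses it follows; the collection and field names are hypothetical.

    // Build an index matching a recurring query pattern (background build on MongoDB 3.6)
    db.orders.createIndex({ customerId: 1, createdAt: -1 }, { background: true })

    // Confirm the query uses the new index rather than a collection scan
    db.orders.find({ customerId: 123 }).sort({ createdAt: -1 }).explain("executionStats")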

Consider enabling SSL on your mLab deployment before migrating:

  • If the target Atlas cluster will be on dedicated resources (M10 or above) AND if your mLab deployment is not located in one of Atlas' six Live Migration regions—AWS us-east-1 (N. Virginia), AWS us-west-2 (Oregon), AWS eu-west-1 (Dublin), AWS eu-central-1 (Frankfurt), AWS eu-west-2 (London), AWS ap-southeast-2 (Sydney)—we recommend enabling SSL on mLab before migrating.
  • If your deployment is hosted in one of the above Live Migration regions, traffic will stay within the cloud provider's local network.

Immediately before starting to migrate a specific mLab deployment to Atlas:

  • Ensure that you've completed the Pre-Migration Setup.

Pre-Migration Setup

A. Create your Atlas organization and project(s)

If you will be using Atlas on behalf of a company, note that you will only need a single Atlas organization. Unlike mLab, MongoDB Atlas provides the ability to have multiple Organization Owners and to assign fine-grained privileges to users.

  1. Visit https://mongodb.com/cloud/atlas/signup and register for an Atlas account.
    • The email address that you sign up with will become the username that you use to log in.
  2. Ensure that you are on the Project view by visiting https://cloud.mongodb.com.
  3. (Optional) Rename the default organization.
    • If you will be using this account on behalf of a company, rename it to the name of the company.
    • You will be able to change this later so don't worry about what you choose.
  4. (Optional) Rename the default project.
    • We recommend creating separate projects for your various environments (e.g., production, test, development). We advise against mixing development and production clusters within the same project since they share security-related resources such as database users and IP whitelists.
    • Common project names are "<your app's name> Production" and "<your app's name> Development".
    • You will be able to change this later so don't worry about what you choose.
    • You'll later be able to manage access to your project(s).
  5. (Optional) Create multiple projects within your Atlas organization.
    • Read about what having multiple projects within an organization enables you to do and think about whether and how you want your clusters spread across multiple projects.
    • Importantly, note that Atlas clusters within a given Atlas project share the same database users and whitelisted IP addresses.

B. Connect your Atlas organization to the source mLab account

In order to use the migration tool that was custom-built for migrations from mLab to Atlas, you'll need to create a connection between the source mLab account (the account from which you want to migrate deployment(s)) and the target Atlas organization.

If you have multiple mLab accounts that belong to the same company, note that you can first connect the target Atlas organization to one source mLab account. Then at any point you can disconnect and then connect the same target Atlas organization to a different source mLab account. There are no restrictions on the number of times you can disconnect and connect to different source mLab accounts.

Steps:

  1. Log in to the target Atlas organization as an Atlas Organization Owner.
    • Only the Organization Owner will be able to establish a connection to the source mLab account. However, this root-level access is not required for migrating; any Atlas user that is a Project Owner of the target Atlas project can migrate a specific deployment from mLab.
  2. Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
  3. Click on the Organization Settings icon next to the Organizations menu (the gears icon). img-atlas-org-settings.
  4. Click on the green "Connect to mLab" button.

  5. Log in to the source mLab account from the same browser as the mLab Admin User.
    • Heroku Add-on Users: Review our documentation on Heroku on how to log into your Heroku-provisioned mLab account.
  6. In mLab's UI, review the text presented by the "Authorize MongoDB Atlas" form and click "Authorize" to complete the connecting of your mLab account with your Atlas organization.

The "mLab Account" link in the left navigation pane should at present exist highlighted, and you lot should run across the "mLab Account" view. This view lists your mLab deployments on the "Deployments" tab and lists your mLab account users on the Account Users tab. This view makes it easier to invite your mLab account users to your Atlas arrangement and to migrate your mLab deployment(due south) to Atlas.

If y'all've navigated away from this "mLab Account" view y'all can return back at any time past navigating to the Organisation Habitation view (click on the green MongoDB leaf icon in the upper-left corner) then clicking on the "mLab Account" link.

C. Configure payment method (not required for free databases)

In order to create a for-pay Atlas cluster (M2 or above), you will need to first configure a payment method.

mLab's service and MongoDB Atlas' service are completely separate, which means that even if you already have a credit card on file at mLab, you'll need to supply it to MongoDB Atlas as well.

Configure Credit Card or PayPal

This is required for most for-pay customers.

Steps:

  1. Log in to the target Atlas organization.
  2. Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
  3. From the top navigation select "Billing".
  4. Click the "Edit" button in the "Payment method" panel.
    • View documentation about configuring a payment method.

Enter Activation Code

This is only applicable to customers who have executed a MongoDB Order Form (i.e., entered into a contract with MongoDB).

Steps:

  1. Log in to the target Atlas organization.
  2. Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
  3. From the left navigation select "Billing".
  4. Click the "Apply Credit" button and enter the activation code that you received via email.
    • View documentation on applying an activation code.

D. (Optional) Invite the source mLab account's users to your Atlas organization and grant project access

After the connection between the source mLab account and the target Atlas organization has been established, you will be able to see a list of the account users which exist on the source mLab account and have the opportunity to invite a given mLab user's email address to your Atlas organization.

Unlike with mLab, a given email address (and username) on Atlas can be associated with many Atlas organizations.

Steps:

  1. Navigate to the "mLab Account" view.
    • From the left navigation click "mLab Account".
  2. Select the "Business relationship Users" tab.

  3. Open the "Invite mLab User to Arrangement" dialog.
    • Locate the user that you want to invite to the target Atlas organization.
    • Click the ellipses (…) button for that user.
    • Click "Invite User".
  4. In the dialog, click "Send Invitation".
    • The dialog will close and Atlas will send an email to that user's email address inviting them to the Atlas organization. The subject of this email will start with "Invitation to MongoDB Cloud".
    • If this email address is new to Atlas (i.e., isn't associated with an existing Atlas user), when the invitation is accepted, this email address will become the username for the new Atlas user. This new Atlas user will only be able to log into Atlas and not mLab.
    • Otherwise when the invitation is accepted, this user will now see your Atlas organization associated with their profile.
  5. Ensure that the correct target project is selected.

  6. Grant access to the target project.
    • View Atlas documentation on managing project access and project roles.
    • The Atlas user that initiates a migration into a target Atlas cluster needs to have "Project Owner" access.

Migrating a Specific Deployment

After reviewing the migration prerequisites detailed above and completing the pre-migration setup steps, perform the following steps to migrate a specific mLab deployment.

E. Initiate the migration process for a specific deployment

The steps in the migration wizard are different depending on whether you are migrating to the Atlas shared tier (M0, M2, or M5) or the Atlas dedicated tier (M10 or above). This is because there are two different migration processes.

For example, you will only see the "Test Migration" step if you are migrating to an Atlas shared-tier cluster (M0, M2, or M5), which uses a mongodump/mongorestore procedure. However, you'll still be able to test the Live Migration process when you migrate to an Atlas dedicated-tier cluster (M10 or above).

Steps:

  1. Ensure that you are on the "mLab Account" view and seeing a list of your mLab deployment(s). If you are not on this view:
    • Ensure that you are logged in to the target Atlas organization.
    • Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
    • Navigate to the Organization Home view (click on the green MongoDB leaf icon in the upper-left corner).
    • From the left navigation select "mLab Account".
  2. Locate the mLab deployment that you want to migrate to Atlas.
    • Click the ellipses (…) push button for that deployment.
    • Click "Configure Migration" to open the migration wizard for the given deployment.

F. Complete the tasks in the migration wizard

Migration wizard tasks include:

  • Setting the target Atlas project
  • Configuring database users (read details)
  • Configuring an IP Whitelist (read details)
  • Setting the target Atlas cluster (read about sizing the Atlas cluster)
    • Select the "Create nearly equivalent new cluster" choice in the driblet-down menu in order to take the migration tool automatically create the most equivalent (or larger) Atlas cluster.
  • Testing connectivity from all applications (read details)
  • Migrating your information and indexes (read details)
  • Instructions on how to delete the source mLab deployment from mLab's UI (read details)

Below is an example of what the migration wizard looks like. Note that the order and tasks will vary depending on the characteristics of the mLab deployment being migrated.

img-atlas-migration-checklist

Post-Migration Atlas Configuration Review

G. Select a support plan for your Atlas organization

By default Atlas organizations include only a Basic Support plan, which does not include database support or a response time SLA. Unlike on mLab, in order to get support for your database you need to purchase an Atlas support plan separately. You will not be able to open a ticket with Atlas Support unless you have a support plan.

When you migrate from mLab to any for-pay Atlas cluster, you will be presented with the option to select a support plan. The price of an Atlas support plan is determined by monthly usage costs, which are the sum of cluster (VM/disk), data transfer, and backup costs.

Our intent is that your all-in Atlas costs do not exceed what you're paying at mLab. As such, when you migrate your source mLab deployment to Atlas, the migration tool will present you with an option to select a support plan.

As part of the migration process many customers will be offered an Atlas support plan at a significant discount. If you select a support plan through this process, note that as long as you stay with this support plan the pricing (in terms of a % uplift with a minimum) will never change. However, if you modify your support plan or change the way you pay Atlas (e.g., enter into an annual contract) it would potentially be subject to change.

Developer Plan: Atlas Developer

If you are in development or are running a non-critical application, the Atlas Developer plan is a great choice. This plan has been designed to provide in-depth technical support but with slower response times.

Premium Plan: Atlas Pro

If your application(s) have demanding uptime and/or performance requirements, we highly recommend the Atlas Pro plan. This plan has been designed to provide high-touch, in-depth technical support for advanced issues such as performance tuning, as well as rapid response times for emergencies, 24x7.

img-atlas-support-plans

If you have questions please email support@mlab.com for help.

H. Customize the backup policy for your Atlas cluster (M10+)

We recommend reviewing the backup schedule and retention policy for your target Atlas cluster.

Be aware that:

  • By default Atlas clusters are configured to retain 31 backups (a combination of hourly, weekly, monthly, and annual snapshots) whereas by default mLab clusters are configured to retain just 8 backups. The number of retained backups can significantly impact pricing, especially on Azure which doesn't yet support incremental disk snapshots (where a new snapshot saves only the data that changed after your most recent snapshot).
  • Continuous Cloud Backup (previously known as Point-in-Time Restore) is a feature that mLab did not have and can significantly affect pricing.

View Atlas documentation on Snapshot Scheduling and Retention Policy.

I. Test failover and ensure resilience (M10+)

Atlas performs maintenance periodically (more often than mLab). This maintenance can happen at any time unless you've configured a maintenance window. To ensure that maintenance on Atlas is seamless for your application(s):

  1. Configure your application(s) to use the exact connection string published in Atlas' UI. In particular, retryWrites=true will provide your application with more resilience when replica set failovers occur (a hedged connection sketch follows this list).
  2. Test failover on Atlas.
    • See Atlas' tutorial on testing failover.
  3. (Optional) Configure a maintenance window for your Atlas project.
    • See the "Set Preferred Cluster Maintenance Start Time" section of configuring Atlas project settings.

Once you've configured a preferred cluster maintenance start time, any users with the Project Owner role will receive an email notification 72 hours before the scheduled maintenance. At that point you have the option to begin the maintenance immediately or defer the maintenance for one week. You can defer a single project maintenance event up to two times.

All nodes in the Atlas cluster could be restarted over a short time window during maintenance. Also some urgent maintenance activities (e.g., urgent security patches) will not wait for your chosen window, and Atlas will start those activities when needed.

Reference

Configuring Database Users

Unlike with mLab, different Atlas clusters can share the same database users (and network configuration). Specifically, be aware that Atlas clusters within a given Atlas project share the same database users (and whitelisted IP addresses).

To access the Atlas cluster, you must authenticate using a MongoDB (database) user that has access to the desired database(s) on your Atlas cluster. You can configure database users by selecting "Database Access" in the left navigation for your project. On Atlas, database users are separate from Atlas users, just as on mLab database users are separate from mLab users.

View Atlas documentation on configuring MongoDB users.

Database users for Atlas clusters cannot be managed via database commands sent to the database directly (e.g., via the mongo shell or a MongoDB driver). Instead, database users are managed indirectly via the Atlas cloud service, either via the Atlas management UI or via the Atlas management API.

Database user configuration changes can take up to a couple of minutes to complete on Atlas shared-tier clusters (M0/M2/M5).

Differences in Atlas database user privileges

mLab Role/Privileges → Mapped Atlas Privileges: Description of Differences
root → Atlas admin: mLab Dedicated plan deployments support the root role. The Atlas privilege that is most equivalent is Atlas admin, but be aware that this privilege does not have all the permissions that the root role has. In particular, see the commands that Atlas does not support on its dedicated-tier clusters (M10 and above).
dbOwner@mydb → dbAdmin@mydb, readWrite@mydb: mLab deployments support the dbOwner role, which combines the privileges granted by the readWrite, dbAdmin, and userAdmin roles. The set of Atlas privileges that is most equivalent does not include the userAdmin role.
dbOwner@mydb → readWriteAnyDatabase: The default mapping that the migration tool presents is "dbAdmin@mydb, readWrite@mydb" (see the row above). However, you may choose to instead map this to the readWriteAnyDatabase role, which allows read and write operations on all databases in the Atlas cluster (except for "local" and "config") and provides the ability to run the "listDatabases" command.
readOplog → read@local: mLab's custom "readOplog" role allows read operations on the "oplog.rs" and "system.replset" collections in the "local" database only. This difference should be transparent to your application.
read@mydb → readAnyDatabase: The default mapping that the migration tool presents is "read@mydb", which is an exact match. However, you may choose to instead map this to the readAnyDatabase role, which allows read operations on all databases in the Atlas cluster (except for "local" and "config").

Import limitations

The migration tool makes it very easy to import the database user(s) that exist on the source mLab deployment. However, there are some situations where you'll need to manually configure database users.

If the target Atlas project already has database user(s) configured, you will not be able to use the migration tool's database user importer.

Conflicting Username

Database usernames must be unique across an Atlas project. As such, if your source mLab deployment has two database users with the same name (e.g., "myuser" with the "dbOwner@db1" privilege and "myuser" with the "dbOwner@db2" privilege), the database user importer will not be able to import them.

A couple of options in Atlas for this case:

  • Configure two database users - (a) "myuser-db1" with the "dbOwner@db1" privilege and (b) "myuser-db2" with the "dbOwner@db2" privilege.
  • Configure the "myuser" database user with the "readAnyDatabase" privilege.

Unsupported on Atlas

Atlas provides a curated list of database user privileges. These privileges provide access to a subset of MongoDB commands only and do not support all MongoDB commands.

Configuring the IP Whitelist

Unlike with mLab, different Atlas clusters can share the same network configuration (and database users). Specifically, be aware that Atlas clusters within a given Atlas project share the same whitelisted IP addresses (and database users).

To access your target Atlas cluster, you'll need to ensure that any necessary whitelist entries have been created.

Atlas is different than mLab in that you must configure IP whitelist entries in order to connect to an Atlas cluster. To access the Atlas cluster, you must connect from an IP address on the Atlas project's IP whitelist. You can configure IP whitelist entries by selecting "Network Access" from the left navigation for your Atlas project.

Note that mLab's Sandbox and Shared plan deployments are always accessible by all IP addresses. To match the firewall settings of your mLab Sandbox or Shared plan deployment you can whitelist all IP addresses (0.0.0.0/0) on your Atlas cluster. However, we recommend whitelisting only the addresses that require access. To match the firewall settings of your mLab Dedicated plan deployment on Atlas, you can review your current mLab firewall settings on the "Networking" tab in mLab's UI.

If you're connecting to MongoDB Atlas from a Heroku app, it's most likely that you need to whitelist 0.0.0.0/0 (the range of all IP addresses) unless your app is in Heroku Private Spaces. Heroku IP addresses are, in general, highly dynamic. As such, most mLab and Atlas-hosted deployments used by Heroku apps allow all IP addresses.

Sizing the Target Atlas Cluster

The migration tool detailed in this guide will allow you to automatically create the most equivalent Atlas cluster and provide estimated pricing. As such we highly recommend letting the migration tool build the target Atlas cluster instead of creating it yourself. You can do so by selecting "Create most equivalent new cluster" from the drop-down menu when you are at the "Target Cluster" step.

The Atlas M0, M2, and M5 tiers run on shared resources (VM/disk) while the Atlas M10 tiers and above run on dedicated resources.

mLab Plan Suggested Atlas Tier
Sandbox M0 (free) or M2
Shared M2, M5, or M10
Dedicated M10 and above

mLab Sandbox

If your deployment is running on an mLab Sandbox (free) plan, you may migrate to the Atlas M0 (free tier).

However note that:

  • If you need automated backups, you should migrate your mLab Sandbox database to the Atlas M2 or above.
  • Atlas only supports one M0 per project. You can create multiple Atlas projects if you need multiple free-tier Atlas M0 clusters.
  • The Atlas M0, M2, and M5 tiers have operational limitations that you should be aware of.
  • To migrate to the Atlas M10 tier please see this FAQ on migrating from a free mLab Sandbox to an Atlas M10.

mLab Shared

If your deployment is currently running on an mLab Shared plan, the tier that we suggest on Atlas depends on the size of your database.

mLab Size 1 Suggested Atlas Tier
Less than 1.6 GB M2 Review warnings below
Between 1.6 GB and 4 GB M5 Review warnings below
More than 4 GB M10 Ensure CPU usage will not be too high

Atlas M2/M5 Warnings:

  • Upgrades from the Atlas M2 and M5 tiers will require 7-10 min. of downtime. If this is of concern to you, we recommend migrating to the Atlas M10 plan or above where upgrades and downgrades are seamless.
  • The Atlas M2 and M5 tiers have operational limitations that you should be aware of.

Atlas M10 Warning:

The Atlas M10 tier (which is similar to mLab's M1 tier) has just 1 CPU core.

As such, before migrating from an mLab Shared plan to the Atlas M10, make sure that you visit the mLab "Slow Queries" tab to view and build the indexes recommended by mLab's Slow Queries Analyzer. If the "Slow Queries" tab continues to show a high rate of slow operations, please email support@mlab.com for help before attempting to migrate to Atlas.

mLab Dedicated

If your deployment is currently running on an mLab Dedicated plan, the following table shows the equivalent Atlas tier.

Note that Atlas sets limits for concurrent incoming connections based on instance size.

mLab Plan Equivalent Atlas Tier
M1 Standard or High Storage M10
M2 Standard or High Storage M20
M3 Standard or High Storage M30
M4 Standard or High Storage M40 (General class)
M5 Standard or High Storage M50 (General class)
M6 Standard or High Storage M60 (General class)
M7 Standard or High Storage M80 (Low-CPU)
M8 Standard or High Storage M200 (Low-CPU)
M3 High Performance (legacy) M40 (Local NVMe SSD)
M4 High Performance (legacy) M40 (Local NVMe SSD)
M5 High Performance M50 (Local NVMe SSD)
M6 High Performance M60 (Local NVMe SSD)
M7 High Performance M80 (Local NVMe SSD)

Choosing the Atlas disk type and size (M10 and above)

AWS

mLab's Dedicated Standard and High Storage plans use AWS's General Purpose SSD (gp2) EBS volumes. By default Atlas uses the same volume type.

The performance of this volume type is tied to volume size. As such, when you create your cluster on Atlas:

  • Ensure that the selected storage size is as big as your source mLab plan's disk size in order to maintain similar throughput during the migration.
    • After successfully migrating, if your Atlas cluster doesn't need as much disk throughput and is overprovisioned on disk, you can seamlessly downgrade.
  • Do not check the "Provision IOPS" checkbox unless you are certain you would like this premium storage option, which is significantly more expensive. Leaving it unchecked will ensure that your target Atlas cluster also uses AWS's gp2 volumes.

Google Cloud Platform (GCP) and Azure 2 (AZR2)

mLab's Dedicated plans on GCP and Azure 2 use the same disk type as Atlas. On GCP both use GCP's SSD Persistent Disks. On Azure both use Premium SSD Managed Disks.

The performance of these disk types is tied to disk size. As such, when you create your cluster on Atlas:

  • Ensure that the selected storage size is as big as your source mLab plan's disk size in order to maintain similar throughput during the migration.
    • After successfully migrating, if your Atlas cluster doesn't need as much disk throughput and is overprovisioned on disk, you can seamlessly downgrade.

Azure Classic (AZR)

mLab's Azure Classic Dedicated plans use magnetic disks (Azure's page blobs and disks) while Atlas uses Premium SSD Managed Disks.

Disk I/O will be significantly improved on your target Atlas cluster. As such, when you create your cluster on Atlas:

  • Ensure that the selected storage size is as big as your source mLab plan's disk size, OR
  • Alternatively, you can ensure that the selected storage size can fit your source mLab deployment. To calculate the minimum storage size:
    • Take the "Size on Disk" value that you see in the mLab management portal (this is the storage footprint on disk of your deployment's data plus indexes). Divide that by 0.75 to provide room for some growth as well as the oplog, journal, and other system resources (a small worked example follows).

Operational Limitations of the Atlas M0, M2, and M5 tiers

Although the Atlas free-tier clusters (M0) offer more storage than mLab Sandbox databases and although Atlas shared-tier clusters (M2/M5) offer more storage per dollar than mLab's Shared plans, the Atlas M0, M2, and M5 tiers have operational limitations that you should be aware of. Most importantly:

  • Server-side JavaScript (e.g., $where and map-reduce) is not allowed (see the sketch after this list).
  • The number of open connections cannot exceed 500.
  • There are limits on the rate of operations per second.
  • There are limits on how much data can be transferred into or out of the cluster per week.
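
As a hedged example of the server-side JavaScript restriction, the sketch below shows a $where query alongside an equivalent $expr query (supported on MongoDB 3.6+) that avoids server-side JavaScript; the collection and field names are hypothetical.

    // Not allowed on M0/M2/M5 (relies on server-side JavaScript):
    db.items.find({ $where: "this.qty < this.reorderLevel" })

    // Supported alternative using $expr:
    db.items.find({ $expr: { $lt: [ "$qty", "$reorderLevel" ] } })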

Note that the maximum number of operations per second and the amount of data that can be transferred have been raised specifically for Atlas clusters migrating from mLab using the migration tool detailed in this guide. Please email support@mlab.com for more details.

We recommend reviewing all of the Atlas operational limitations:

  • https://docs.atlas.mongodb.com/reference/free-shared-limitations/#operational-limitations

If you are concerned about these limitations please email support@mlab.com with the deployment identifier for the mLab deployment you want to migrate so that we can provide advice. Again, note that some limits have been raised specifically for Atlas clusters migrating from mLab.

Manually Creating the Target Atlas Cluster

The migration tool will enable you to automatically create the target Atlas cluster, but if you'd like to manually create it (not recommended), here are the steps:

  1. Ensure that the correct target project (within the right organization) is selected.
  2. Click on the "Build a Cluster" or "Build a New Cluster" button to create an Atlas cluster, noting our recommendations in the table below.
IMPORTANT:
Cloud Provider & Region Select the same region as the source mLab deployment.
Cluster Tier Size the target Atlas cluster to be at least as performant as the source mLab deployment in order to make the migration process as smooth as possible. You'll be able to seamlessly downgrade on Atlas within the Atlas Dedicated tiers (M10+) once the migration is complete.

Note that the Atlas Live Migration process is only available for the Atlas M10+ tiers.

Additional Settings > Version On the M10+ tiers select the same MongoDB release version as the source mLab deployment (3.6).
Additional Settings > Backup On the M10+ tiers disable "Continuous Cloud Backup" unless you want it. It is not a feature that mLab had and is more expensive.
Cluster Name If you want to customize the name of your Atlas cluster, now is the time. This name will be part of the cluster's connection string, and you will not be able to change it later.

Restrictions on Target Atlas Cluster Modifications

Once an Atlas cluster has been set as the target of a migration from mLab, you will not be able to:

  • Change the cluster's release/major version
  • Migrate to a different cloud region
  • Downgrade the instance or storage size
  • Change the number of shards in the Sharded Cluster

These restrictions are in place to help ensure a smooth migration from mLab to Atlas. We recommend waiting 1-2 days or at least through a period of peak traffic before making these types of changes to the target Atlas cluster. This way, if there's an unexpected issue after migrating to Atlas, it will be much easier to determine the root cause.

You can lift the restriction on these types of cluster modifications if you cancel the migration. However, once you've made these kinds of cluster modifications, the migration tool will no longer allow you to migrate to that cluster.

To cancel a migration:

  1. Log in to the target Atlas organization.
  2. Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
  3. Navigate to the Organization Home view (click on the green MongoDB leaf icon in the upper-left corner).
  4. From the left navigation select "mLab Account".
  5. Locate the mLab deployment that you want to migrate to Atlas.
    • Click the ellipses (…) button for that deployment.
    • Click "Cancel Migration" to cancel the migration process for the given deployment.

Testing Connectivity

A critical step to ensuring a migration to Atlas with very minimal downtime is testing connectivity to the target Atlas cluster from all applications before cutting over from the source mLab deployment.

For each application that depends on this deployment:

  1. Log into a representative client machine.
    • If you can't do this, ensure that you have the same network configuration on Atlas that you did on mLab.
  2. From that machine, download and install the mongo shell.
  3. From that machine, connect to your cluster using the exact connection string that has been published in Atlas' UI.
  4. After connecting with the mongo shell, switch to one of the databases that this application will be using (e.g., use mydb) then run a simple query to test your database credentials (e.g., db.mycollection.findOne()). Ensure the mongo shell does not generate an "unauthorized" error. Note that before starting the migration process it's expected that your target Atlas cluster might be empty. It's not necessary for the database to exist in order to test your database credentials. A null result for querying a nonexistent database and collection is expected if database credentials are working (a short example follows this list).
  5. To also test application driver compatibility, connect to the target Atlas cluster using your application code and driver with the same connection string.
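
A short mongo shell transcript of that credentials check follows (the database and collection names are hypothetical); a null result is expected if the cluster is still empty.

    use mydb
    db.mycollection.findOne()    // should return a document or null, not an "unauthorized" error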

Performing these tests will help ensure that each of your applications has network connectivity and working database credentials.

If you are having trouble connecting to the target Atlas cluster please visit our troubleshooting connection issues guide for help.

Ensuring independence from the Heroku add-on's config var

This step is only relevant when migrating an mLab deployment that is being managed via mLab's Heroku add-on.

When you are done migrating your mLab Heroku add-on to Atlas, you will be deleting the Heroku add-on. Deleting the Heroku add-on will not only delete your mLab deployment but it will also delete the Heroku config var that was automatically created when you provisioned the Heroku add-on.

As such, it's critical that you switch to a new Heroku config var before starting the migration process.

  1. Copy and paste the value of the existing config var into a brand-new config var (e.g., named "DB_URI").
    • The existing config var that was automatically provisioned should look like "MONGODB_URI" or "MONGOLAB_URI" or "MONGOLAB_<color>_URI".
  2. Change your application code such that it no longer uses the original config var and now uses the new config var (a minimal sketch follows this list).
  3. Redeploy your Heroku app with the code change.
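
As a minimal Node.js sketch of step 2 ("DB_URI" is the hypothetical new config var from step 1), the application reads the new config var and falls back to the original add-on variable until the cutover is complete:

    const { MongoClient } = require("mongodb");

    // Prefer the new config var; fall back to the add-on-provisioned one until it is removed
    const uri = process.env.DB_URI || process.env.MONGODB_URI;

    MongoClient.connect(uri, { useUnifiedTopology: true })
      .then(client => {
        console.log("connected to", client.db().databaseName);
        return client.close();
      })
      .catch(console.error);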

mLab Heroku Add-on FAQs

How do I know whether my Heroku app is dependent on a Heroku add-on config var?

  1. Log in to Heroku
  2. Select your app
  3. On the "Resource" tab, you'll see a list of the Heroku add-ons for this app, including the "mLab MongoDB" ones.
  4. For each "mLab MongoDB" improver, y'all'll run across "Fastened every bit MONGODB" or "Attached as MONGOLAB" or "Attached equally "MONGOLAB_<colour>". Suspend "_URI" to that string to get the corresponding Heroku config var name (see screenshots below).
  5. Search in your application code for usage of an environment variable by the config var's proper noun.

Screenshot from Heroku app's "Resources" tab:

img-heroku-resources-addons

Screenshot from Heroku app's "Settings" tab:

img-heroku-config-vars

What if I am using Nightscout with an mLab Heroku add-on?

Please follow our guide for migrating a Nightscout Sandbox Heroku Add-on to Atlas instead of using this one.

If you have a for-pay Heroku mLab add-on that you're using with Nightscout, please email support@mlab.com for assistance.

Migration Process

The mLab->Atlas migration tool uses two different import processes, depending on whether the target Atlas cluster is a shared-tier cluster or a dedicated-tier cluster.

  • mongodump/mongorestore procedure - used when migrating into Atlas shared-tier clusters (M0, M2, and M5)
  • Live Migration procedure - used when migrating into Atlas dedicated tier clusters (M10 and above)

mongodump/mongorestore procedure

The migration procedure used when migrating into the Atlas M0, M2, or M5 tiers

To migrate an mLab Sandbox (free) database to the Atlas M10 tier, you'll need to first migrate to the Atlas M0, M2, or M5 tier. From there you'll be able to upgrade to the Atlas M10 or above without needing to change your connection string yet again (7-9 minutes of downtime).

  1. Navigate to the "mLab Account" view.
    • Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
    • Navigate to the Organization Home view (click on the green MongoDB leaf icon in the upper-left corner).
    • From the left navigation select "mLab Account".
  2. Start the migration process.
    • Locate the mLab deployment that you want to start migrating to Atlas.
    • Click the ellipses (…) button for that deployment.
    • Click "Configure Migration" to start the migration wizard.
    • Follow the steps in the migration wizard to start the import of your data and indexes.
  3. Perform a test migration.
    • The "Test Migration" step of the migration wizard will delete any data existing on the target Atlas cluster and then run a mongodump on your source mLab deployment followed by a mongorestore into the target Atlas cluster.
    • We strongly recommend performing a test run of the migration. The test run will not only give you an estimate for how long the process will take, but it will validate that the process will work for your particular set of data and indexes.
  4. Perform the real migration.
    • Do NOT proceed until you've tested connectivity from all applications (see "Testing Connectivity" section above).
    • Stop writes to the source mLab deployment by stopping all application clients.
    • Click "Begin Migration", which will perform the same actions as the test migration but it will also change the role of your database user(s) on your source mLab deployment to be read-only (as a safeguard for ensuring that writes have stopped).
    • Wait for the import to complete successfully.
    • Restart application clients with the target Atlas connection string.

Clicking "Begin Migration" is the existent migration! It will alter the part of the database user(s) which be on the source mLab deployment to be read-only.

If an unexpected error occurs, and yous want to abort the migration process and start using your source mLab deployment once again, you volition demand to recreate the database user(south) on your source mLab deployment through mLab'southward management portal. Performing a test migration and testing connectivity ahead of time will assistance avoid the need to practice this.

If you practise rollback to mLab, annotation that any writes that are made to the target Atlas cluster will not exist on the source mLab deployment and will be deleted if yous attempt some other migration afterward.

Below is a screenshot of what you'll come across when y'all reach Stride iv of the mongodump/mongorestore process.

img-atlas-migration-review.

Live Migration procedure

The migration process used when migrating into the Atlas M10 and above

Atlas can perform a live migration of your deployment, keeping the target Atlas cluster in sync with the source mLab deployment until you cut your applications over to the target Atlas cluster.

This seamless process is available for mLab Dedicated plan deployments as well as Shared plan deployments migrating to Atlas dedicated-tier clusters (M10 and above).

Because the target Atlas cluster will stay in sync with the source mLab deployment until you're ready to cut over your application's reads and writes to Atlas, the only downtime necessary is when you stop your application and then restart it again with a new connection string that points to Atlas.

Do not cut over to the target Atlas cluster until you have tested connectivity from all applications (see "Testing Connectivity" section above) and ensured that all writes have stopped on the source mLab deployment.

Only by executing the cutover to Atlas as documented in Steps 4 and 5 below can we guarantee that the migration will neither lose data nor introduce data consistency issues.

You cannot use the renameCollection command during the initial stage of the Live Migration process. Note that executing an aggregation pipeline that includes the $out aggregation stage or a map-reduce that includes an out will use renameCollection under the hood.

  1. Navigate to the "mLab Account" view.
    • Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
    • Navigate to the Organization Home view (click on the green MongoDB leaf icon in the upper-left corner).
    • From the left navigation select "mLab Account".
  2. Start the migration process.
    • Locate the mLab deployment that you want to start migrating to Atlas.
    • Click the ellipses (…) button for that deployment.
    • Click "Configure Migration" to start the migration wizard.
    • Follow the steps in the migration wizard to start the Live Migration process. Note that this process will delete any existing data on the target Atlas cluster before it starts syncing.
  3. Wait until the migration process is ready for cutover (how long will it take?).
    • When Atlas detects that the source and target clusters are in sync, it starts an extendable 72-hour timer to begin the cutover procedure. If the 72-hour period passes, Atlas stops synchronizing with the source cluster. You can extend the time remaining by 24 hours by clicking the "Extend time" hyperlink any number of times.
    • The "Prepare to Cutover" button will now be clickable. It's normal during this time for the optime difference counter to show 1-2 seconds of lag.
    • Atlas users with the "Project Owner" role will receive a notification when the Live Migration is ready for cutover.
  4. Prepare to cutover.
    • Click the "Prepare to Cutover" button to open a walk-through screen with instructions on how to proceed.
    • Copy and paste the connection string for your target Atlas cluster and prepare to change your connection string.
    • Do NOT proceed until you've tested connectivity from all applications (see "Testing Connectivity" section above).
  5. Cutover.
    • Stop writes to the source mLab deployment by stopping all application clients.
    • Wait for the optime gap to reach 0 (this should be extremely quick/instantaneous).
    • Restart application clients with the target Atlas connection string.
    • Click the "Cut over" button to stop the target Atlas cluster from syncing from the source mLab deployment.

See Frequently Asked Questions (FAQ) about the Live Migration process below.

Below is a screenshot of what you'll see when you reach Step 4 of the Live Migration process and click the "Prepare to Cutover" button.

img-atlas-prepare-to-cutover.

Live Migration Process FAQs

Q. Is the sync one-way or multi-directional?

The Live Migration process will read from the source mLab deployment and write to the target Atlas cluster. This process is one-way and not multi-directional.

Q. Will the source mLab deployment be available during the sync?

Yes. After you start the Live Migration process, your application can continue to read from and write to the source mLab deployment as it always has. Even after you have cut over your application to Atlas, your source mLab deployment will remain until you have manually deleted it.

Q. Which mLab node will the Live Migration process sync from?

The Live Migration process will sync from the Primary node of the source mLab deployment and add additional load as it copies your data to Atlas (this is essentially a collection scan on every collection).

A deployment which is well-indexed and performing well on mLab should be able to handle this without any problems. By ensuring a healthy, well-sized deployment prior to migrating, you can dramatically reduce the risk of migration failures and delays. You'll also be in a much better position to handle future growth on Atlas and to more efficiently use your database resources (which in the long run will lower your costs).

If you do face an unexpected issue after you start the Live Migration process, you can cancel it immediately to stop the syncing. Once you have stopped the process, please email mLab Support (support@mlab.com) for help so that we can advise you on next steps.

Q. How long does the Live Migration process take?

The Live Migration process must complete the following four phases before it's ready for cutover:

  • Phase 1: Copy data
  • Phase 2: Catch up on the oplog
  • Phase 3: Build indexes
  • Phase 4: Catch up on the oplog

During these four phases, your application can continue to read and write from the source mLab deployment as it normally does. The only downtime that the Live Migration process requires is the time it takes for you to complete the cutover steps - for most customers this is only a few minutes.

Once Phase 4 is complete, you'll see that the "Prepare to Cutover" button is enabled. The users with the "Project Owner" role will also receive an email notification. By default you have only 72 hours to cut over your application, but this window can be extended any number of times by 24 hours using the "Extend time" link.

The total time that these four phases take depends greatly on the size of the data and indexes as well as the resources available on the source mLab deployment and the target Atlas cluster.

That said, even for a large dataset (hundreds of gigabytes), Phase 1 (copy data) generally completes within 6 hours. If the source mLab deployment is running on Azure Classic (magnetic, non-SSD disks), this can take much longer.

Phase 3 (build indexes) is the most difficult to estimate and depends on the RAM available on the target Atlas cluster as well as the number and sizes of the indexes. For a deployment with a total index size over 15 GB, it would not be surprising if this phase takes on the order of days. If you're concerned about timing, we recommend ensuring that the target Atlas cluster has at least enough RAM to hold the full size of the indexes. Atlas pro-rates charges by the day, and you can downgrade seamlessly after you have successfully cut over your application to Atlas.

During Phase 3 (build indexes) the Secondary node(s) may appear to be down in the Atlas UI - this is normal. The Live Migration process builds indexes on the Secondary nodes in the foreground, during which time Atlas' infrastructure is unable to connect. However, if you download the database server log for one of the Secondary nodes you can view index build progress.

Please email support@mlab.com if you would like us to help you monitor the progress of a Live Migration.

Q. Can I perform a trial/test Live Migration?

Yes. When you migrate into an Atlas dedicated-tier cluster (M10 and above), there is no explicit step in the migration wizard to test the migration.

However, once the Live Migration process is ready for cutover, you can cancel it at any time in order to stop the syncing process. Cancelling it is the exact same thing as clicking the "Cut over" button except that there's no validation to help ensure that writes have stopped on the source mLab deployment.

If you click the "Restart Migration" button, the Live Migration process will start again (from scratch). You can do this any number of times in order to test the process.

Q. I missed the cutover window and now the Live Migration process has errored out. What do I do?

Assuming you have not started reading or writing from the target Atlas cluster, you would click on the "Restart Migration" button from the migration checklist. Restarting the migration process will delete all data on the target Atlas cluster before starting the sync again.

Going forward, note that by default you have 72 hours to cut over your application, but this cutover window can be extended by 24 hours any number of times using the "Extend time" link.

When the Live Migration process is ready for cutover again, click on the "Prepare to Cutover" button and follow the instructions on that screen.

Q. When cutting over to Atlas during the Live Migration process, do I need to stop my application?

We strongly recommend that you perform the cutover step of the Live Migration process by stopping all writes to the source mLab deployment before clicking on the "Cut over" button and directing traffic to the target Atlas cluster. Only by executing the cutover using this strategy can we guarantee that the migration will neither lose data nor introduce data consistency issues.

However, if your application cannot withstand even a few minutes of downtime, you may be seeking a method by which you can cut over to Atlas without stopping your application.

A custom cutover strategy is possible, but you will be responsible for reasoning through all of the possible issues that could arise from that strategy. Such analysis is very specific to the way your application works, and MongoDB can make no guarantees about the correctness of the migration in these cases.

Important things to know about the Live Migration process for a Replica Set:

  • The "Cut over" button simply stops replication from the source to the target. It does not do anything else that's meaningful. This button will not be clickable if the Live Migration process detects writes on the source mLab deployment.
  • The "Cancel" button does the same exact thing as the "Cut over" button except that there's no validation to help ensure that writes have stopped on the source mLab deployment.
  • The migration process will never destroy or impede traffic to the source mLab deployment. Even after the cutover button is pressed you may read and write to the source.
  • The migration process will never impede traffic to the target Atlas cluster. Even before the cutover button is pressed you may read and write to the target.

If you decide to allow your application to read and/or write to both the source and the target during the cutover phase you will be responsible for ensuring that your application can handle any effect of this strategy. For example, your application may lose data, unintentionally duplicate data, generate inconsistent data, or behave in unexpected ways.

This FAQ is applicable only if the target Atlas cluster is a Replica Set. When migrating a Sharded Cluster, the target cluster is not available on the network for 3-5 minutes after the cutover button is pressed and replication from the source to the destination has stopped.

Q. How can I be certain that my app has stopped writing to the source mLab deployment before I cut over to Atlas?

During the Live Migration process, the "Cut over" button will not be enabled unless the last optime of the source mLab deployment and the target Atlas cluster are the same (i.e., until the optime difference is 0). This validation is in place to help you ensure that writes have stopped against the source mLab deployment before you cut over to Atlas.

If you would also like to check yourself, you can authenticate as an admin database user and run an oplog query against the Primary of the source mLab deployment to see the timestamp of the most recent write. An example follows (assuming you've connected to the Primary node using the mongo shell).

Note: This query will not work if the source mLab deployment has a TTL index that is actively deleting.

    rs-ds123456:PRIMARY> use local
    switched to db local

    rs-ds123456:PRIMARY> db.oplog.rs.find( {"op": {"$ne": "n"}, "ns": {"$nin": ["config.system.sessions", "config.transactions", "admin.system.users"]}}, {"ts": 1, "op": 1, "ns": 1} ).sort( {"$natural": -1} ).limit(1)
    { "ts" : Timestamp(1579134905, 31), "op" : "u", "ns" : "mydb.mycoll" }

    rs-ds123456:PRIMARY> new Date(1579134905 * 1000)
    ISODate("2020-01-16T00:35:05Z")

Q. I have scheduled the migration for a certain time. Will I have support?

The Live Migration process can take a substantial amount of time prior to it being ready for cutover, so we recommend starting the migration process well in advance of your scheduled maintenance so that you will have confidence that it will be ready for cutover when you need it to be.

When the process is ready for cutover, you can perform Steps 4 and 5 of the official Live Migration process at the best time for your team and application.

Note that it's critical that you test connectivity from all of your application clients before you start the cutover process to Atlas.

If you experience an emergency issue when you actually perform the cutover steps, the source mLab deployment will still be available, fully intact. As such, in case of a true emergency you can either:

  • Roll back to the source mLab deployment. Note that if you roll back to the source mLab deployment after having written any data directly to the target Atlas cluster, those changes will not be on the original mLab deployment, OR
  • Ensure that you have activated a discounted Atlas support plan; if you haven't done so yet, you may do it from the "Atlas Support Plan" tab of the "mLab Account" view. Then open an urgent Atlas support case (see the "Developer & Premium Support Plans" tab in the documentation).

Deleting the source mLab deployment

To stop incurring charges on mLab for a for-pay deployment, you must manually ensure that the mLab deployment has been deleted. We recommend deleting the source mLab deployment (via mLab's UI) as soon as you are confident that it has been successfully migrated to Atlas and you no longer need it.

Note that mLab bills at the start of each month for all chargeable services provided in the prior month, so you will still be charged one additional time even after your mLab account stops incurring charges.


Source: https://docs.mlab.com/how-to-migrate-to-atlas/
