source/disaster-recovery.txt
13 additions & 14 deletions
@@ -66,7 +66,7 @@ Use an Odd Number of Replica Set Members
 To elect a :manual:`primary </core/replica-set-members>`, you need a majority of :manual:`voting </core/replica-set-elections>` replica set members available. We recommend that you create replica sets with an
 odd number of voting replica set members. There is no benefit in having
 an even number of voting replica set members. |service| satisfies this
-requirement by default, as |service| requires having 3,5, or 7 nodes.
+requirement by default, as |service| requires having 3, 5, or 7 nodes.

 Fault tolerance is the number of replica set members that can become
 unavailable with enough members still available for a primary election.
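As a quick illustration of that arithmetic (a sketch, not part of the original page), the election majority and fault tolerance for each member count can be computed directly:

.. code-block:: python

   # Sketch: election majority and fault tolerance for n voting members.
   # A primary election requires a strict majority of voting members.
   def majority(n: int) -> int:
       return n // 2 + 1

   def fault_tolerance(n: int) -> int:
       # How many members can fail while a majority stays available.
       return n - majority(n)

   for n in (3, 4, 5, 7):
       print(n, majority(n), fault_tolerance(n))
   # 3 -> majority 2, tolerance 1
   # 4 -> majority 3, tolerance 1  (the extra even member adds no tolerance)
   # 5 -> majority 3, tolerance 2
   # 7 -> majority 4, tolerance 3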
@@ -91,7 +91,7 @@ availability zones. This way you can have multiple separate physical data
 centers, each in its own availability zone and in the same region.

 This section aims to illustrate the need for a deployment with five data centers.
-To begin, we will consider deployments with two and three data centers first.
+To begin, consider deployments with two and three data centers.

 Consider the following diagram, which shows data distributed across
 two data centers:
@@ -129,8 +129,8 @@ automatically when you deploy a dedicated cluster to a region that supports avai
 availability zones. For example, for a three-node replica set {+cluster+} deployed to a three-availability-zone region, {+service+} deploys one node
 in each zone. A local failure in the data center hosting one node doesn't
 impact the operation of data centers hosting the other nodes because MongoDB
-performs automatic failover and leader election. Applications will
-automatically recover in the event of local failures.
+performs automatic failover and leader election. Applications automatically
+recover in the event of local failures.

 We recommend that you deploy replica sets to the following regions because they support at least three availability zones:
@@ -155,8 +155,8 @@ Use ``mongos`` Redundancy for Sharded {+Clusters+}
 When a client connects to a sharded {+cluster+}, we recommend that you include multiple :manual:`mongos </reference/program/mongos/>`
 processes, separated by commas, in the connection URI. To learn more,
 see :manual:`MongoDB Connection String Examples </reference/connection-string-examples/#self-hosted-replica-set-with-members-on-different-machines>`.
-This allows operations to route to different ``mongos`` instances for load
-balancing, but it is also important for disaster recovery.
+This setup allows operations to route to different ``mongos`` instances
+for load balancing, but it is also important for disaster recovery.

 Consider the following diagram, which shows a sharded {+cluster+}
 spread across three data centers. The application connects to the {+cluster+} from a remote location. If Data Center 3 becomes unavailable, the application can still connect to the ``mongos``
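For illustration only (the hostnames below are hypothetical, not from this page), a client connection that lists several ``mongos`` hosts might look like this with PyMongo:

.. code-block:: python

   from pymongo import MongoClient

   # Hypothetical mongos hosts, one per data center. If one host becomes
   # unreachable, the driver can route operations to the remaining ones.
   client = MongoClient(
       "mongodb://mongos1.example.net:27017,"
       "mongos2.example.net:27017,"
       "mongos3.example.net:27017/"
   )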
@@ -250,9 +250,9 @@ Ensure that you perform MongoDB major version upgrades far before your
 current version reaches `end of life <https://www.mongodb.com/legal/support-policy/lifecycles>`__.

 You can't downgrade your MongoDB version using the {+atlas-ui+}. Because of this,
-we recommend that you work directly with MongoDB Professional or Technical Services
-when planning and executing a major version upgrade. This will help you avoid
-any issues that might occur during the upgrade process.
+when planning and executing a major version upgrade, we recommend that you
+work directly with MongoDB Professional or Technical Services to help you
+avoid any issues that might occur during the upgrade process.

 Disaster Recovery Recommendations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -348,7 +348,7 @@ unavailable, follow these steps to bring your deployment back online:

 .. step:: Determine when the cloud provider outage began.

-   You will need this information later in this procedure to restore your deployment.
+   You need this information later in this procedure to restore your deployment.

 .. step:: Identify the alternative cloud provider you would like to deploy your new {+cluster+} on
@@ -374,7 +374,7 @@ unavailable, follow these steps to bring your deployment back online:
 .. step:: Switch any applications that connect to the old {+cluster+} to the newly-created {+cluster+}

    To find the new connection string, see :atlas:`Connect via Drivers </driver-connection>`.
-   Review your application stack as you will likely need to redeploy it onto the new cloud provider.
+   Review your application stack as you likely need to redeploy it onto the new cloud provider.

 .. _arch-center-atlas-outage:
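One hedged sketch of that switch, assuming the connection string is supplied through an environment variable (``MONGODB_URI`` is an assumed name, not from this page) so the application can point at the new {+cluster+} without a code change:

.. code-block:: python

   import os
   from pymongo import MongoClient

   # MONGODB_URI is an assumed environment variable; set it to the new
   # cluster's connection string from the "Connect via Drivers" dialog,
   # then restart or redeploy the application.
   client = MongoClient(os.environ["MONGODB_URI"])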
@@ -383,7 +383,7 @@ unavailable, follow these steps to bring your deployment back online:

 In the highly unlikely event that the {+service+} Control Plane and
 the {+atlas-ui+} are unavailable, your {+cluster+} is still available and accessible.
-To lear more, see `Platform Reliability <https://www.mongodb.com/products/platform/trust#reliability>`__.
+To learn more, see `Platform Reliability <https://www.mongodb.com/products/platform/trust#reliability>`__.
 Open a high-priority :atlas:`support ticket </support/#request-support>`
 to investigate this further.
@@ -450,10 +450,9 @@ Deletion of Production Data

 Production data might be accidentally deleted due to human error or a bug
 in the application built on top of the database.
-If the cluster itself was accidentally deleted, Atlas may have retained
+If the cluster itself was accidentally deleted, Atlas might retain
 the volume temporarily.

-
 If the contents of a collection or database have been deleted, follow these steps to restore your data: