
Commit 46a767d

Edits for disaster recovery, from the previous PR #104 (#109)

* Edits for the previous PR
* Edit

1 parent 150f01d commit 46a767d

File tree: 1 file changed (+13, -14 lines)


source/disaster-recovery.txt

@@ -66,7 +66,7 @@ Use an Odd Number of Replica Set Members
 To elect a :manual:`primary </core/replica-set-members>`, you need a majority of :manual:`voting </core/replica-set-elections>` replica set members available. We recommend that you create replica sets with an
 odd number of voting replica set members. There is no benefit in having
 an even number of voting replica set members. |service| satisfies this
-requirement by default, as |service| requires having 3,5, or 7 nodes.
+requirement by default, as |service| requires having 3, 5, or 7 nodes.
 
 Fault tolerance is the number of replica set members that can become
 unavailable with enough members still available for a primary election.
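As a side note on the hunk above, the majority/fault-tolerance arithmetic it describes can be sketched in a few lines (a minimal illustration, not MongoDB code):

```python
# Sketch of replica set election arithmetic:
# a primary election needs a strict majority of voting members.
def majority(voting_members: int) -> int:
    """Votes needed to elect a primary."""
    return voting_members // 2 + 1

def fault_tolerance(voting_members: int) -> int:
    """Members that can become unavailable with an election still possible."""
    return voting_members - majority(voting_members)

# 4 voting members tolerate no more failures than 3, which is why an
# even member count adds no benefit.
for n in (3, 4, 5, 7):
    print(n, fault_tolerance(n))
```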
@@ -91,7 +91,7 @@ availability zones. This way you can have multiple separate physical data
 centers, each in its own availability zone and in the same region.
 
 This section aims to illustrate the need for a deployment with five data centers.
-To begin, we will consider deployments with two and three data centers first.
+To begin, consider deployments with two and three data centers.
 
 Consider the following diagram, which shows data distributed across
 two data centers:
@@ -129,8 +129,8 @@ automatically when you deploy a dedicated cluster to a region that supports avai
 availability zones. For example, for a three-node replica set {+cluster+} deployed to a three-availability-zone region, {+service+} deploys one node
 in each zone. A local failure in the data center hosting one node doesn't
 impact the operation of data centers hosting the other nodes because MongoDB
-performs automatic failover and leader election. Applications will
-automatically recover in the event of local failures.
+performs automatic failover and leader election. Applications automatically
+recover in the event of local failures.
 
 We recommend that you deploy replica sets to the following regions because they support at least three availability zones:
 
@@ -155,8 +155,8 @@ Use ``mongos`` Redundancy for Sharded {+Clusters+}
 When a client connects to a sharded {+cluster+}, we recommend that you include multiple :manual:`mongos </reference/program/mongos/>`
 processes, separated by commas, in the connection URI. To learn more,
 see :manual:`MongoDB Connection String Examples </reference/connection-string-examples/#self-hosted-replica-set-with-members-on-different-machines>`.
-This allows operations to route to different ``mongos`` instances for load
-balancing, but it is also important for disaster recovery.
+This setup allows operations to route to different ``mongos`` instances
+for load balancing, but it is also important for disaster recovery.
 
 Consider the following diagram, which shows a sharded {+cluster+}
 spread across three data centers. The application connects to the {+cluster+} from a remote location. If Data Center 3 becomes unavailable, the application can still connect to the ``mongos``
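The hunk above recommends listing multiple ``mongos`` processes, separated by commas, in the connection URI. A minimal sketch of what such a URI looks like (the hostnames are hypothetical placeholders, not from the source):

```python
# Hypothetical mongos hostnames -- replace with your own cluster's hosts.
mongos_hosts = [
    "mongos1.example.net:27017",
    "mongos2.example.net:27017",
    "mongos3.example.net:27017",
]

# Comma-separated hosts in a single connection URI let the driver route
# operations across mongos instances and keep working if one host's
# data center becomes unavailable.
uri = "mongodb://" + ",".join(mongos_hosts) + "/"
print(uri)
```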
@@ -250,9 +250,9 @@ Ensure that you perform MongoDB major version upgrades far before your
 current version reaches `end of life <https://www.mongodb.com/legal/support-policy/lifecycles>`__.
 
 You can't downgrade your MongoDB version using the {+atlas-ui+}. Because of this,
-we recommend that you work directly with MongoDB Professional or Technical Services
-when planning and executing a major version upgrade. This will help you avoid
-any issues that might occur during the upgrade process.
+when planning and executing a major version upgrade, we recommend that you
+work directly with MongoDB Professional or Technical Services to help you
+avoid any issues that might occur during the upgrade process.
 
 Disaster Recovery Recommendations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -348,7 +348,7 @@ unavailable, follow these steps to bring your deployment back online:
 
 .. step:: Determine when the cloud provider outage began.
 
-   You will need this information later in this procedure to restore your deployment.
+   You need this information later in this procedure to restore your deployment.
 
 .. step:: Identify the alternative cloud provider you would like to deploy your new {+cluster+} on
 
@@ -374,7 +374,7 @@ unavailable, follow these steps to bring your deployment back online:
 .. step:: Switch any applications that connect to the old {+cluster+} to the newly-created {+cluster+}
 
    To find the new connection string, see :atlas:`Connect via Drivers </driver-connection>`.
-   Review your application stack as you will likely need to redeploy it onto the new cloud provider.
+   Review your application stack as you likely need to redeploy it onto the new cloud provider.
 
 .. _arch-center-atlas-outage:
 
@@ -383,7 +383,7 @@ unavailable, follow these steps to bring your deployment back online:
 
 In the highly unlikely event that the {+service+} Control Plane and
 the {+atlas-ui+} are unavailable, your {+cluster+} is still available and accessible.
-To lear more, see `Platform Reliability <https://www.mongodb.com/products/platform/trust#reliability>`__.
+To learn more, see `Platform Reliability <https://www.mongodb.com/products/platform/trust#reliability>`__.
 Open a high-priority :atlas:`support ticket </support/#request-support>`
 to investigate this further.
 
@@ -450,10 +450,9 @@ Deletion of Production Data
 
 Production data might be accidentally deleted due to human error or a bug
 in the application built on top of the database.
-If the cluster itself was accidentally deleted, Atlas may have retained
+If the cluster itself was accidentally deleted, Atlas might retain
 the volume temporarily.
 
-
 If the contents of a collection or database have been deleted, follow these steps to restore your data:
 
 .. procedure::
