Commit 2e471d5

(DOCSP-43347) first draft (#28)
* (DOCSP-43347) first draft
* (DOCSP-43347) finalizes high availability first draft
* (DOCSP-43347) pushing again to kick off netlify
* (DOCSP-43347) copy feedback
* (DOCSP-43347) fixes hyperlinks
1 parent ea2a0c7 commit 2e471d5

File tree

2 files changed: +281 -9 lines changed

source/high-availability.txt

Lines changed: 280 additions & 9 deletions
@@ -9,21 +9,124 @@ High Availability
.. contents:: On this page
:local:
:backlinks: none
:depth: 2
:class: onecol

MongoDB |service| allows you to create cluster configurations that meet your
specific availability needs while satisfying your security and compliance
requirements.

Consult this page to plan the appropriate cluster configuration that optimizes
your availability and performance while aligning with your enterprise's
billing and access needs.

{+service+} Features and Recommendations for High Availability
---------------------------------------------------------------

Features
~~~~~~~~

When you launch a new cluster in |service|, {+service+} automatically configures a
minimum three-node replica set and distributes it across availability zones. If a
primary member experiences an outage, |service| automatically detects the failure and
elects a secondary member as the new primary. |service| then restores or replaces the
failing member so that the cluster returns to its target configuration as soon as
possible. The MongoDB client driver also automatically switches all client connections.
The entire selection and failover process happens within seconds, without manual
intervention. MongoDB optimizes the algorithms used to detect failure and elect a
new primary, reducing the failover interval.

[insert diagram]
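To benefit fully from automatic failover, applications can use retryable writes and a
``majority`` write concern, which many recent drivers enable by default, so that a write
interrupted by an election is retried against the newly elected primary. The following
connection string is a minimal sketch; the hostname and credentials are placeholders,
not values from this page:

.. code-block::

   mongodb+srv://<username>:<password>@<cluster-hostname>/?retryWrites=true&w=majority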
Clusters that you deploy within a single region are spread across availability
zones within that region, so that they can withstand partial region outages without
an interruption of read or write availability. You can optionally choose to spread
your clusters across two or more regions for greater resilience and
:ref:`workload isolation <create-cluster-multi-region>`.

Deploying a cluster to three or more regions ensures that the cluster can withstand
a full region-level outage while maintaining read and write availability, provided
the application layer is fault-tolerant. If maintaining write operations in your
preferred region at all times is a high priority, MongoDB recommends deploying the
cluster so that at least two electable members are in at least two data centers within
your preferred region.

For the best database performance in a worldwide deployment, you can configure a
:ref:`global cluster <global-clusters>`, which uses location-aware sharding to minimize
read and write latency. If you have geographical storage requirements, you can also
ensure that {+service+} stores data in a particular geographical area.
Recommended Deployment Topologies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Single Region, 3 Node Replica Set / Shard (PSS)
````````````````````````````````````````````````
This topology is appropriate when you require low latency but have only moderate
high availability requirements. It can tolerate any one node failure, easily satisfies
majority write concern with in-region secondaries, maintains a primary in the preferred
region after any node failure, and is the least expensive and least complicated option.
Reads from secondaries in this topology encounter low latency in your preferred region.

This topology, however, can't tolerate a regional outage.
2-Region, 3 Node Replica Set / Shard (PS - S)
``````````````````````````````````````````````
This topology is a highly available cluster configuration with strong performance and
availability. It can easily satisfy majority write concern with an in-region secondary,
tolerate any one node failure, maintain a primary in the preferred region after any node
failure, and tolerate a regional outage of the least-preferred region, all at a low cost.
Reads from one secondary in this topology also encounter low latency in the preferred
region.

This topology, however, can't tolerate an outage of the preferred region without manual
intervention, such as adding additional nodes to the least-preferred region. Furthermore,
reads from secondaries in a given region are limited to a single node, and data loss is
possible during a regional outage.
3-Region, 3 Node Replica Set / Shard (P - S - S)
``````````````````````````````````````````````````
This topology is a typical multi-region topology for workloads where high availability
and data durability are more important than write latency. It can tolerate any one node
failure and any one region outage, and it is inexpensive.

This topology, however, experiences higher write latency because a majority write
concern must replicate to a second region. Furthermore, a primary node failure shifts
the primary out of the preferred region, and reads from secondaries always come from a
different region than the primary and may come from outside the preferred region.
3-Region, 5 Node Replica Set / Shard (PS - SS - S)
````````````````````````````````````````````````````
This topology is a typical multi-region topology for mission-critical applications that
require in-region availability. It can tolerate any two node failures, tolerate a primary
node failure while keeping the new primary in the preferred region, and tolerate any one
region outage.

This topology, however, experiences higher write latency because a majority write
concern must replicate to a second region. Furthermore, reads from secondaries always
come from a different region than the primary and may come from outside the preferred
region.
3-Region, 7 Node Replica Set / Shard (PSS - SSS - S)
`````````````````````````````````````````````````````
This topology is a less common multi-region topology for mission-critical applications
that require in-region availability. It can tolerate any three node failures, tolerate
the failure of the primary and a secondary in the preferred region, and tolerate any one
region outage.

This topology, however, experiences higher write latency because a majority write
concern must replicate to a second region.
5-Region, 9 Node Replica Set / Shard (PS - SS - SS - SS - S)
`````````````````````````````````````````````````````````````
This topology is a less common multi-region topology for the most availability-demanding
applications that require in-region availability. It can tolerate any four node failures,
tolerate a primary node failure while keeping the new primary in the preferred region,
and tolerate any two region outages.

This topology, however, experiences higher write latency because a majority write
concern must replicate to two additional regions (three regions in total).
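To illustrate how a multi-region topology maps to a cluster configuration, the following
sketch shows how the 3-Region, 3 Node Replica Set / Shard (P - S - S) topology might be
expressed as a ``cluster.json`` file for the {+atlas-cli+}. The cluster name, provider,
regions, and instance size are placeholder assumptions; adapt them to your values:

.. code-block::

   {
     "name": "multi-region-cluster",
     "clusterType": "REPLICASET",
     "replicationSpecs": [
       {
         "regionConfigs": [
           {
             "providerName": "AWS",
             "regionName": "US_EAST_1",
             "priority": 7,
             "electableSpecs": { "instanceSize": "M30", "nodeCount": 1 }
           },
           {
             "providerName": "AWS",
             "regionName": "US_WEST_2",
             "priority": 6,
             "electableSpecs": { "instanceSize": "M30", "nodeCount": 1 }
           },
           {
             "providerName": "AWS",
             "regionName": "EU_WEST_1",
             "priority": 5,
             "electableSpecs": { "instanceSize": "M30", "nodeCount": 1 }
           }
         ]
       }
     ]
   }

The highest-priority region holds the preferred primary. If that region becomes
unavailable, |service| elects a new primary from the next-highest-priority region.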
Examples
--------

The following examples configure the Single Region, 3 Node Replica Set / Shard
deployment topology using |service| :ref:`tools for automation <arch-center-automation>`.

These examples also apply other recommended configurations, including:

@@ -44,9 +147,177 @@ These examples also apply other recommended configurations, including:
.. tab:: CLI
:tabid: cli

.. note::

Before you can create resources with the {+atlas-cli+}, you must:

- :atlas:`Create your paying organization
</billing/#configure-a-paying-organization>` and :atlas:`create an API key </configure-api-access/>` for the
paying organization.
- :atlascli:`Install the {+atlas-cli+} </install-atlas-cli/>`
- :atlascli:`Connect from the {+atlas-cli+}
</connect-atlas-cli/>` using the steps for :guilabel:`Programmatic Use`, as sketched below.
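One common way to complete the :guilabel:`Programmatic Use` connection step is to export
your API key as environment variables before you run {+atlas-cli+} commands. The variable
names below are the standard {+atlas-cli+} environment variables; confirm them against
the connection steps linked above:

.. code-block::

   export MONGODB_ATLAS_PUBLIC_API_KEY="<insert your public key here>"
   export MONGODB_ATLAS_PRIVATE_API_KEY="<insert your private key here>"
   export MONGODB_ATLAS_ORG_ID="<insert your organization ID here>"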
Create One Deployment Per Project
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. tabs::

.. tab:: Dev and Test Environments
:tabid: devtest

For your development and testing environments, run the following command for each
project. Change the IDs and names to use your values:

.. include:: /includes/examples/cli-example-create-clusters-devtest.rst
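The included command isn't reproduced here. As a rough sketch, a single-region,
three-node ``M10`` cluster like the one this guide recommends can be created with a
command along the following lines; the cluster name, project ID, provider, and region
are placeholder assumptions:

.. code-block::

   atlas clusters create myDevCluster \
     --projectId <project-id> \
     --provider AWS \
     --region US_EAST_1 \
     --tier M10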
.. tab:: Staging and Prod Environments
:tabid: stagingprod

For your staging and production environments, create the following ``cluster.json``
file for each project. Change the IDs and names to use your values:

.. include:: /includes/examples/cli-json-example-create-clusters.rst

After you create the ``cluster.json`` file, run the following command for each
project. The command uses the ``cluster.json`` file to create a cluster.

.. include:: /includes/examples/cli-example-create-clusters-stagingprod.rst

For more configuration options and info about this example,
see :ref:`atlas-clusters-create`.

.. tab:: Terraform
:tabid: Terraform

.. note::

Before you can create resources with Terraform, you must:

- :atlas:`Create your paying organization
</billing/#configure-a-paying-organization>` and :atlas:`create an API key </configure-api-access/>` for the
paying organization. Store your API key as environment
variables by running the following command in the terminal:

.. code-block::

   export MONGODB_ATLAS_PUBLIC_KEY="<insert your public key here>"
   export MONGODB_ATLAS_PRIVATE_KEY="<insert your private key here>"

- `Install Terraform
<https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli>`__
Create the Projects and Deployments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. tabs::

.. tab:: Dev and Test Environments
:tabid: devtest

For your development and testing environments, create the following files for each
application and environment pair. Place the files for each application and environment
pair in their own directory. Change the IDs and names to use your values:

main.tf
```````

.. include:: /includes/examples/tf-example-main-devtest.rst

variables.tf
````````````

.. include:: /includes/examples/tf-example-variables.rst

terraform.tfvars
````````````````

.. include:: /includes/examples/tf-example-tfvars-devtest.rst

provider.tf
```````````

.. include:: /includes/examples/tf-example-provider.rst
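The included ``provider.tf`` isn't reproduced here. As a rough sketch, a minimal
configuration declares the ``mongodbatlas`` provider and relies on the
``MONGODB_ATLAS_PUBLIC_KEY`` and ``MONGODB_ATLAS_PRIVATE_KEY`` environment variables
that you exported earlier, so no credentials appear in the file:

.. code-block::

   terraform {
     required_providers {
       mongodbatlas = {
         source = "mongodb/mongodbatlas"
       }
     }
   }

   # Credentials are read from the MONGODB_ATLAS_PUBLIC_KEY and
   # MONGODB_ATLAS_PRIVATE_KEY environment variables.
   provider "mongodbatlas" {}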
After you create the files, navigate to each application and environment pair's
directory and run the following command to initialize Terraform:

.. code-block::

   terraform init

Run the following command to view the Terraform plan:

.. code-block::

   terraform plan

Run the following command to create one project and one deployment for the application
and environment pair. The command uses the files and the |service-terraform| to create
the projects and clusters:

.. code-block::

   terraform apply

When prompted, type ``yes`` and press :kbd:`Enter` to apply
the configuration.
.. tab:: Staging and Prod Environments
:tabid: stagingprod

For your staging and production environments, create the following files for each
application and environment pair. Place the files for each application and environment
pair in their own directory. Change the IDs and names to use your values:

main.tf
```````

.. include:: /includes/examples/tf-example-main-stagingprod.rst

variables.tf
````````````

.. include:: /includes/examples/tf-example-variables.rst

terraform.tfvars
````````````````

.. include:: /includes/examples/tf-example-tfvars-stagingprod.rst

provider.tf
```````````

.. include:: /includes/examples/tf-example-provider.rst
After you create the files, navigate to each application and environment pair's
directory and run the following command to initialize Terraform:

.. code-block::

   terraform init

Run the following command to view the Terraform plan:

.. code-block::

   terraform plan

Run the following command to create one project and one deployment for the application
and environment pair. The command uses the files and the |service-terraform| to create
the projects and clusters:

.. code-block::

   terraform apply

When prompted, type ``yes`` and press :kbd:`Enter` to apply
the configuration.

For more configuration options and info about this example,
see |service-terraform| and the `MongoDB Terraform Blog Post
<https://www.mongodb.com/developer/products/atlas/deploy-mongodb-atlas-terraform-aws/>`__.

source/includes/shared-settings-clusters-devtest.rst

Lines changed: 1 addition & 0 deletions
@@ -2,6 +2,7 @@
- Cluster tier set to ``M10`` for a dev/test environment. Use the
:ref:`cluster size guide <arch-center-cluster-size-guide>` to learn
the recommended cluster tier for your application size.
- Uses the Single Region, 3 Node Replica Set / Shard deployment topology.

Our examples use |aws|, |azure|, and {+gcp+}
interchangeably. You can use any of these three cloud providers, but
