@@ -14,7 +14,7 @@ High Availability

Consult this page to plan the appropriate cluster configuration that optimizes
your availability and performance while aligning with your enterprise's
- billing and access needs.
+ cost controls and access needs.

{+service+} Features and Recommendations for High Availability
---------------------------------------------------------------
@@ -23,8 +23,8 @@ Features
~~~~~~~~

When you launch a new cluster in |service|, {+service+} automatically configures a
- minimum three-node replica set and distributes it across availability zones . If a
- primary member experiences an outage, |service| automatically detects this failure
+ minimum three-node replica set and distributes it across the region you deploy to. If a
+ primary member experiences an outage, the MongoDB database automatically detects this failure
and elects a secondary member as a replacement, and promotes this secondary member
to become the new primary. |service| then restores or replaces the failing member
to ensure that the cluster is returned to its target configuration as soon as possible.
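+
+ As a minimal sketch of how an application can ride through such a failover, the following assumes a
+ PyMongo client and a hypothetical connection string; the driver rediscovers the newly elected
+ primary on its own, and retryable writes allow an interrupted write to be retried once.
+
+ .. code-block:: python
+
+    from pymongo import MongoClient
+
+    # Hypothetical SRV connection string; substitute your own cluster's string.
+    client = MongoClient(
+        "mongodb+srv://cluster0.example.mongodb.net/",
+        retryWrites=True,   # retry a write once if an election interrupts it
+        w="majority",       # acknowledge writes on a majority of members
+    )
+
+    # During a failover the driver briefly blocks, locates the new primary,
+    # and completes the write without any application-side changes.
+    client["inventory"]["orders"].insert_one({"sku": "abc-123", "qty": 1})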
@@ -56,67 +56,37 @@ ensure that {+service+} stores data in a particular geographical area.
Recommended Deployment Topologies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Single Region, 3 Node Replica Set / Shard (PSS )
- ````````````````````````````````````````````````
+ Single Region, 3 Node Replica Set / Shard (Primary Secondary Secondary)
+ ```````````````````````````````````````````````````````````````````````
This topology is appropriate if low latency is required but high availability requirements
- are average . This topology can tolerate any 1 node failure, easily satify majority write
- concern with in- region secondaries, maintain a primary in the preferred region upon any node
- failure, is inexpensive , and the least complicated. The reads from secondaries in this topology
- encounter low latency in your preferred region.
+ are limited to a single region. This topology can tolerate any 1 node failure and easily satisfy majority write
+ concern with in-region secondaries. It maintains a primary in the preferred region upon any node
+ failure, limits cost, and is the least complicated from an application architecture perspective.
+ Reads from secondaries and writes to the primary in this topology encounter low latency in your preferred region.

This topology, however, can't tolerate a regional outage.
- 2-Region, 3 Node Replica Set / Shard (PS - S)
- ``````````````````````````````````````````````
- This topology is a highly available cluster configuration, with strong performance and availability.
- This topology can easily satisfy majority write concern with in-region secondary, tolerate any
- 1 node failure, maintain a primary in the preferred region upon any node failure, is inexpensive,
- and can tolerate regional outage from the least preferred region. The reads from one secondary in
- this topology also encounter low latency in the preferred region.
-
- This topology, however, can't tolerate a regional outage from a preferred region without manual
- intervention, such as adding additional nodes to the least-preferred region. Furthermore, the reads
- from secondaries in a specific region is limited to a single node, and there is potential data loss
- during regional outage.
-
- 3-Region, 3 Node Replica Set / Shard (P - S - S)
- ``````````````````````````````````````````````````
- This topology is a typical multi-regional topology where high availability and data durability are
- more important than write latency performance. This topology can tolerate any 1 node failure, any
- 1 region outage, and is inexpensive.
-
- This topology, however, experiences higher latency because its majority write concern must replicate
- to a second region. Furthermore, a primary node failure shifts the primary out of the preferred region,
- and reads from secondaries always exist in a different region compared to the primary, and may exist
- outside of the preferred region.
-
- 3-Region, 5 Node Replica Set / Shard (PS - SS - S)
- ````````````````````````````````````````````````````
- This topology is a typical multi-regional topology for mission-critical applications that require
- in-region availability. This topology can tolerate any 2 nodes' failure, tolerate primary node failure
+ 3-Region, 3 Node Replica Set / Shard (Primary - Secondary - Secondary)
+ ``````````````````````````````````````````````````````````````````````
+ This topology is the standard multi-region topology that provides the high availability needed to tolerate a regional outage.
+ This topology can tolerate any 1 node failure or any 1 region outage, and it is the least expensive multi-region topology.
+
+ If the application requires high durability and the app server code sets the majority write concern
+ in the driver, this topology experiences higher latency for writes because they must replicate to a second region.
+ Additionally, reads from secondaries are always served from a different region than the preferred primary region
+ and require a different app server architecture. Furthermore, any failover moves the primary to a different region, and your
+ application architecture must adjust as a result.
+
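+ As a hedged illustration of the secondary-read point above, the sketch below (PyMongo, with a
+ hypothetical connection string) opts a collection into secondary reads; in this topology those
+ reads are always served from a region other than the preferred primary region.
+
+ .. code-block:: python
+
+    from pymongo import MongoClient
+    from pymongo.read_preferences import SecondaryPreferred
+
+    client = MongoClient("mongodb+srv://cluster0.example.mongodb.net/")
+
+    # Route reads to a secondary when one is available, accepting slightly
+    # stale data; writes still go to the primary in the preferred region.
+    orders = client["inventory"].get_collection(
+        "orders",
+        read_preference=SecondaryPreferred(),
+    )
+    doc = orders.find_one({"sku": "abc-123"})
+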
+ 3-Region, 5 Node Replica Set / Shard (Primary Secondary - Secondary Secondary - Secondary)
+ ``````````````````````````````````````````````````````````````````````````````````````````
+ This is the preferred topology because it balances high availability, performance, and cost across multiple regions.
+ This topology can tolerate the failure of any 2 nodes, tolerate a primary node failure
while keeping the new primary in the preferred region, and tolerate any 1 region outage.
- This topology, however, experiences higher latency because its majority write concern must replicate
- to a second region. Furthermore, reads from secondaries always exist in a different region compared
- to the primary, and may exist outside of the preferred region.
-
- 3-Region, 7 Node Replica Set / Shard (PSS - SSS - S)
- `````````````````````````````````````````````````````
- This topology is a non-typical multi-regional topology for mission-critical applications that require
- in-region availability. This topology can tolerate any 3 nodes' failure, tolerate primary node failure
- and a secondary node failure in the preferred region, and tolerate any 1 region outage.
-
- This topology, however, experiences higher latency because its majority write concern must replicate
- to a second region.
-
- 5-Region, 9 Node Replica Set / Shard (PS - SS - SS - SS - S)
- `````````````````````````````````````````````````````````````
- This topology is a non-typical multi-regional topology for the most availability-demanding applications
- that require in-region availability. This topology can tolerate any 4 nodes' failure, tolerate primary
- node failure while keeping the new primary in the preferred region, and tolerate any 2 region outages.
-
- This topology, however, experiences higher latency because its majority write concern must replicate
- to two additional regions (3 regions in total).
+ If the application requires high durability and the app server code sets the majority write concern
+ in the driver, this topology experiences higher latency for writes because they must replicate to a second region.
+ To accommodate a regional outage, the application tier must also be configured to fail over and shift work to
+ the elected primary.
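+
+ For reference, the driver setting mentioned above can look like the following minimal sketch
+ (PyMongo, with a hypothetical connection string): the collection is opted into a majority write
+ concern, so each write is acknowledged only after a majority of members has replicated it.
+
+ .. code-block:: python
+
+    from pymongo import MongoClient
+    from pymongo.write_concern import WriteConcern
+
+    client = MongoClient("mongodb+srv://cluster0.example.mongodb.net/")
+
+    # Majority acknowledgment increases durability at the cost of write latency,
+    # because the acknowledgment must cross a region boundary in this topology.
+    orders = client["inventory"].get_collection(
+        "orders",
+        write_concern=WriteConcern(w="majority", wtimeout=5000),
+    )
+    orders.insert_one({"sku": "abc-123", "qty": 1})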

Examples
--------