Commit 410ccd6

Author: Ed Costello

DOCS-120 first draft data center awareness
1 parent 04d3895 commit 410ccd6

1 file changed: +126 -0 lines

draft/multi-data-center-awareness.txt

.. default-domain:: mongodb

==================================
Data Center Awareness with MongoDB
==================================

Data center awareness and multi-data center awareness are attributes of
distributed systems that account for requirements and capabilities both
within a single data center and across data centers. MongoDB provides
several features that can take advantage of resources within a single
data center or across multiple data centers.

MongoDB supports data center awareness through:

* Driver-level :ref:`Read Preferences <read-preference>` and :ref:`Write Concerns <replica-set-write-concern>`
* :ref:`Replica Set Tags <replica-set-configuration-tag-sets>`
* :ref:`Tag Aware Sharding <tag-aware-sharding>`

.. seealso::

   A review of :doc:`/core/replication` and :doc:`/core/sharding`
   will help you to understand MongoDB's data center awareness
   features.

Data Center Awareness Through Read Preferences and Write Concerns
-----------------------------------------------------------------

Read preferences and write concerns are driver features that control the
conditions under which a driver reads from or writes to a MongoDB
database. Verify that the specific driver you are using supports these
features before relying on them for data center awareness.

Read preferences control whether your application reads from the
primary, a secondary, or the *nearest* MongoDB instance in the current
topology of the network.
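
For example, a minimal sketch in the :program:`mongo` shell, assuming a
hypothetical ``records`` collection:

.. code-block:: javascript

   // Route all reads on this connection to a secondary member.
   db.getMongo().setReadPref("secondary")

   // Or set the mode for a single query with the readPref() cursor method.
   db.records.find().readPref("nearest")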

Write concerns control how durable the replication of your data is,
ranging from **fire and forget**, through assurance that at least one
replica set member has the data, to assurance that a majority or all
members of a replica set have copies of the data. A MongoDB write
concern is not comparable to a transaction; it controls how long your
client application will wait for confirmation that an update has
propagated to however many replica set members you require.
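
As a sketch, in the :program:`mongo` shell you can request this
acknowledgement with the ``getLastError`` command that the drivers wrap
(the ``records`` collection is hypothetical):

.. code-block:: javascript

   db.records.insert({ status: "new" })

   // Block until a majority of replica set members have the write,
   // or until 5 seconds have elapsed.
   db.runCommand({ getLastError: 1, w: "majority", wtimeout: 5000 })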

Read Preferences and Write Concerns are documented in
:doc:`/applications/replication`.

In the context of data center awareness, read preferences can be used to
control whether a client reads from a primary, a secondary, the nearest
replica set member, or a custom preference composed of
:ref:`replica set tags <replica-set-configuration-tag>`.

In parallel, you can set the write concern on a connection to ensure
that data has been written to one or more members of the replica set, to
a majority of members of the replica set, or to a custom write concern
composed of replica set tags and the number of tagged members that must
acknowledge the success of the write.

While write concerns can be set on a per-connection basis, you can also
define a default write concern for a replica set by updating the
:data:`~local.system.replset.settings.getLastErrorDefaults`
setting of the replica set.
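
A minimal sketch of setting such a default from the :program:`mongo`
shell (the specific values are assumptions for illustration):

.. code-block:: javascript

   // Reconfigure the replica set so that, by default, writes must be
   // acknowledged by a majority of members within 5 seconds.
   cfg = rs.conf()
   cfg.settings = cfg.settings || {}
   cfg.settings.getLastErrorDefaults = { w: "majority", wtimeout: 5000 }
   rs.reconfig(cfg)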

See :ref:`replica-set-configuration-tag-sets` for sample configurations
of replica sets with tags.

Within a data center, you can use a combination of read preferences and
write concerns to control where your data is written and where it is
read.

For example, you could direct all read activity from a reporting tool to
secondary members of a replica set by using a read preference of
:readmode:`secondaryPreferred`, lessening the workload on your primary.
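
A sketch of how the reporting tool's connection might opt in from the
:program:`mongo` shell:

.. code-block:: javascript

   // Prefer secondaries for this connection's reads, falling back
   // to the primary if no secondary is available.
   db.getMongo().setReadPref("secondaryPreferred")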

If your replica set members are distributed across multiple data
centers, you can use :readmode:`nearest` to direct reads to the closest
replica set member, which may be either a primary or a secondary member
of the replica set.
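
You can also combine :readmode:`nearest` with a tag set; a sketch,
assuming members carry a hypothetical ``dc`` tag:

.. code-block:: javascript

   // Prefer the lowest-latency member among those tagged as
   // belonging to the east coast data center.
   db.getMongo().setReadPref("nearest", [ { "dc": "east" } ])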

Write operations always go first to the primary member of the
replica set. The :term:`write concern` affects how long your client will
wait for confirmation that the data has been written to additional
replica set members.

As of the :ref:`Fall 2012 driver update <driver-write-concern-change>`,
officially supported MongoDB drivers will wait for confirmation that
updates have been committed on the primary. You can wait for additional
members to acknowledge a write by specifying a number or ``w:majority``
on your connection, or by changing the
:data:`~local.system.replset.settings.getLastErrorDefaults`
setting of the replica set.

Replica set tags can be combined with write operations to wait for
confirmation that an update has been committed to members of the replica
set matching the specified tags.

For example, you could tag each replica set member with the rack it
occupies in a data center, or with its type of storage media (spinning
disk, solid state drive, etc.).
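
A sketch of what such a configuration might look like in the
:program:`mongo` shell; the ``rack`` tags and the ``twoRacks`` mode name
are hypothetical:

.. code-block:: javascript

   cfg = rs.conf()

   // Tag each member with the rack it occupies.
   cfg.members[0].tags = { "rack": "a" }
   cfg.members[1].tags = { "rack": "b" }
   cfg.members[2].tags = { "rack": "c" }

   // Define a custom write concern mode that is satisfied only when
   // members with two distinct rack values have acknowledged a write.
   cfg.settings = cfg.settings || {}
   cfg.settings.getLastErrorModes = { twoRacks: { "rack": 2 } }
   rs.reconfig(cfg)

   // Writers can then wait on the custom mode by name.
   db.runCommand({ getLastError: 1, w: "twoRacks" })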

.. TODO:: several examples TK.

.. _tag-aware-sharding:

Multi Data Center Awareness Through Tag Aware Sharding
------------------------------------------------------

Shard tagging controls data location, and is complementary to but separate
from replica set tagging. Where replica set tags affect read and write
operations within a given replica set, shard tags determine which
replica sets contain tagged ranges of data.

Given a shard key with good cardinality, you can assign tags to ranges
of the shard key, and then assign those tagged ranges to specific
shards.
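
A minimal sketch using the :program:`mongo` shell helpers, where the
shard name, namespace, and ``NYC`` tag are hypothetical:

.. code-block:: javascript

   // Mark which shard may hold data tagged NYC.
   sh.addShardTag("shard0000", "NYC")

   // Pin a range of the shard key to that tag.
   sh.addTagRange("records.users",
                  { zipcode: "10001" },
                  { zipcode: "10300" },
                  "NYC")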

For example, given a collection whose documents contain a country and a
national sub-division (state, province), and a shard key composed of
those fields plus some unique identifier (perhaps the BSON ObjectId of
the document), you could assign tag ranges to the shard key based on
country and sub-division, and then assign those ranges to shards whose
primaries are geographically closest to those locations.
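
A sketch of that layout; the namespace, field names, shard name, and tag
are all hypothetical:

.. code-block:: javascript

   sh.shardCollection("records.addresses",
                      { country: 1, province: 1, _id: 1 })

   // The primary of shard0001 sits in a Canadian data center.
   sh.addShardTag("shard0001", "CA")

   // Route all Canadian documents to that shard.
   sh.addTagRange("records.addresses",
                  { country: "CA", province: MinKey, _id: MinKey },
                  { country: "CA", province: MaxKey, _id: MaxKey },
                  "CA")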

.. TODO:: examples
