Commit 57218ea

DOCSP-30436: Added ignoreNullValues (#162)
1 parent 445e8a7 commit 57218ea


3 files changed: +62 -48 lines changed


source/configuration/write.txt

Lines changed: 51 additions & 47 deletions
@@ -31,40 +31,57 @@ The following options for writing to MongoDB are available:
    * - Property name
      - Description

-   * - ``mongoClientFactory``
-     - | MongoClientFactory configuration key.
-       | You can specify a custom implementation which must implement the
-         ``com.mongodb.spark.sql.connector.connection.MongoClientFactory``
-         interface.
-       |
-       | **Default:** ``com.mongodb.spark.sql.connector.connection.DefaultMongoClientFactory``
+   * - ``collection``
+     - | **Required.**
+       | The collection name configuration.

    * - ``connection.uri``
      - | **Required.**
        | The connection string configuration key.
        |
        | **Default:** ``mongodb://localhost:27017/``

+   * - ``convertJson``
+     - | When ``true``, the connector parses the string and converts extended JSON
+         into BSON.
+       |
+       | **Default:** ``false``
+
    * - ``database``
      - | **Required.**
        | The database name configuration.

-   * - ``collection``
-     - | **Required.**
-       | The collection name configuration.
+   * - ``idFieldList``
+     - | Field or list of fields by which to split the collection data. To
+         specify more than one field, separate them using a comma as shown
+         in the following example:

+       .. code-block:: none
+          :copyable: false
+
+          "fieldName1,fieldName2"
+
+       | **Default:** ``_id``
+
+   * - ``ignoreNullValues``
+     - | When ``true``, the connector ignores any ``null`` values when writing,
+         including ``null`` values in arrays and nested documents.
+       |
+       | **Default:** ``false``

    * - ``maxBatchSize``
      - | Specifies the maximum number of operations to batch in bulk
          operations.
-
        |
        | **Default:** ``512``

-   * - ``ordered``
-     - | Specifies whether to perform ordered bulk operations.
+   * - ``mongoClientFactory``
+     - | MongoClientFactory configuration key.
+       | You can specify a custom implementation which must implement the
+         ``com.mongodb.spark.sql.connector.connection.MongoClientFactory``
+         interface.
        |
-       | **Default:** ``true``
+       | **Default:** ``com.mongodb.spark.sql.connector.connection.DefaultMongoClientFactory``

    * - ``operationType``
      - | Specifies the type of write operation to perform. You can set
@@ -83,26 +100,19 @@ The following options for writing to MongoDB are available:
        |
        | **Default:** ``replace``

-   * - ``idFieldList``
-     - | Field or list of fields by which to split the collection data. To
-         specify more than one field, separate them using a comma as shown
-         in the following example:
-
-       .. code-block:: none
-          :copyable: false
-
-          "fieldName1,fieldName2"
-
-       | **Default:** ``_id``
+   * - ``ordered``
+     - | Specifies whether to perform ordered bulk operations.
+       |
+       | **Default:** ``true``

-   * - ``writeConcern.w``
-     - | Specifies ``w``, a write-concern option to request acknowledgment
-         that the write operation has propogated to a specified number of
-         MongoDB instances. For a list
-         of allowed values for this option, see :manual:`WriteConcern
-         </reference/write-concern/#w-option>` in the MongoDB manual.
+   * - ``upsertDocument``
+     - | When ``true``, replace and update operations will insert the data
+         if no match exists.
        |
-       | **Default:** ``1``
+       | For time series collections, you must set ``upsertDocument`` to
+         ``false``.
+       |
+       | **Default:** ``true``

    * - ``writeConcern.journal``
      - | Specifies ``j``, a write-concern option to enable request for
@@ -114,6 +124,15 @@ The following options for writing to MongoDB are available:
          guide on the
          :manual:`WriteConcern j option </reference/write-concern/#j-option>`.

+   * - ``writeConcern.w``
+     - | Specifies ``w``, a write-concern option to request acknowledgment
+         that the write operation has propagated to a specified number of
+         MongoDB nodes. For a list
+         of allowed values for this option, see :manual:`WriteConcern
+         </reference/write-concern/#w-option>` in the MongoDB manual.
+       |
+       | **Default:** ``1``
+
    * - ``writeConcern.wTimeoutMS``
      - | Specifies ``wTimeoutMS``, a write-concern option to return an error
          when a write operation exceeds the number of milliseconds. If you
@@ -123,21 +142,6 @@ The following options for writing to MongoDB are available:
          guide on the
          :manual:`WriteConcern wtimeout option </reference/write-concern/#wtimeout>`.

-   * - ``upsertDocument``
-     - | When ``true``, replace and update operations will insert the data
-         if no match exists.
-       |
-       | For time series collections, you must set ``upsertDocument`` to
-         ``false``.
-       |
-       | **Default:** ``true``
-
-   * - ``convertJson``
-     - | When ``true``, the connector parses the string and converts extended JSON
-         into BSON.
-       |
-       | **Default:** ``false``
-
 .. _configure-output-uri:

 ``connection.uri`` Configuration Setting
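
For context, the options in the table above are typically passed as options on the Spark DataFrame writer. The following PySpark sketch, using placeholder connection details, database, collection, and data, shows how the new ``ignoreNullValues`` option might be set alongside the required ``database`` and ``collection`` options:

   from pyspark.sql import SparkSession

   # Placeholder session and data; adjust the connection details for your deployment.
   spark = SparkSession.builder.appName("write-options-example").getOrCreate()

   people = spark.createDataFrame(
       [("Alice", 34), ("Bob", None)],  # Bob's age is null
       ["name", "age"],
   )

   (people.write
       .format("mongodb")                     # Spark Connector 10.x data source name
       .mode("append")
       .option("connection.uri", "mongodb://localhost:27017/")
       .option("database", "example_db")      # required
       .option("collection", "example_coll")  # required
       .option("ignoreNullValues", "true")    # omit null fields instead of writing them
       .save())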

source/release-notes.txt

Lines changed: 7 additions & 0 deletions
@@ -2,6 +2,13 @@
 Release Notes
 =============

+MongoDB Connector for Spark 10.2.0
+----------------------------------
+
+- Added the ``ignoreNullValues`` write configuration property, which enables you
+  to control whether the connector ignores null values. In previous versions,
+  the connector always wrote ``null`` values to MongoDB.
+
 MongoDB Connector for Spark 10.1.1
 ----------------------------------

source/write-to-mongodb.txt

Lines changed: 4 additions & 1 deletion
@@ -43,4 +43,7 @@ Write to MongoDB
 .. important::

    If your write operation includes a field with a ``null`` value,
-   the connector writes the field name and ``null`` value to MongoDB.
+   the connector writes the field name and ``null`` value to MongoDB. You can
+   change this behavior by setting the write configuration property
+   ``ignoreNullValues``. For more information about setting the connector's
+   write behavior, see :ref:`Write Configuration Options <spark-write-conf>`.
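
To illustrate the behavior described in this note, here is a minimal, hypothetical PySpark comparison (connection details, names, and data are placeholders): with the default settings, Bob's document is written with an explicit ``age`` field set to ``null``; with ``ignoreNullValues`` set to ``true``, the ``age`` field is omitted.

   from pyspark.sql import SparkSession

   spark = SparkSession.builder.appName("null-handling-example").getOrCreate()
   people = spark.createDataFrame([("Alice", 34), ("Bob", None)], ["name", "age"])

   # Default behavior: the connector writes "age": null for Bob.
   (people.write.format("mongodb").mode("append")
       .option("connection.uri", "mongodb://localhost:27017/")
       .option("database", "example_db")
       .option("collection", "with_nulls")
       .save())

   # With ignoreNullValues enabled, null fields are not written, so Bob's
   # document is stored without an "age" field.
   (people.write.format("mongodb").mode("append")
       .option("connection.uri", "mongodb://localhost:27017/")
       .option("database", "example_db")
       .option("collection", "without_nulls")
       .option("ignoreNullValues", "true")
       .save())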
