
Commit ba452f3

(DOCSP-21084) Support for pushed filters (#93)
* (DOCSP-21084) Support for pushed filters
1 parent: 06157d4

2 files changed: 24 additions & 6 deletions

source/includes/pushed-filters.rst

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
+When using filters with DataFrames or Datasets, the
+underlying MongoDB Connector code constructs an :manual:`aggregation
+pipeline </core/aggregation-pipeline/>` to filter the data in
+MongoDB before sending it to Spark. This improves Spark performance
+by retrieving and processing only the data you need.
+
+The MongoDB Spark Connector turns the following filters into
+aggregation pipeline stages:
+
+- And
+- EqualNullSafe
+- EqualTo
+- GreaterThan
+- GreaterThanOrEqual
+- In
+- IsNull
+- LessThan
+- LessThanOrEqual
+- Not
+- Or
+- StringContains
+- StringEndsWith
+- StringStartsWith
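
As a concrete illustration of the behavior this include file documents, here is a minimal PySpark sketch of a query whose predicates the connector can push down. The ``mongodb`` format name, connection URI, database, collection, and field names are illustrative assumptions, not taken from this commit.

```python
# Minimal sketch: read from MongoDB and filter with predicates the
# connector can push down. The format name ("mongodb"), URI, database,
# collection, and fields below are assumed for illustration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("pushed-filters-example")
    .getOrCreate()
)

df = (
    spark.read.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")
    .option("database", "test")
    .option("collection", "people")
    .load()
)

# Both predicates map to supported filters from the list above
# (GreaterThan via ">", StringStartsWith via startswith), so MongoDB can
# filter documents in an aggregation pipeline before sending them to Spark.
filtered = df.filter((df.age > 21) & (df.name.startswith("A")))

# The physical plan shows which filters were pushed down.
filtered.explain()
```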

source/python/filters-and-sql.txt

Lines changed: 1 addition & 6 deletions
@@ -1,12 +1,7 @@
 Filters
 -------
 
-.. note::
-
-   When using filters with DataFrames or the Python API, the
-   underlying Mongo Connector code constructs an :manual:`aggregation
-   pipeline </core/aggregation-pipeline/>` to filter the data in
-   MongoDB before sending it to Spark.
+.. include:: includes/pushed-filters.rst
 
 Use ``filter()`` to read a subset of data from your MongoDB collection.
 