
mongo-datatables


Server-side processing for jQuery DataTables with MongoDB.

Translates DataTables Ajax requests into MongoDB aggregation pipelines, handling pagination, sorting, filtering, search, SearchPanes, SearchBuilder, and Editor (full CRUD).

Installation

pip install mongo-datatables
# or
uv add mongo-datatables

Quick Start

from pymongo import MongoClient
from mongo_datatables import DataTables, DataField

db = MongoClient("mongodb://localhost:27017/")["mydb"]

data_fields = [
    DataField('title', 'string'),
    DataField('artist', 'string'),
    DataField('year', 'number'),
    DataField('genre', 'string'),
]

# args is the DataTables Ajax request body (a dict)
result = DataTables(db, 'albums', args, data_fields).get_rows()

db is any PyMongo Database object. args is the JSON body from a DataTables Ajax POST. get_rows() returns the standard server-side response:

{
    "draw": 1,
    "recordsTotal": 1000000,
    "recordsFiltered": 4821,
    "data": [...]
}

Framework Examples

Flask

@app.route('/api/data', methods=['POST'])
def data():
    args = request.get_json()
    dt = DataTables(db, 'albums', args, data_fields)
    return jsonify(dt.get_rows())

FastAPI

@app.post('/api/data')
async def data(request: Request):
    args = await request.json()
    dt = DataTables(db, 'albums', args, data_fields)
    return JSONResponse(dt.get_rows())

Django

class DataView(View):
    def post(self, request):
        args = json.loads(request.body)
        dt = DataTables(db, 'albums', args, data_fields)
        return JsonResponse(dt.get_rows())

Litestar

@post('/api/data')
async def data(request: Request) -> dict:
    args = await request.json()
    return DataTables(db, 'albums', args, data_fields).get_rows()

Quart

@app.route('/api/data', methods=['POST'])
async def data():
    args = await request.get_json()
    dt = DataTables(db, 'albums', args, data_fields)
    return jsonify(dt.get_rows())

DataField

DataField(name, data_type, alias=None) maps a MongoDB field to a DataTables column.

DataField('title', 'string')               # basic field
DataField('release_date', 'date')          # date comparison
DataField('track_count', 'number')         # numeric comparison
DataField('PublisherInfo.label', 'string', 'label')  # nested + alias
DataField('_id', 'objectid')              # string in response

Valid types:

Type                   Search behaviour                      Uses index?          Operators
keyword                Exact equality match                  Yes (regular index)
string                 Case-insensitive regex (substring)    No
number                 Exact equality or numeric comparison  Yes (regular index)  > >= < <= =
date                   Date comparison (ISO YYYY-MM-DD)      Yes (regular index)  > >= < <= =
array                  Regex against array elements          No
objectid               Serialized as string in response
boolean, object, null  Treated as string (regex)             No

Use keyword for categorical/code fields (country codes, status values, tags) where exact matching is always intended and index performance matters. Use string for free-text fields where substring and partial matching is useful.

# country:US  →  {"country_code": "US"}  — uses index
DataField('country_code', 'keyword')
# name:york   →  regex, finds "New York", "Yorkshire"
DataField('name',         'string')
# year:>1990  →  {"year": {"$gt": 1990}}  — uses index
DataField('year',         'number')
# released:>=2020-01-01  — uses index
DataField('released',     'date')
# serialized as string in response
DataField('_id',          'objectid')
# nested field with UI alias
DataField('PublisherInfo.label', 'string', 'label')

The alias is the name DataTables uses for the column (columns[i][data]). It defaults to the last segment of the field path (PublisherInfo.label → label).
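The default can be illustrated in plain Python (default_alias below is a hypothetical helper written for illustration, not a function the library exports):

```python
# Illustration of the documented default: the alias falls back to the
# last segment of a dotted field path.
def default_alias(field_path: str) -> str:
    return field_path.split(".")[-1]

print(default_alias("PublisherInfo.label"))  # → label
print(default_alias("title"))                # → title
```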


Search

Search is where this library earns its keep. The global search box supports several modes:

Text index search (fast)

When a MongoDB text index exists, global search uses $text — fast even on multi-million-row collections:

db.albums.create_index([
    ("title", "text"),
    ("artist", "text"),
    ("genre", "text"),
])

Without a text index, the library falls back to per-column regex (much slower on large collections).
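The difference between the two paths can be sketched as plain filter documents. This is an illustration of the two query shapes, not the library's internals:

```python
# With a text index: a single $text stage that MongoDB resolves via the index.
def text_filter(term: str) -> dict:
    return {"$text": {"$search": term}}

# Without one: a case-insensitive regex per searchable column, OR'd together,
# which forces a scan of every candidate document.
def regex_fallback(term: str, columns: list[str]) -> dict:
    return {"$or": [{c: {"$regex": term, "$options": "i"}} for c in columns]}

print(text_filter("floyd"))
print(regex_fallback("floyd", ["title", "artist", "genre"]))
```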

Phrase search

Wrap in quotes for exact phrase matching:

"Dark Side of the Moon"

Multi-word AND search

With search[smart]=true (DataTables default), each word must match at least one searchable column:

pink floyd 1973   →  all three terms must appear across the row

Colon syntax — field-specific search

Target a specific field without needing a separate input:

artist:Bowie                →  artist contains "Bowie" (regex)
artist:"David Bowie"        →  exact phrase in artist field
country_code:US             →  equals "US" (keyword, uses index)
year:1972                   →  equals 1972 (number, uses index)
year:>1990                  →  greater than
year:>=1990 year:<2000      →  combine conditions (ANDed)
release_date:>2020-01-01    →  date comparison

Column search with ranges

Per-column search supports pipe-delimited min|max for numbers and dates:

1990|2000          →  1990 ≤ year ≤ 2000
2020-01-01|2020-12-31
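The min|max semantics translate naturally to $gte/$lte. A sketch with a hypothetical helper (either bound may be left empty to make the range open-ended, an assumption of this illustration):

```python
# Turn a "min|max" column search value into a MongoDB range filter.
def range_filter(field: str, value: str, cast=int) -> dict:
    lo, hi = value.split("|", 1)
    cond = {}
    if lo:
        cond["$gte"] = cast(lo)
    if hi:
        cond["$lte"] = cast(hi)
    return {field: cond}

print(range_filter("year", "1990|2000"))  # {'year': {'$gte': 1990, '$lte': 2000}}
print(range_filter("released", "2020-01-01|2020-12-31", cast=str))
```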

Regex mode

Set search[regex]=true to treat the search value as a raw MongoDB regex:

^Dark             →  starts with "Dark"
(Floyd|Bowie)     →  matches either

Case sensitivity

Case-insensitive by default. Pass search[caseInsensitive]=false for case-sensitive matching. Per-column override via columns[i][search][caseInsensitive].
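In filter terms, the flag toggles MongoDB's "i" regex option. A minimal sketch (regex_filter is a hypothetical helper for illustration):

```python
# The two regex shapes the caseInsensitive flag switches between.
def regex_filter(field: str, value: str, case_insensitive: bool = True) -> dict:
    cond = {"$regex": value}
    if case_insensitive:
        cond["$options"] = "i"  # MongoDB case-insensitive regex option
    return {field: cond}

print(regex_filter("artist", "bowie"))                          # would match "Bowie"
print(regex_filter("artist", "bowie", case_insensitive=False))  # "bowie" only
```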


SearchPanes

No server-side configuration needed — call get_searchpanes_options() to populate panes on page load:

@app.route('/searchpanes', methods=['POST'])
def searchpanes():
    dt = DataTables(db, 'albums', request.get_json(), data_fields)
    return jsonify(dt.get_searchpanes_options())

SearchBuilder

Full server-side support with nested AND/OR criteria trees. Works automatically — no extra configuration needed.
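Conceptually, a criteria tree maps onto nested $and/$or documents. The sketch below is a simplified, hypothetical translation (leaf criteria are reduced to equality only, and the origData/value1/logic keys are a condensed form of the SearchBuilder request); the library performs the full translation automatically:

```python
# Recursively translate a nested AND/OR criteria tree into $and/$or.
def build_group(group: dict) -> dict:
    op = "$and" if group.get("logic", "AND") == "AND" else "$or"
    clauses = []
    for c in group.get("criteria", []):
        if "criteria" in c:                      # nested sub-group: recurse
            clauses.append(build_group(c))
        else:                                    # leaf: equality only, for brevity
            clauses.append({c["origData"]: c["value1"]})
    return {op: clauses}

tree = {"logic": "OR", "criteria": [
    {"origData": "genre", "value1": "Rock"},
    {"logic": "AND", "criteria": [
        {"origData": "artist", "value1": "Pink Floyd"},
        {"origData": "year", "value1": 1973},
    ]},
]}
print(build_group(tree))
```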


Sorting

Multi-column sorting, ColReorder (order[i][name] name-based ordering), and orderData column redirect are all supported.


Custom Filters

Scope all queries to a subset of the collection by passing extra filter criteria as keyword arguments:

DataTables(
    db, 'albums', args, data_fields,
    status='active', label='Merge Records',
)
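The effect is that the extra criteria are merged into every query's match stage. A sketch of that merge (base_match is illustrative, not the library's internals):

```python
# Custom filter kwargs always apply; search criteria layer on top.
def base_match(user_filter: dict, **extra) -> dict:
    match = dict(extra)
    match.update(user_filter)
    return match

print(base_match({"year": {"$gt": 1990}}, status="active", label="Merge Records"))
```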

Editor

Full CRUD support for DataTables Editor.

from mongo_datatables import Editor

@app.route('/editor', methods=['POST'])
def editor():
    data = request.get_json()
    result = Editor(
        db, 'albums', data,
        doc_id=request.args.get('id'),
        data_fields=data_fields,
    ).process()
    return jsonify(result)

Editor also handles action=search for autocomplete and tags field types:

@app.route('/editor/search', methods=['POST'])
def editor_search():
    data = request.get_json()
    editor = Editor(db, 'albums', data, data_fields=data_fields)
    return jsonify(editor.search())

Optional Editor parameters:

Parameter                      Description
validators                     dict mapping field names to callable(value) -> str|None
hooks                          pre_create, pre_edit, pre_remove callables; return falsy to cancel
options                        dict or zero-arg callable for select/radio/checkbox field options
dependent_handlers             dict mapping field names to callables for dependent-field Ajax
file_fields + storage_adapter  file upload support (subclass StorageAdapter)
row_class, row_data, row_attr  per-row metadata (static value or callable)
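The validator and hook shapes from the table can be sketched in plain Python. The parameter names come from the table above; the function bodies, and the assumption that a pre_remove hook receives the document, are illustrative:

```python
# Validator: return an error string to reject the value, or None to accept.
def validate_year(value):
    if not (1900 <= int(value) <= 2100):
        return "Year must be between 1900 and 2100"
    return None

# Hook: return falsy to cancel the operation (signature assumed here).
def pre_remove(doc):
    return doc.get("status") != "archived"

validators = {"year": validate_year}

print(validate_year("1973"))               # None → accepted
print(pre_remove({"status": "archived"}))  # False → deletion cancelled
```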

Performance & Indexes

For large collections, indexes are critical — the library uses aggregation pipelines on every request.

Text index

db.albums.create_index([
    ("title", "text"),
    ("artist", "text"),
    ("genre", "text"),
])

With a text index, global search runs in ~100–300ms on multi-million-row collections. Without one, regex fallback can take 5–10+ seconds.

MongoDB allows only one text index per collection, but it can cover multiple fields.

To force regex search even when a text index exists (for substring matching):

DataTables(db, 'albums', args, data_fields, use_text_index=False)

Regular indexes

Create indexes for fields used in sorting, column search, or custom filters:

db.albums.create_index("year")
db.albums.create_index("artist")
db.albums.create_index([("artist", 1), ("year", -1)])  # compound

Advanced

pipeline_stages — inject aggregation stages ($lookup, $addFields, $unwind) before the $match, useful for computed or joined fields.

allow_disk_use=True — pass allowDiskUse to aggregation pipelines when complex filters exceed MongoDB's 100 MB in-memory limit.

get_export_data() — returns all matching rows without pagination for CSV/Excel export.

row_id, row_class, row_data, row_attr — per-row DT_Row* metadata, accepts a static value or a callable receiving the raw document.

See the full documentation for details.
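For the export case, the rows can be written out with the standard library. A sketch, assuming get_export_data() returns a list of flat dicts (the rows literal below stands in for that result):

```python
import csv
import io

# Stand-in for the rows returned by get_export_data().
rows = [
    {"title": "The Dark Side of the Moon", "artist": "Pink Floyd", "year": 1973},
    {"title": "Hunky Dory", "artist": "David Bowie", "year": 1971},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```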


Development

Run tests:

uv run pytest tests/

Run with coverage:

uv run pytest --cov=mongo_datatables tests/ \
    --cov-report=term --cov-report=html

License

Released under the MIT License.
