
mkpipe-loader-mongodb

MongoDB loader plugin for MkPipe. Writes Spark DataFrames into MongoDB collections using the official MongoDB Spark Connector.

Documentation

For more detailed documentation, please visit the GitHub repository.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.


Connection Configuration

connections:
  mongo_target:
    variant: mongodb
    mongo_uri: 'mongodb://user:password@host:27017/mydb?authSource=admin'
    database: mydb

Alternatively, use individual parameters (the URI is constructed automatically):

connections:
  mongo_target:
    variant: mongodb
    host: localhost
    port: 27017
    database: mydb
    user: myuser
    password: mypassword
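
Assuming standard URI assembly (the exact string below is an illustration, not taken from the source), the parameters above would produce a connection URI along these lines:

mongodb://myuser:mypassword@localhost:27017/mydb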

Table Configuration

pipelines:
  - name: pg_to_mongo
    source: pg_source
    destination: mongo_target
    tables:
      - name: public.events
        target_name: stg_events
        replication_method: full

Write Parallelism

By default, Spark writes to MongoDB with as many parallel tasks as the DataFrame has partitions. You can control write parallelism with write_partitions, which coalesces the DataFrame before the write to reduce the number of open connections to MongoDB:

      - name: public.events
        target_name: stg_events
        replication_method: full
        write_partitions: 4     # coalesce DataFrame to N partitions before writing
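
Conceptually, write_partitions maps onto a coalesce applied just before the connector write. Below is a minimal PySpark sketch of the idea, assuming MongoDB Spark Connector 10.x option names; the loader's actual internals are not shown in this document:

# Sketch only: what write_partitions: 4 amounts to under the hood,
# assuming MongoDB Spark Connector 10.x (the "mongodb" format).
# df is an existing Spark DataFrame produced by the extract step.
(
    df.coalesce(4)  # shuffle-free reduction to 4 write partitions
      .write.format("mongodb")
      .option("connection.uri", "mongodb://myuser:mypassword@localhost:27017/mydb")
      .option("database", "mydb")
      .option("collection", "stg_events")
      .mode("append")
      .save()
)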

When to use write_partitions

  • Reduce connections: MongoDB has a connection limit per node. If Spark has many executors, each partition opens its own connection. Lowering write_partitions reduces connection count.
  • Increase throughput: A small number of large batches is generally faster than many small batches. A value of 4–8 is a good starting point.
  • coalesce vs repartition: coalesce avoids a shuffle (preferred for writes). If the source has very few partitions and you want to increase them, use repartition, but that requires a code-level change, not a YAML setting; see the sketch after this list.
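
For reference, the difference between the two operations in the standard Spark API (df is any existing DataFrame, and the partition counts are arbitrary examples):

# coalesce merges existing partitions without a shuffle; it can only
# lower the partition count, which makes it cheap before a write.
fewer = df.coalesce(4)

# repartition performs a full shuffle and can raise the count; use it
# when the source DataFrame arrives with too few partitions.
more = df.repartition(16)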

Performance Notes

  • Write speed is mostly limited by MongoDB's write capacity and network, not Spark.
  • write_partitions is most effective when reducing an already-large partition count.
  • For append-mode incremental loads the default partition count is usually fine.

All Table Parameters

Parameter           Type                Default   Description
name                string              required  Source table/collection name
target_name         string              required  MongoDB destination collection name
replication_method  full / incremental  full      Replication strategy
write_partitions    int                 (none)    Coalesce DataFrame to N partitions before writing
batchsize           int                 10000     Records per write batch
dedup_columns       list                (none)    Columns used for mkpipe_id hash deduplication
tags                list                []        Tags for selective pipeline execution
pass_on_error       bool                false     Skip table on error instead of failing
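
Putting several parameters together, a table entry might look like the following sketch. It uses only the parameters from the table above; public.orders, order_id, and the nightly tag are placeholder names:

      - name: public.orders
        target_name: stg_orders
        replication_method: incremental
        write_partitions: 4
        batchsize: 10000
        dedup_columns:
          - order_id
        tags:
          - nightly
        pass_on_error: true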
