
MongoDB aggregation pipelines made easy. Joins, grouping, counting and much more...


Overview

Monggregate is a library that aims to simplify the use of MongoDB aggregation pipelines in Python. It is built on top of the official MongoDB Python driver, pymongo, and on pydantic.

Features

  • provides an OOP interface to the aggregation pipeline
  • allows you to focus on your requirements rather than on MongoDB syntax
  • integrates the MongoDB documentation into its docstrings, so you can quickly refer to it without having to navigate to the website
  • enables autocompletion on the various MongoDB features
  • offers a pandas-style way to chain operations on data

Requirements

This package requires Python 3.10 or above and pydantic 1.8.0 or above.

Installation

The package is available on PyPI:

pip install monggregate
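
If you want to match the exact release documented on this page, you can pin the version (optional, a matter of preference):

pip install monggregate==0.16.1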

Usage

The examples below use the MongoDB sample_mflix sample database.

Basic Pipeline usage

import os

from dotenv import load_dotenv
import pymongo
from monggregate import Pipeline

# Load config from a .env file:
load_dotenv(verbose=True)
MONGODB_URI = os.environ["MONGODB_URI"]

# Connect to your MongoDB cluster:
client = pymongo.MongoClient(MONGODB_URI)

# Get a reference to the "sample_mflix" database:
db = client["sample_mflix"]

# Creating the pipeline
pipeline = Pipeline()

# The below pipeline will return the most recent movie with the title "A Star is Born"
pipeline.match(
    title="A Star is Born"
).sort(
    by="year",
    descending=True
).limit(
    value=1
)

# Executing the pipeline
results = db["movies"].aggregate(pipeline.export())

# aggregate returns a cursor; iterate it to see the matching documents
for doc in results:
    print(doc)
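
For reference, pipeline.export() returns the plain list of stage dictionaries that pymongo's aggregate method expects. For the pipeline above it should look roughly like this (a sketch of the expected output, not captured from the library):

[
    {"$match": {"title": "A Star is Born"}},
    {"$sort": {"year": -1}},
    {"$limit": 1}
]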

More advanced usage with MongoDB operators

import os

from dotenv import load_dotenv
import pymongo
from monggregate import Pipeline, S

# Load config from a .env file:
load_dotenv(verbose=True)
MONGODB_URI = os.environ["MONGODB_URI"]

# Connect to your MongoDB cluster:
client = pymongo.MongoClient(MONGODB_URI)

# Get a reference to the "sample_mflix" database:
db = client["sample_mflix"]


# Creating the pipeline
pipeline = Pipeline()
pipeline.match(
    year=S.type_("number") # Filtering out documents where the year field is not a number
).group(
    by="year",
    query={
        "movie_count": S.sum(1), # Counting the movies per year
        "movie_titles": S.push("$title")
    }
).sort(
    by="_id",
    descending=True
).limit(
    value=10
)

# Executing the pipeline
results = db["movies"].aggregate(pipeline.export())

for doc in results:
    print(doc)
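
Each document yielded by this pipeline represents one year, with the year in _id (this is what the sort on "_id" above relies on). The shape should roughly be as follows (values are purely illustrative):

{"_id": 2015, "movie_count": 42, "movie_titles": ["...", "..."]}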

Advanced usage with Expressions

from monggregate import Pipeline, S, Expression

pipeline = Pipeline()
pipeline.lookup(
    right="comments",   # The foreign collection to join with
    right_on="movie_id", # Join key on the comments side
    left_on="_id",       # Join key on the movies side
    name="related_comments" # Name of the array holding the joined comments
).add_fields(
    comment_count=Expression.field("related_comments").size() # Counting the comments per movie
).match(
    comment_count=S.gte(2) # Keeping only movies with at least two comments
)
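
Exported, this pipeline should correspond to raw stages along these lines (a sketch of the expected output, not captured from the library):

[
    {"$lookup": {
        "from": "comments",
        "localField": "_id",
        "foreignField": "movie_id",
        "as": "related_comments"
    }},
    {"$addFields": {"comment_count": {"$size": "$related_comments"}}},
    {"$match": {"comment_count": {"$gte": 2}}}
]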

Motivation

The main driver for building this package was how inconvenient it was for me to build aggregation pipelines with pymongo or any other tool.

pymongo, the official MongoDB driver for Python, offers no direct support for building aggregation pipelines.

It exposes an aggregate method, but the pipeline it takes is just a list of complex dictionaries that quickly becomes long, nested and overwhelming.
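
For example, the grouping pipeline from the usage section above looks roughly like this when written as raw dictionaries for pymongo (the form that monggregate abstracts away):

pipeline = [
    {"$match": {"year": {"$type": "number"}}},
    {"$group": {
        "_id": "$year",
        "movie_count": {"$sum": 1},
        "movie_titles": {"$push": "$title"}
    }},
    {"$sort": {"_id": -1}},
    {"$limit": 10}
]
results = db["movies"].aggregate(pipeline)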

In the end, such a pipeline is barely readable for the person who built it, let alone for other developers. Besides, during development it is often necessary to refer to the online documentation multiple times. The package therefore integrates the online documentation into the docstrings of its various classes and modules. Basically, the package mirrors every* stage and operator available in MongoDB.

*Actually, it only covers a subset of the available stages and operators. Please come and help me increase the coverage.

Roadmap

As of now, the package covers around 40% of the available stages and 25% of the available operators. I would argue that the most important ones are covered, but this is subjective. The goal is to quickly reach 100% coverage of both stages and operators. The source code integrates most of the online MongoDB documentation; if the online documentation evolves, it will need to be updated here as well. The current documentation is not consistent throughout the package and will need to be standardized later on. Some minor refactoring tasks are also required.

I have already noted a couple of issues myself; these are the next tasks that will be tackled.

Feel free to open an issue if you find a bug or want to propose an enhancement. Feel free to open a PR to propose new interfaces for stages that have not been dealt with yet.

Going further

  • Check out this GitHub repo for more examples.
  • Check out this tutorial on Medium (it's not behind the paywall).
