
🔮 Super-power your database with AI 🔮

Project description

Deploy, train and operate AI with your datastore!


How To | Installation | Quickstart | Get Help & Community | Contributing | Feedback | License | Join Us


SuperDuperDB allows you to easily integrate and manage any AI models and APIs with your datastore: from LLM-based Q&A and vector search, image generation, segmentation, time-series forecasting, anomaly detection, classification, recommendation and personalisation, to highly custom machine learning models and use-cases.

A single, scalable deployment of all your AI models, kept up-to-date automatically and immediately as new data is handled.

No data duplication, no pipelines, no duplicate infrastructure — just Python!

Supported Data Stores:
- MongoDB
- MongoDB Atlas
- S3
- Coming soon: PostgreSQL, MySQL, DuckDB, SQLite, BigQuery, Snowflake

Supported AI Frameworks, Models and APIs:
- PyTorch
- HuggingFace
- OpenAI
- Scikit-Learn
- Llama 2
- CLIP
- Coming soon: TensorFlow


Introduction

What can you do with SuperDuperDB?

  • Deploy all your AI models to automatically compute outputs (inference) in your datastore in a single environment with simple Python commands.
  • Train models on the data in your datastore simply by querying, without additional ingestion or pre-processing.
  • Integrate AI APIs (such as OpenAI) to work together with other models on your data effortlessly.
  • Search your data with vector-search, including model management and serving.

Why choose SuperDuperDB?

  • Avoid data duplication, pipelines and duplicate infrastructure with a single scalable deployment.
  • A deployment that is always up-to-date, since new data is handled automatically and immediately.
  • A simple and familiar Python interface that can handle even the most complex AI use-cases.

Who is SuperDuperDB for?

  • Python developers using datastores (databases, lakes, warehouses) who want to build AI into their applications easily.
  • Data scientists & ML engineers who want to develop AI models using their favourite tools, with minimum infrastructural overhead.
  • Infrastructure engineers who want a single scalable setup that supports local, on-prem and cloud deployment.

SuperDuperDB transforms your datastore into:

  • An end-to-end live AI deployment that includes a model repository and registry, model training, and computation of outputs (inference).
  • A feature store in which the model outputs are stored alongside the inputs in any data format.
  • A fully functional vector database: easily generate vector embeddings of your data with your favorite models and APIs and connect them with your datastore (and/or an existing vector database).
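To make the feature-store point concrete: model outputs are written into the same documents as the raw inputs, so they can be inspected with plain pymongo. Below is a minimal sketch, reusing the MongoDB setup from the How To section; the '_outputs' field name is an assumption about how outputs are keyed and may differ between versions:

import pymongo

# Read back a document after a model has computed outputs for it.
# ASSUMPTION: outputs are stored in the document under an '_outputs' field,
# keyed by input column and model identifier; check the docs of your installed version.
raw_doc = pymongo.MongoClient().my_db.test_documents.find_one()
print(raw_doc['input_col'])          # the original input
print(raw_doc.get('_outputs', {}))   # the model outputs stored alongside it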

How To

The following are examples of how to use SuperDuperDB with Python (find all how-tos and examples in the docs here):

# --- Add an AI model to your datastore and compute predictions ---
import pymongo
from sklearn.svm import SVC

from superduperdb import superduper
# Collection is SuperDuperDB's MongoDB query wrapper; its import path depends on the installed version.

# Make your db superduper!
db = superduper(pymongo.MongoClient().my_db)

# Models and clients can be converted to SuperDuperDB objects with a simple wrapper.
model = superduper(SVC())

# Add the model into the database
db.add(model)

# Predict on the selected data.
model.predict(X='input_col', db=db, select=Collection(name='test_documents').find({'_fold': 'valid'}))

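Training works the same way: instead of exporting data into a separate pipeline, you point the model at a query on your datastore. Below is a minimal sketch, assuming Model.fit accepts the same query-style arguments as predict; the column names are illustrative:

# Train (fit) the wrapped model directly on data selected from the datastore.
# ASSUMPTION: fit mirrors predict's query-style signature; 'input_col' and 'target_col' are illustrative.
model.fit(
    X='input_col',
    y='target_col',
    db=db,
    select=Collection(name='test_documents').find(),
)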

# First a "Listener" makes sure vectors stay up-to-date
indexing_listener = Listener(model=OpenAIEmbedding(), key='text', select=collection.find())

# This "Listener" is linked with a "VectorIndex"
db.add(VectorIndex('my-index', indexing_listener=indexing_listener))

# The "VectorIndex" may be used to search data. Items to be searched against are passed
# to the registered model and vectorized. No additional app layer is required.
# By default, SuperDuperDB uses LanceDB for vector comparison operations
db.execute(collection.like({'text': 'clothing item'}, 'my-index').find({'brand': 'Nike'}))

# --- Use OpenAI embeddings to search Wikipedia abstracts by meaning ---
# Create a ``VectorIndex`` with an OpenAIEmbedding indexing listener and add it to the database.
db.add(
    VectorIndex(
        identifier='my-index',
        indexing_listener=Listener(
            model=OpenAIEmbedding(identifier='text-embedding-ada-002'),
            key='abstract',
            select=Collection(name='wikipedia').find(),
        ),
    )
)
# The above also executes the embedding model (openai) with the select query on the key.

# Now we can use the vector-index to search via meaning through the wikipedia abstracts
cur = db.execute(
    Collection(name='wikipedia')
        .like({'abstract': 'philosophers'}, n=10, vector_index='my-index')
)

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

model = Pipeline(
    identifier='my-sentiment-analysis',
    task='text-generation',
    preprocess=tokenizer,
    object=pipeline,
    torch_dtype=torch.float16,
    device_map="auto",
)

# You can easily predict on your collection documents.
model.predict(
    X=Collection(name='test_documents').find(),
    db=db,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200
)

# --- Chain models: featurize inputs with an already registered upstream model ---
# 'coll' stands for a Collection query object, like the collections used above.
model.predict(
    X='input_col',
    db=db,
    select=coll.find().featurize({'X': '<upstream-model-id>'}),  # already registered upstream model-id
    listen=True,  # keep outputs up-to-date as new data arrives
)

Installation

1. Install SuperDuperDB via pip (~1 minute)

pip install superduperdb

2. Datastore installation (for MongoDB) (~10-15 minutes):

  • You already have MongoDB installed? Let's go!
  • You need to install MongoDB? See the docs here.

3. Try one of our example use cases/notebooks found here (~as many minutes as you enjoy)!
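After installation you can quickly check that everything is wired up. A minimal sketch, assuming MongoDB is running locally on the default port (the database name my_db is illustrative):

import pymongo
from superduperdb import superduper

# Wrap a local MongoDB database to confirm the installation works end-to-end.
db = superduper(pymongo.MongoClient().my_db)
print(db)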


Quickstart

Try SuperDuperDB on Binder:


This will set up a playground environment that includes:

  • an installation of SuperDuperDB
  • a MongoDB instance containing image data and torch models

Have fun!

Community & Getting Help

If you have any problems, questions, comments or ideas, reach out to the community, for example via our Slack.

Contributing

There are many ways to contribute, and they are not limited to writing code; we welcome all kinds of contributions.

Please see our Contributing Guide for details.

Feedback

Help us to improve SuperDuperDB by providing your valuable feedback here!

License

SuperDuperDB is open-source and intended to be a community effort, and it won't be possible without your support and enthusiasm. It is distributed under the terms of the Apache 2.0 license. Any contribution made to this project will be subject to the same provisions.

Join Us

We are looking for people who are invested in the problem we are trying to solve and want to join us full-time. Find the roles we are trying to fill here!



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

superduperdb-0.0.5.tar.gz (96.2 kB)

Uploaded Source

Built Distribution

superduperdb-0.0.5-py3-none-any.whl (118.7 kB)

Uploaded Python 3
