
Python SDK for the Aqueduct prediction infrastructure


Aqueduct: Prediction Infrastructure for Data Scientists


With Aqueduct, data scientists can instantaneously deploy machine learning models to the cloud, connect those models to data and business systems, and gain visibility into the performance of their prediction pipelines -- all from the comfort of a Python notebook.

The core abstraction in Aqueduct is a Workflow, which is a sequence of Artifacts (data) that are transformed by Operators (compute). The input Artifact(s) for a Workflow are typically loaded from a database, and the output Artifact(s) are typically persisted back to a database. Each Workflow can either be run on a fixed schedule or triggered on-demand.
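
As a rough sketch of how those pieces map onto the SDK (a minimal illustration only -- the table names are placeholders, and the full, runnable example appears below):

import aqueduct as aq
from aqueduct import op

client = aq.Client("YOUR_API_KEY", "localhost:8080")

@op()  # an Operator: compute that transforms Artifacts
def clean(df):
    return df.dropna()

db = client.integration("aqueduct_demo")       # a connection to a data system
reviews = db.sql("select * from some_table;")  # input Artifact, loaded from a database
cleaned = clean(reviews)                       # output Artifact, produced by the Operator
cleaned.save(db.config(table="cleaned", update_mode="replace"))  # persisted back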

To get started with Aqueduct:

  1. Ensure that you meet the basic requirements.
  2. Install the aqueduct server and UI by running:
    pip3 install aqueduct-ml
    
  3. Launch both the server and the UI by running:
    aqueduct start
    
  4. Get your API Key by running:
    aqueduct apikey
    

Once you have the Aqueduct server running, this 25-line code snippet is all you need to create your first prediction pipeline:

import aqueduct as aq
from aqueduct import op, metric
import pandas as pd
# Need to install torch and transformers
#!pip install torch transformers
from transformers import pipeline
import torch

client = aq.Client("YOUR_API_KEY", "localhost:8080")

# This function takes in a DataFrame with the text of user reviews of
# hotels and returns a DataFrame with the sentiment of each review.
# It uses the `pipeline` interface from HuggingFace's
# Transformers package.
@op()
def sentiment_prediction(reviews):
    model = pipeline("sentiment-analysis")
    predicted_sentiment = model(list(reviews["review"]))
    return reviews.join(pd.DataFrame(predicted_sentiment))

# Load a connection to a database -- here, we use the `aqueduct_demo`
# database, for which you can find the documentation here:
# https://docs.aqueducthq.com/example-workflows/demo-data-warehouse
demo_db = client.integration("aqueduct_demo")

# Once we have a connection to a database, we can run a SQL query against it.
reviews_table = demo_db.sql("select * from hotel_reviews;")

# Next, we apply our annotated function to our data -- this tells Aqueduct
# to create a workflow spec that applies `sentiment_prediction` to `reviews_table`.
sentiment_table = sentiment_prediction(reviews_table)

# When we call `.save()`, Aqueduct will take the data in `sentiment_table` and 
# write the results back to any database you specify -- in this case, back to the 
# `aqueduct_demo` DB.
sentiment_table.save(demo_db.config(table="sentiment_pred", update_mode="replace"))

# In Aqueduct, a metric is a numerical measurement of some predictions. Here,
# we calculate the average sentiment of the reviews -- the fraction that our
# model labels positive -- which is something we can track over time.
@metric
def average_sentiment(reviews_with_sent):
    return (reviews_with_sent["label"] == "POSITIVE").mean()

avg_sent = average_sentiment(sentiment_table)

# Once we compute a metric, we can set upper and lower bounds on it -- if 
# the metric exceeds one of those bounds, an error will be raised.
avg_sent.bound(lower=0.5)
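# (We set only a lower bound here; since bounds can be upper or lower, an
# upper bound would presumably be set via `avg_sent.bound(upper=0.95)` --
# the exact keyword is an assumption, so check the docs.)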

# And we're done! With a call to `publish_flow`, we've created a full workflow
# that calculates the sentiment of hotel reviews, creates a metric over those
# predictions, and sets a bound on that metric.
client.publish_flow(name="hotel_sentiment", artifacts=[sentiment_table, avg_sent])
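
As noted above, a Workflow can also run on a fixed schedule rather than on-demand. A minimal sketch of what that might look like -- the `schedule` parameter and the `aq.daily()` helper are assumptions here, so verify the exact names against the Aqueduct documentation:

client.publish_flow(
    name="hotel_sentiment",
    artifacts=[sentiment_table, avg_sent],
    schedule=aq.daily(),  # assumed helper; omit to keep the flow on-demand
)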

Why Aqueduct?

The existing tools for deploying models are not designed with data scientists in mind -- they assume the user will casually build Docker containers, deploy Kubernetes clusters, and write thousands of lines of YAML to deploy a single model. Data scientists are by and large not interested in doing that, and there are better uses for their skills.

Aqueduct is designed for data scientists, with three core design principles in mind:

  • Simplicity: Data scientists should be able to deploy models with tools they're comfortable with and without having to learn how to use complex, low-level infrastructure systems.
  • Connectedness: Data science and machine learning can have the greatest impact when everyone in the business has access, and data scientists shouldn't have to bend over backwards to make this happen.
  • Confidence: Having the whole organization benefit from your work means that data scientists should be able to sleep peacefully, knowing that things are working as expected -- and they'll be alerted as soon as that changes.

What's next?

Interested in learning more? Check out our documentation.

If you have questions or comments or would like to learn more about what we're building, please reach out, join our Slack channel, or start a conversation on GitHub. We'd love to hear from you!
