The control center for ML in the cloud

Project description

Run LLMs and ML on any cloud infrastructure

📢 Slack  |  🗺️ Roadmap  |  🐞 Report a bug  |  ✍️ Blog

Aqueduct is an MLOps framework that allows you to define and deploy machine learning and LLM workloads on any cloud infrastructure. Check out our quickstart guide! →

Aqueduct is open source: you write code in vanilla Python, run that code on any cloud infrastructure you'd like to use, and gain visibility into the execution and performance of your models and predictions. See what infrastructure Aqueduct works with. →

Here's how you can get started:

pip3 install aqueduct-ml
aqueduct start

How it works

Aqueduct's Python-native API allows you to define ML tasks in regular Python code. You can connect Aqueduct to your existing cloud infrastructure (docs), and Aqueduct will seamlessly move your code from your laptop to the cloud or between different cloud infrastructure layers.

For example, we can define a pipeline that trains a model on Kubernetes using a GPU and validates that model in AWS Lambda in a few lines of Python:

import aqueduct as aq
from aqueduct import op

# `raw_logs` and `model` are assumed to be defined elsewhere.

# Use an existing LLM.
vicuna = aq.llm_op('vicuna_7b', engine='eks-us-east-2')
features = vicuna(
    raw_logs,
    {
        "prompt":
            "Turn this log entry into a CSV: {text}"
    }
)

# Or write a custom op on your favorite infrastructure!
@op(
    engine='kubernetes',
    # Request a GPU for this operator.
    resources={'gpu_resource_name': 'nvidia.com/gpu'}
)
def train(featurized_logs):
    return model.train(featurized_logs)  # Train your model on the featurized logs.

train(features)

Once you publish this workflow to Aqueduct, you can see it on the UI:

[Image: the published workflow as rendered in the Aqueduct UI]

To see how to build your first workflow, check out our quickstart guide! →

Why Aqueduct?

MLOps has become a tangled mess of siloed infrastructure. Most teams need to set up and operate many different cloud infrastructure tools to run ML effectively, but these tools have disparate APIs and interoperate poorly.

Aqueduct provides a single interface to running machine learning tasks on your existing cloud infrastructure — Kubernetes, Spark, Lambda, etc. From the same Python API, you can run code across any or all of these systems seamlessly and gain visibility into how your code is performing.

  • Python-native pipeline API: Aqueduct's API allows you to define your workflows in vanilla Python, so you can get code into production quickly and effectively. No more DSLs or YAML configs to worry about.
  • Integrated with your infrastructure: Workflows defined in Aqueduct can run on any cloud infrastructure you use, like Kubernetes, Spark, Airflow, or AWS Lambda. You can get all the benefits of Aqueduct without having to rip-and-replace your existing tooling.
  • Centralized visibility into code, data, & metadata: Once your workflows are in production, you need to know what’s running, whether it’s working, and when it breaks. Aqueduct gives you visibility into what code, data, metrics, and metadata are generated by each workflow run, so you can have confidence that your pipelines work as expected — and know immediately when they don’t.
  • Runs securely in your cloud: Aqueduct is fully open-source and runs in any Unix environment. It runs entirely in your cloud and on your infrastructure, so you can be confident that your data and code are secure.
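The metric-and-check pattern behind this visibility can be sketched in plain Python. This is an illustrative stand-in, not the Aqueduct API: the decorator names and the `RUN_LOG` record are made up here to show how each workflow run can capture metric values and validation results.

```python
from functools import wraps

RUN_LOG = []  # per-run record of metrics and checks

def metric(fn):
    """Record each metric value alongside the run that produced it."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        value = fn(*args, **kwargs)
        RUN_LOG.append({"metric": fn.__name__, "value": value})
        return value
    return wrapper

def check(fn):
    """Record whether a validation passed, so failures surface immediately."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        passed = bool(fn(*args, **kwargs))
        RUN_LOG.append({"check": fn.__name__, "passed": passed})
        return passed
    return wrapper

@metric
def mean_prediction(preds):
    return sum(preds) / len(preds)

@check
def predictions_in_range(preds):
    return all(0.0 <= p <= 1.0 for p in preds)

preds = [0.2, 0.5, 0.9]
mean_prediction(preds)
predictions_in_range(preds)
print(RUN_LOG)
```

After a run, `RUN_LOG` holds one entry per metric and per check, which is the kind of per-run metadata a monitoring UI can surface.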

Overview & Examples

The core abstraction in Aqueduct is a Workflow, which is a sequence of Artifacts (data) that are transformed by Operators (compute). The input Artifact(s) for a Workflow is typically loaded from a database, and the output Artifact(s) are typically persisted back to a database. Each Workflow can either be run on a fixed schedule or triggered on-demand.
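The abstraction above can be sketched in a few lines of plain Python. This is a toy model for illustration, not Aqueduct's internals: operators are functions, Artifacts are the values flowing between them, and a Workflow is an ordered chain that a scheduler or an on-demand trigger would execute.

```python
class Artifact:
    """A piece of data flowing through a workflow."""
    def __init__(self, name, value):
        self.name = name
        self.value = value

class Workflow:
    """A sequence of operators (compute) applied to artifacts (data)."""
    def __init__(self, name):
        self.name = name
        self.operators = []

    def op(self, fn):
        """Register a function as the next operator in the chain."""
        self.operators.append(fn)
        return fn

    def run(self, input_artifact):
        # Each operator consumes the previous artifact and emits a new one,
        # whether the run was scheduled or triggered on demand.
        artifact = input_artifact
        for fn in self.operators:
            artifact = Artifact(fn.__name__, fn(artifact.value))
        return artifact

flow = Workflow("log_featurization")

@flow.op
def clean(logs):
    return [line.strip() for line in logs]

@flow.op
def count_errors(lines):
    return sum("ERROR" in line for line in lines)

result = flow.run(Artifact("raw_logs", [" ERROR: disk full ", "ok\n"]))
print(result.name, result.value)  # count_errors 1
```

In the toy version the input artifact is constructed by hand; in a real deployment it would be loaded from a database, and the output persisted back.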

To see Aqueduct in action on some real-world machine learning workflows, check out some of our examples:

What's next?

Check out our documentation, where you'll find:

If you have questions or comments or would like to learn more about what we're building, please reach out, join our Slack channel, or start a conversation on GitHub. We'd love to hear from you!

If you're interested in contributing, please check out our roadmap and join the development channel in our community Slack.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

aqueduct-ml-0.3.6.tar.gz (66.1 kB view details)

Uploaded Source

Built Distribution

aqueduct_ml-0.3.6-py3-none-any.whl (100.7 kB view details)

Uploaded Python 3

File details

Details for the file aqueduct-ml-0.3.6.tar.gz.

File metadata

  • Download URL: aqueduct-ml-0.3.6.tar.gz
  • Upload date:
  • Size: 66.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.8

File hashes

Hashes for aqueduct-ml-0.3.6.tar.gz

  • SHA256: 8834742c98e8ce3098a85990416c4d60cdd9562e5544b2042223f08a2028834c
  • MD5: 45c8c410245530bb4029a40060ad5de4
  • BLAKE2b-256: b7e92cd579ba3c7f588291f9b87c725747260428ca1f2f375a971ed5b650699a

See more details on using hashes here.
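To verify a download against the published hashes, you can compute the digest locally with Python's standard hashlib. A minimal sketch (hashing in-memory bytes here for illustration; the `file_digest` helper is defined for this example, not part of any package):

```python
import hashlib

def file_digest(data: bytes, algorithm: str = "sha256") -> str:
    """Return the hex digest of the given bytes under the chosen algorithm."""
    h = hashlib.new(algorithm)
    h.update(data)
    return h.hexdigest()

# In practice, read the downloaded archive and compare against the
# published value from the table above, e.g.:
#   data = open("aqueduct-ml-0.3.6.tar.gz", "rb").read()
#   assert file_digest(data) == "8834742c98e8ce3098a85990416c4d60cdd9562e5544b2042223f08a2028834c"
data = b"example payload"
print(file_digest(data))         # SHA256 hex digest
print(file_digest(data, "md5"))  # MD5 hex digest
```

pip can also enforce this automatically via requirements files with `--hash` entries.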

File details

Details for the file aqueduct_ml-0.3.6-py3-none-any.whl.

File metadata

  • Download URL: aqueduct_ml-0.3.6-py3-none-any.whl
  • Upload date:
  • Size: 100.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.8

File hashes

Hashes for aqueduct_ml-0.3.6-py3-none-any.whl

  • SHA256: af63b5f333a8b3aa9423560214ec9176e1504df131e670e5b7f61e82257e2d45
  • MD5: a58364a934db4d65b4c5e5b2ad895bbb
  • BLAKE2b-256: 20a5761f3c59282999536a6cd60152f3d2f7d42f31d9c7301741279da31ccfc2

See more details on using hashes here.
