
Covalent Blueprints: a toolkit for creating pre-packaged, reusable Covalent projects.

Project description


Plug-and-play Covalent workflows and service deployments.

Covalent Blueprints are pre-configured applications for Covalent. Each blueprint is runnable both on its own and as a component in another workflow. See the catalogue below for a list of available blueprints.

Example: Deploy a Llama 3 chatbot backend

Run a Llama 3 chatbot on H100 GPUs in just a few lines.

from covalent_blueprints import store_secret, save_api_key
from covalent_blueprints_ai import llama_chatbot

# Set credentials
save_api_key("<covalent-cloud-api-key>")
store_secret(name="HF_TOKEN", value="<huggingface-write-token>")

# Initialize a blueprint
bp = llama_chatbot(model_name="meta-llama/Meta-Llama-3-70B-Instruct")

# Customize compute resources (e.g. 2x H100 GPUs)
bp.executors.service_executor.gpu_type = "h100"
bp.executors.service_executor.num_gpus = 2
bp.executors.service_executor.memory = "240GB"

# Run the blueprint
llama_client = bp.run()

The llama_chatbot blueprint returns a Python client for the deployed service.

llama_client.generate(prompt="How are you feeling?", max_new_tokens=100)

How are you feeling? How are you doing?
I am feeling well, thank you for asking. I am a machine learning model, so I don't have emotions or feelings in the way that humans do.
llama_client.generate_message(
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ]
)

{'role': 'assistant', 'content': "Arrrr, me hearty! Me be Captain Chatterbeard, the scurviest chatbot to ever sail the seven seas o' conversation! Me be here to swab yer decks with me witty banter, me treasure trove o' knowledge, and me trusty cutlass o' clever responses! So hoist the colors, me matey, and set course fer a swashbucklin' good time! What be bringin' ye to these fair waters?"}

Release compute resources with a single line.

llama_client.teardown()
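
As noted above, a blueprint can also serve as a component in a larger workflow. The sketch below is one possible pattern, not taken from this README: it deploys the chatbot first, then calls the returned client from a task in a separate Covalent Cloud workflow. The CloudExecutor settings, the "default" environment name, and the summarize task are illustrative assumptions, as is the assumption that the client object can be passed into workflow tasks.

import covalent as ct
import covalent_cloud as cc

from covalent_blueprints import save_api_key
from covalent_blueprints_ai import llama_chatbot

save_api_key("<covalent-cloud-api-key>")

# Deploy the chatbot backend up front (credentials and resources as in the example above)
llama_client = llama_chatbot().run()

# Illustrative executor settings; adjust the environment and resources to your account
cpu_ex = cc.CloudExecutor(env="default", num_cpus=2, memory="8GB")

@ct.electron(executor=cpu_ex)
def summarize(client, text):
    # Call the deployed chatbot service from inside a workflow task
    return client.generate(prompt=f"Summarize this:\n{text}", max_new_tokens=200)

@ct.lattice(workflow_executor=cpu_ex, executor=cpu_ex)
def pipeline(client, text):
    return summarize(client, text)

dispatch_id = cc.dispatch(pipeline)(llama_client, "Covalent Blueprints are plug-and-play...")
result = cc.get_result(dispatch_id, wait=True)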

Blueprints catalogue

👉 Each link below points to an example notebook.

Install the blueprints with:

pip install -U covalent-blueprints-ai

Blueprint          Description
Image Generator    Deploy a text-to-image generator service.
Llama Chatbot      Deploy a chatbot backend using a Llama-like model.
LoRA Fine-Tuning   Fine-tune and deploy an LLM as a Covalent service.
vLLM               Deploy an LLM using vLLM on Covalent Cloud.

More coming soon...
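
The other catalogue entries follow the same pattern as the chatbot example above: initialize, optionally adjust executors, run, and tear down. The sketch below uses a hypothetical import name (image_generator) purely for illustration; check each blueprint's example notebook for its actual module, parameters, and client methods.

from covalent_blueprints import save_api_key
from covalent_blueprints_ai import image_generator  # hypothetical name, see the blueprint's notebook

save_api_key("<covalent-cloud-api-key>")

bp = image_generator()                            # initialize with default settings
bp.executors.service_executor.gpu_type = "h100"   # customize compute resources
client = bp.run()                                 # deploy and get a service client

# ...use the client, then release resources
client.teardown()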

Contributing

Public contributions will soon be open! In the meantime, please reach out on Slack to contribute a blueprint.

Download files

Source distribution: covalent_blueprints-0.4.3rc0.tar.gz (34.7 kB)

Built distribution: covalent_blueprints-0.4.3rc0-py3-none-any.whl (38.8 kB)

File details

Hashes for covalent_blueprints-0.4.3rc0.tar.gz:

Algorithm    Hash digest
SHA256       4666cbe16e5e3aaf16359d8dd8289ee7a569d503171cb553ed98b72e91aa0325
MD5          545005ded2b10aa7f06016a1ef9ebb81
BLAKE2b-256  b184e07c21774dbc7df9e65b9c6e7bdb965a05a9dfc1da14e71e35104bcdfe81

Hashes for covalent_blueprints-0.4.3rc0-py3-none-any.whl:

Algorithm    Hash digest
SHA256       2a720f151fe9740c0201f75a1ba0b0f0f3d663ae25febcd4998036bbc47de670
MD5          3d46e2fc0917744a5e439384f547e5a5
BLAKE2b-256  70e68ac4fad303c82cf78cae25aa0025724cde0e4064a075d4e0e0f03ef8b3b0
