
Covalent Blueprints: a toolkit for creating pre-packaged, reusable Covalent projects.

Plug-and-play Covalent workflows and service deployments.

Covalent Blueprints are pre-configured applications for Covalent. Each blueprint is runnable both on its own and as a component in another workflow. See the catalogue below for a list of available blueprints.

Example: Deploy a Llama 3 chatbot backend

Run a Llama 3 chatbot on H100 GPUs in just a few lines.

from covalent_blueprints import store_secret, save_api_key
from covalent_blueprints_ai import llama_chatbot

# Set credentials
save_api_key("<covalent-cloud-api-key>")
store_secret(name="HF_TOKEN", value="<huggingface-write-token>")

# Initialize a blueprint
bp = llama_chatbot(model_name="meta-llama/Meta-Llama-3-70B-Instruct")

# Customize compute resources (e.g. 2x H100 GPUs)
bp.executors.service_executor.gpu_type = "h100"
bp.executors.service_executor.num_gpus = 2
bp.executors.service_executor.memory = "240GB"

# Run the blueprint
llama_client = bp.run()

The llama_chatbot blueprint returns a Python client for the deployed service.

llama_client.generate(prompt="How are you feeling?", max_new_tokens=100)

How are you feeling? How are you doing?
I am feeling well, thank you for asking. I am a machine learning model, so I don't have emotions or feelings in the way that humans do.

llama_client.generate_message(
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ]
)

{'role': 'assistant', 'content': "Arrrr, me hearty! Me be Captain Chatterbeard, the scurviest chatbot to ever sail the seven seas o' conversation! Me be here to swab yer decks with me witty banter, me treasure trove o' knowledge, and me trusty cutlass o' clever responses! So hoist the colors, me matey, and set course fer a swashbucklin' good time! What be bringin' ye to these fair waters?"}
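Because generate_message is stateless, a multi-turn conversation requires resending the full message history on every call. Below is a minimal sketch of a history-keeping wrapper, assuming only the generate_message interface shown above; the ChatSession class itself is illustrative and not part of the package.

```python
class ChatSession:
    """Accumulate a message history and replay it on each generate_message call."""

    def __init__(self, client, system_prompt=None):
        self.client = client
        self.messages = []
        if system_prompt:
            self.messages.append({"role": "system", "content": system_prompt})

    def send(self, user_content):
        # Append the new user turn, then send the entire history.
        self.messages.append({"role": "user", "content": user_content})
        reply = self.client.generate_message(messages=self.messages)
        # The reply is a {'role': 'assistant', 'content': ...} dict, as shown above.
        self.messages.append(reply)
        return reply["content"]
```

Usage would look like `session = ChatSession(llama_client, system_prompt="You are a pirate chatbot who always responds in pirate speak!")` followed by repeated `session.send(...)` calls.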

Release compute resources with a single line.

llama_client.teardown()
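If a script can fail between run() and teardown(), wrapping the deployment in a context manager guarantees the resources are released. This is a sketch using only the bp.run() and client.teardown() calls shown above; the deployed helper is illustrative, not part of the package.

```python
from contextlib import contextmanager

@contextmanager
def deployed(bp):
    """Run a blueprint and guarantee teardown, even if the body raises."""
    client = bp.run()
    try:
        yield client
    finally:
        client.teardown()
```

With this helper, `with deployed(bp) as llama_client: ...` releases the compute resources on exit, whether the block completes or raises.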

Blueprints catalogue

👉 Each link below points to an example notebook.

pip install -U covalent-blueprints-ai
- Image Generator: Deploy a text-to-image generator service.
- Llama Chatbot: Deploy a chatbot backend using a Llama-like model.
- LoRA Fine-Tuning: Fine-tune and deploy an LLM as a Covalent service.
- vLLM: Deploy an LLM using vLLM on Covalent Cloud.

More coming soon...

Contributing

Public contributions will open soon! In the meantime, please reach out on Slack to contribute a blueprint.
