Covalent Blueprints: a toolkit for creating pre-packaged, reusable Covalent projects.
Project description
Covalent Blueprints are pre-configured applications for Covalent. Each blueprint is runnable both on its own and as a component in another workflow. See the catalogue below for a list of available blueprints.
Example: Deploy a Llama 3 chatbot backend
Run a Llama 3 chatbot on H100 GPUs in just a few lines.
```python
from covalent_blueprints import store_secret, save_api_key
from covalent_blueprints_ai import llama_chatbot

# Set credentials
save_api_key("<covalent-cloud-api-key>")
store_secret(name="HF_TOKEN", value="<huggingface-write-token>")

# Initialize a blueprint
bp = llama_chatbot(model_name="meta-llama/Meta-Llama-3-70B-Instruct")

# Customize compute resources (e.g. 2x H100 GPUs)
bp.executors.service_executor.gpu_type = "h100"
bp.executors.service_executor.num_gpus = 2
bp.executors.service_executor.memory = "240GB"

# Run the blueprint
llama_client = bp.run()
```
The `llama_chatbot` blueprint returns a Python client for the deployed service.
```python
llama_client.generate(prompt="How are you feeling?", max_new_tokens=100)
```

```text
How are you feeling? How are you doing?
I am feeling well, thank you for asking. I am a machine learning model, so I don't have emotions or feelings in the way that humans do.
```

```python
llama_client.generate_message(
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ]
)
```

```text
{'role': 'assistant', 'content': "Arrrr, me hearty! Me be Captain Chatterbeard, the scurviest chatbot to ever sail the seven seas o' conversation! Me be here to swab yer decks with me witty banter, me treasure trove o' knowledge, and me trusty cutlass o' clever responses! So hoist the colors, me matey, and set course fer a swashbucklin' good time! What be bringin' ye to these fair waters?"}
```
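Because `bp.run()` returns an ordinary Python client, a deployed blueprint can also act as a component inside a larger script or workflow. Below is a minimal sketch assuming only the `generate()` method shown above; the `batch_answer` helper and its example prompts are hypothetical, not part of the library.

```python
# Illustrative sketch only: batch_answer and its prompts are hypothetical,
# built solely from the generate() call documented above.
def batch_answer(client, prompts, max_new_tokens=150):
    """Collect one response per prompt from a deployed chatbot client."""
    return {
        prompt: client.generate(prompt=prompt, max_new_tokens=max_new_tokens)
        for prompt in prompts
    }

answers = batch_answer(
    llama_client,
    [
        "Summarize the benefits of GPU inference in one sentence.",
        "What is retrieval-augmented generation?",
    ],
)
for prompt, answer in answers.items():
    print(f"Q: {prompt}\nA: {answer}\n")
```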
Release compute resources with a single line.
```python
llama_client.teardown()
```
Blueprints catalogue
👉 Each link below points to an example notebook.

Install the AI blueprints collection to access the catalogue:

```bash
pip install -U covalent-blueprints-ai
```
| Blueprint | Description |
|---|---|
| Image Generator | Deploy a text-to-image generator service. |
| Llama Chatbot | Deploy a chatbot backend using a Llama-like model. |
| LoRA Fine-Tuning | Fine-tune and deploy an LLM as a Covalent service. |
| vLLM | Deploy an LLM using vLLM on Covalent Cloud. |
| NVIDIA Llama RAG | Deploy a retrieval-augmented generation (RAG) pipeline using multiple NVIDIA NIMs. |
More coming soon...
Contributing
Public contributions will open soon! In the meantime, please reach out on Slack to contribute a blueprint.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
covalent_blueprints-0.6.1.tar.gz (35.8 kB)

Built Distribution
covalent_blueprints-0.6.1-py3-none-any.whl (39.9 kB)
File details
Details for the file covalent_blueprints-0.6.1.tar.gz.
File metadata
- Download URL: covalent_blueprints-0.6.1.tar.gz
- Upload date:
- Size: 35.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f794f2e6e335c3f5453c9e26c96e08a7c9fb5dd8c39226c243369407bc3e96e2 |
| MD5 | b3ff1a5e13a6754d36f865064d648818 |
| BLAKE2b-256 | 15bc80c2c556ae45e1c45701216a3bac9f4b5a12303de446b1ba2c37a281a40b |
File details
Details for the file covalent_blueprints-0.6.1-py3-none-any.whl.
File metadata
- Download URL: covalent_blueprints-0.6.1-py3-none-any.whl
- Upload date:
- Size: 39.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a992486dd2ed61c06665f795cd6432f632c8d1a8fd1b09b5ecd55cf43612db1f |
| MD5 | eb85e17d5dbe2d10b4d85cc3e70ad3f6 |
| BLAKE2b-256 | a729ab4ad17be0298881d52deb596ec04183c1073c2439bac95fcddb529486fd |
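To check a downloaded artifact against the digests listed above, Python's standard `hashlib` module is enough. The sketch below assumes the wheel was saved to the current directory under its published filename; adjust the path to wherever it was actually downloaded.

```python
import hashlib

# SHA256 digest published above for covalent_blueprints-0.6.1-py3-none-any.whl.
EXPECTED_SHA256 = "a992486dd2ed61c06665f795cd6432f632c8d1a8fd1b09b5ecd55cf43612db1f"

# Assumed local path to the downloaded wheel; adjust as needed.
wheel_path = "covalent_blueprints-0.6.1-py3-none-any.whl"

with open(wheel_path, "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

print("Digest matches." if actual == EXPECTED_SHA256 else f"Mismatch: {actual}")
```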