Covalent Blueprints: a toolkit for creating pre-packaged, reusable Covalent projects.
Covalent Blueprints are pre-configured applications for Covalent. Each blueprint is runnable both on its own and as a component in another workflow. See the catalogue below for a list of available blueprints.
Example: Deploy a Llama 3 chatbot backend
Run a Llama 3 chatbot on H100 GPUs in just a few lines.
```python
from covalent_blueprints import store_secret, save_api_key
from covalent_blueprints_ai import llama_chatbot

# Set credentials
save_api_key("<covalent-cloud-api-key>")
store_secret(name="HF_TOKEN", value="<huggingface-write-token>")

# Initialize a blueprint
bp = llama_chatbot(model_name="meta-llama/Meta-Llama-3-70B-Instruct")

# Customize compute resources (e.g. 2x H100 GPUs)
bp.executors.service_executor.gpu_type = "h100"
bp.executors.service_executor.num_gpus = 2
bp.executors.service_executor.memory = "240GB"

# Run the blueprint
llama_client = bp.run()
```
The `llama_chatbot` blueprint returns a Python client for the deployed service.
```python
llama_client.generate(prompt="How are you feeling?", max_new_tokens=100)
```
```
How are you feeling? How are you doing?
I am feeling well, thank you for asking. I am a machine learning model, so I don't have emotions or feelings in the way that humans do.
```
```python
llama_client.generate_message(
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ]
)
```
```
{'role': 'assistant', 'content': "Arrrr, me hearty! Me be Captain Chatterbeard, the scurviest chatbot to ever sail the seven seas o' conversation! Me be here to swab yer decks with me witty banter, me treasure trove o' knowledge, and me trusty cutlass o' clever responses! So hoist the colors, me matey, and set course fer a swashbucklin' good time! What be bringin' ye to these fair waters?"}
```
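Because `generate_message()` takes a full message list and returns an assistant message in the same format, a simple multi-turn chat loop can be layered on top of it. The sketch below is illustrative only, assuming nothing beyond the client behavior shown above; the user inputs are placeholders.

```python
# Minimal multi-turn chat sketch (assumes `llama_client` returned by bp.run() above).
# Each reply is a dict like {"role": "assistant", "content": "..."},
# so it can be appended directly to the running message history.
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
]

for user_input in ["Who are you?", "What be yer favorite port?"]:
    messages.append({"role": "user", "content": user_input})
    reply = llama_client.generate_message(messages=messages)
    messages.append(reply)
    print(reply["content"])
```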
Release compute resources with a single line.
```python
llama_client.teardown()
```
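When experimenting, it can help to guarantee that the service is released even if a call fails. A small pattern using only the calls shown above:

```python
# Deploy, use, and always tear down the service, even on error.
llama_client = bp.run()
try:
    print(llama_client.generate(prompt="Hello!", max_new_tokens=50))
finally:
    llama_client.teardown()  # release the GPUs
```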
Blueprints catalogue
👉 Each link below points to an example notebook. To run the blueprints, install the AI collection:

```bash
pip install -U covalent-blueprints-ai
```
| Blueprint | Description |
|---|---|
| Image Generator | Deploy a text-to-image generator service. |
| Llama Chatbot | Deploy a chatbot backend using a Llama-like model. |
| LoRA fine tuning | Fine-tune and deploy an LLM as a Covalent service. |
| vLLM | Deploy an LLM using vLLM on Covalent Cloud. |
| NVIDIA Llama RAG | Deploy a retrieval-augmented generation (RAG) pipeline using multiple NVIDIA NIMs. |
More coming soon...
Contributing
Public contributions will open soon! In the meantime, please reach out on Slack to contribute a blueprint.