Project description
cortecs-py
Lightweight wrapper for cortecs.ai, enabling instant provisioning.
⚡ Quickstart
Dynamic provisioning allows you to run LLM-workflows on dedicated compute. The LLM and underlying resources are automatically provisioned for the duration of use, providing maximum cost-efficiency. Once the workflow is complete, the infrastructure is automatically shut down.
This library handles starting and stopping your resources; the workflow logic itself can be implemented with popular frameworks such as LangChain or CrewAI.
- Start your LLM
- Execute your (batch) jobs
- Shutdown your LLM
from cortecs_py.client import Cortecs
from cortecs_py.integrations import DedicatedLLM

cortecs = Cortecs()

with DedicatedLLM(client=cortecs, model_name='neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8') as llm:
    essay = llm.invoke('Write an essay about dynamic provisioning')
    print(essay.content)
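Under the hood, DedicatedLLM follows the usual provision-on-enter, shut-down-on-exit pattern. Here is a minimal sketch of that pattern; the provision/make_llm/shutdown helpers below are hypothetical placeholders, not the library's actual API:

from contextlib import contextmanager

@contextmanager
def dedicated_llm(client, model_name):
    instance = client.provision(model_name)  # hypothetical: start dedicated compute
    try:
        yield make_llm(instance)             # hypothetical: LLM handle bound to the instance
    finally:
        client.shutdown(instance)            # hypothetical: stop the instance (and billing)

The finally block guarantees the instance is shut down even if a job inside the with-body raises.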
Example
Install
pip install cortecs-py
Summarizing documents
First, set up the environment variables. Use your credentials from cortecs.ai.
export OPENAI_API_KEY="<YOUR_CORTECS_API_KEY>"
export CORTECS_CLIENT_ID="<YOUR_ID>"
export CORTECS_CLIENT_SECRET="<YOUR_SECRET>"
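If you prefer not to export variables in your shell, a local .env file works as well. The snippet below assumes the python-dotenv package is installed and that Cortecs() reads the credentials from the environment:

from dotenv import load_dotenv

load_dotenv()  # loads OPENAI_API_KEY, CORTECS_CLIENT_ID, CORTECS_CLIENT_SECRET from .env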
This example shows how to use LangChain to configure a simple summarization chain. The LLM is dynamically provisioned and the chain is executed in parallel.
from langchain_community.document_loaders import ArxivLoader
from langchain_core.prompts import ChatPromptTemplate

from cortecs_py.client import Cortecs
from cortecs_py.integrations import DedicatedLLM

cortecs = Cortecs()
loader = ArxivLoader(
    query="reasoning",
    load_max_docs=20,
    get_full_documents=True,
    doc_content_chars_max=25000,  # ~6.25k tokens, make sure the model supports that context length
    load_all_available_meta=False
)

prompt = ChatPromptTemplate.from_template("{text}\n\n Explain to me like I'm five:")
docs = loader.load()

with DedicatedLLM(client=cortecs, model_name='neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8') as llm:
    chain = prompt | llm

    print("Processing data batch-wise ...")
    summaries = chain.batch([{"text": doc.page_content} for doc in docs])
    for summary in summaries:
        print(summary.content + '-------\n\n\n')
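chain.batch sends the requests concurrently. If the dedicated instance gets saturated, LangChain lets you cap the parallelism via max_concurrency in the run config (the value below is illustrative):

summaries = chain.batch(
    [{"text": doc.page_content} for doc in docs],
    config={"max_concurrency": 8},  # illustrative cap on parallel requests
)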
This simple example showcases the power of dynamic provisioning: 128.6k input tokens were summarized into 7.9k output tokens in 35 seconds. The LLM is fully utilized for those 35 seconds, enabling better cost efficiency.
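For a rough sense of the sustained throughput behind those numbers:

input_tokens, output_tokens, seconds = 128_600, 7_900, 35
print(f"{(input_tokens + output_tokens) / seconds:.0f} tokens/s")  # 3900 tokens/s sustained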
Use Cases
- Batch processing
- Low latency -> How to process Reddit in real time
- Multi-agents -> How to use CrewAI without request limits
- High-security
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file cortecs_py-0.0.2.tar.gz
File metadata
- Download URL: cortecs_py-0.0.2.tar.gz
- Upload date:
- Size: 11.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.15
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0e5f25ce3ccf8028959ff4abcaa266bfe70b56a268e16d052516c5ace3215025
MD5 | 81fbd22ef0d0aa14db13df079dfb1a3c
BLAKE2b-256 | da348f98abf99c062197503667fe7e3503e8e9889fe46b23c2bf445e6757a86a
File details
Details for the file cortecs_py-0.0.2-py3-none-any.whl
File metadata
- Download URL: cortecs_py-0.0.2-py3-none-any.whl
- Upload date:
- Size: 11.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.15
File hashes
Algorithm | Hash digest
---|---
SHA256 | 2df731ff8dd427a055d7fc0bd0ba4251a07897a2cba0f72f56ff1a8546a99283
MD5 | fc11289507ddfcc571235a7c9ed2d8b7
BLAKE2b-256 | 0ab89d165eb0e076829989a058bce17634d56351434061cdcb2b4178434b63a5