LLM Interaction Framework
Simplify using LLMs in code
Rigging is a lightweight LLM framework built on Pydantic XML. The goal is to make leveraging language models in production code as simple and effective as possible. Here are the highlights:
- Structured Pydantic models can be used interchangeably with unstructured text output.
- LiteLLM as the default generator, giving you instant access to a huge array of models.
- Define prompts as Python functions with type hints and docstrings.
- Simple tool calling abilities for models which don't natively support it.
- Store different models and configs as simple connection strings just like databases.
- Chat templating, forking, continuations, generation parameter overloads, stripping segments, etc.
- Async batching and fast iterations for large scale generation.
- Metadata, callbacks, and data format conversions.
- Modern python with type hints, async support, pydantic validation, serialization, etc.
```python
import rigging as rg

@rg.prompt(generator_id="gpt-4")
async def get_authors(count: int = 3) -> list[str]:
    """Provide famous authors."""

print(await get_authors())
# ['William Shakespeare', 'J.K. Rowling', 'Jane Austen']
```
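The example above uses top-level `await`, which only works in an async context such as a notebook. In a plain script, wrap the call with `asyncio.run()`. The sketch below uses a stand-in coroutine in place of the decorated prompt, since the real call requires a configured generator and an API key:

```python
import asyncio

# Stand-in for the `get_authors` prompt above; the real function is
# produced by the @rg.prompt decorator and calls out to a model.
async def get_authors(count: int = 3) -> list[str]:
    return ["William Shakespeare", "J.K. Rowling", "Jane Austen"][:count]

# In a script, drive the coroutine with asyncio.run() instead of
# awaiting it at the top level.
print(asyncio.run(get_authors()))
# ['William Shakespeare', 'J.K. Rowling', 'Jane Austen']
```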
Rigging is built by dreadnode, where we use it daily.
Installation
We publish every version to PyPI:

```bash
pip install rigging
```
If you want to build from source:
```bash
cd rigging/
poetry install
```
Supported LLMs
Rigging will run just about any language model:
- Any model from LiteLLM
- Any model from vLLM
- Any model from transformers
API Keys
Pass the `api_key` in a generator id or use standard environment variables:

```python
rg.get_generator("gpt-4-turbo,api_key=...")
```

```bash
export OPENAI_API_KEY=...
export MISTRAL_API_KEY=...
export ANTHROPIC_API_KEY=...
...
```
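Generator ids follow a connection-string shape: a model identifier followed by comma-separated `key=value` parameters. The real grammar is defined by Rigging (and supports more than shown here), so the parser below is a hypothetical sketch just to illustrate the format:

```python
def split_generator_id(identifier: str) -> tuple[str, dict[str, str]]:
    # Illustrative only: Rigging defines the actual grammar, which also
    # handles provider prefixes and typed generation parameters.
    model, _, rest = identifier.partition(",")
    params = dict(part.split("=", 1) for part in rest.split(",") if part)
    return model, params

model, params = split_generator_id("gpt-4-turbo,api_key=sk-...,temperature=0.5")
# model  -> "gpt-4-turbo"
# params -> {"api_key": "sk-...", "temperature": "0.5"}
```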
Check out the docs for more.
Getting Started
Check out the guide in the docs:

1. Get a generator using a connection string.
2. Build a chat or completion pipeline.
3. Run the pipeline and get the output.
```python
import rigging as rg

# 1 - Get a generator
generator = rg.get_generator("claude-3-sonnet-20240229")

# 2 - Build a chat pipeline
pipeline = generator.chat([
    {"role": "system", "content": "Talk like a pirate."},
    {"role": "user", "content": "Say hello!"},
])

# 3 - Run the pipeline
chat = await pipeline.run()

print(chat.conversation)
# [system]: Talk like a pirate.
# [user]: Say hello!
# [assistant]: Ahoy, matey! Here be the salty sea dog ready to trade greetings wit' ye. Arrr!
```
Want more?
- Use structured pydantic parsing
- Check out raw completions
- Give the LLM access to tools
- Play with generation params
- Use callbacks in the pipeline
- Scale up with iterating and batching
- Save your work with serialization
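The scaling point above boils down to the standard asyncio fan-out pattern: many independent generations awaited concurrently. Rigging's pipelines ship their own helpers for this, so the names below are stand-ins that only illustrate the pattern:

```python
import asyncio

# Stand-in for a single pipeline run; a real call would await a model.
async def run_prompt(topic: str) -> str:
    await asyncio.sleep(0)  # placeholder for network latency
    return f"summary of {topic}"

# Fan out all prompts concurrently and collect results in order.
async def run_all(topics: list[str]) -> list[str]:
    return await asyncio.gather(*(run_prompt(t) for t in topics))

results = asyncio.run(run_all(["rust", "go", "zig"]))
# ['summary of rust', 'summary of go', 'summary of zig']
```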
Examples
- Basic interactive chat: chat.py
- Jupyter code interpreter: jupyter.py
- OverTheWire Bandit Agent: bandit.py
- Damn Vulnerable Restaurant Agent: dvra.py
- RAG Pipeline: rag.py (from kyleavery)
Documentation
rigging.dreadnode.io has everything you need.