
LLM Interaction Framework


rigging

Simplify using LLMs in code



Rigging is a lightweight LLM framework built on Pydantic XML. The goal is to make leveraging language models in production code as simple and effective as possible. Here are the highlights:

  • Structured Pydantic models can be used interchangeably with unstructured text output.
  • LiteLLM as the default generator, giving you instant access to a huge array of models.
  • Define prompts as Python functions with type hints and docstrings.
  • Simple tool-calling abilities, even for models which don't natively support it.
  • Store different models and configs as simple connection strings, just like databases.
  • Chat templating, forking, continuations, generation parameter overloads, stripping segments, and more.
  • Async batching and fast iterations for large-scale generation.
  • Metadata, callbacks, and data format conversions.
  • Modern Python with type hints, async support, Pydantic validation, serialization, and more.
import rigging as rg

@rg.prompt(generator_id="gpt-4")
async def get_authors(count: int = 3) -> list[str]:
    """Provide famous authors."""

print(await get_authors())

# ['William Shakespeare', 'J.K. Rowling', 'Jane Austen']
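
Note that get_authors() is an async function, so the await above assumes you're already inside an event loop (e.g. a notebook or an async app). From a plain script, a minimal sketch using just the standard library:

import asyncio
import rigging as rg

@rg.prompt(generator_id="gpt-4")
async def get_authors(count: int = 3) -> list[str]:
    """Provide famous authors."""

async def main() -> None:
    # asyncio.run() supplies the event loop the awaited prompt call needs
    print(await get_authors())

asyncio.run(main())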

Rigging is built by dreadnode, where we use it daily.

Installation

We publish every version to PyPI:

pip install rigging

If you want to build from source:

cd rigging/
poetry install
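
This assumes you already have a local checkout; if not, clone the source first (the repository presumably lives under the dreadnode organization on GitHub, matching the docs domain):

git clone https://github.com/dreadnode/rigging.git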

Supported LLMs

Rigging will run just about any language model supported by LiteLLM.

API Keys

Pass the api_key in a generator id or use standard environment variables.

rg.get_generator("gpt-4-turbo,api_key=...")

export OPENAI_API_KEY=...
export MISTRAL_API_KEY=...
export ANTHROPIC_API_KEY=...
...
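
Generation parameters can ride along in the same comma-separated connection string syntax as api_key above. A minimal sketch, assuming common parameter names like temperature and max_tokens are accepted as overrides:

import rigging as rg

# Pack the key and sampling overrides into a single connection string.
# The parameter names here are assumptions based on the api_key syntax above.
generator = rg.get_generator("gpt-4-turbo,api_key=...,temperature=0.5,max_tokens=256")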

Check out the docs for more.

Getting Started

Check out the guide in the docs.

  1. Get a generator using a connection string.
  2. Build a chat or completion pipeline.
  3. Run the pipeline and get the output.
import rigging as rg 

# 1 - Get a generator
generator = rg.get_generator("claude-3-sonnet-20240229")

# 2 - Build a chat pipeline
pipeline = generator.chat([
    {"role": "system", "content": "Talk like a pirate."},
    {"role": "user", "content": "Say hello!"},
])

# 3 - Run the pipeline
chat = await pipeline.run()
print(chat.conversation)

# [system]: Talk like a pirate.
# [user]: Say hello!
# [assistant]: Ahoy, matey! Here be the salty sea dog ready to trade greetings wit' ye. Arrr!
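
Structured output (the first highlight above) plugs into the same pipeline: ask the model to reply inside tags that map to a Pydantic XML model and keep generating until the response parses. A minimal sketch, assuming a single-field rg.Model maps to plain <answer> text content and that until_parsed_as() and parse() are the relevant hooks:

import rigging as rg

# Assumption: the XML tag name is derived from the class name, i.e. <answer>.
class Answer(rg.Model):
    content: str

chat = (
    await rg.get_generator("claude-3-sonnet-20240229")
    .chat("Say hello between <answer></answer> tags.")
    .until_parsed_as(Answer)
    .run()
)

# Parse the validated model back out of the last message
print(chat.last.parse(Answer).content)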

Want more?

Check out the examples.

Documentation

rigging.dreadnode.io has everything you need.


