
The highest-level interface for various LLM APIs.


chatterer

chatterer is a Python library that provides a unified interface for interacting with various large language model (LLM) backends. It abstracts over providers such as OpenAI, Anthropic, DeepSeek, and Ollama, as well as Langchain chat models, letting you generate completions, stream responses, and validate outputs against Pydantic models.


Features

  • Unified LLM Interface
    Define a common interface (LLM) for generating completions and streaming responses regardless of the underlying provider.

  • Multiple Backend Support
    Built-in support for:

    • InstructorLLM: Integrates with OpenAI, Anthropic, and DeepSeek.
    • OllamaLLM: Supports Ollama models with optional streaming and output formatting.
    • LangchainLLM: Leverages Langchain’s chat models with conversion utilities.
  • Pydantic Integration
    Easily validate and structure LLM responses by leveraging Pydantic models with methods like generate_pydantic and generate_pydantic_stream.
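The unified-interface idea above can be sketched in plain Python. This is an illustrative pattern only, not chatterer's actual source: a small abstract base class plus a toy echo backend standing in for a real provider.

```python
from abc import ABC, abstractmethod
from typing import Iterator, Sequence


class BaseLLM(ABC):
    """Minimal sketch of a provider-agnostic LLM interface."""

    @abstractmethod
    def generate(self, messages: Sequence[dict]) -> str:
        """Return the full completion for a conversation."""

    @abstractmethod
    def generate_stream(self, messages: Sequence[dict]) -> Iterator[str]:
        """Yield the completion incrementally."""


class EchoLLM(BaseLLM):
    """Toy backend that echoes the last user message, word by word."""

    def generate(self, messages: Sequence[dict]) -> str:
        return "".join(self.generate_stream(messages))

    def generate_stream(self, messages: Sequence[dict]) -> Iterator[str]:
        for word in messages[-1]["content"].split():
            yield word + " "


llm: BaseLLM = EchoLLM()
print(llm.generate([{"role": "user", "content": "hello world"}]))
```

Because callers only depend on `generate`/`generate_stream`, swapping the toy backend for a real provider is a one-line change; this is the property the library's backends share.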


Installation

Install from PyPI:

pip install chatterer

Alternatively, clone the repository and install it manually (note that installing only requirements.txt would fetch dependencies without installing the package itself):

git clone https://github.com/yourusername/chatterer.git
cd chatterer
pip install .

Usage

Importing the Library

You can import the core components directly from chatterer:

from chatterer import LLM, InstructorLLM, OllamaLLM, LangchainLLM

Example 1: Using InstructorLLM with OpenAI

from chatterer import InstructorLLM
from openai.types.chat import ChatCompletionMessageParam

# Create an instance for OpenAI using the InstructorLLM wrapper
llm = InstructorLLM.openai(call_kwargs={"model": "o3-mini"})

# Define a conversation message list
messages: list[ChatCompletionMessageParam] = [
    {"role": "user", "content": "Hello, how can I help you?"}
]

# Generate a completion
response = llm.generate(messages)
print("Response:", response)

# Stream the response incrementally
print("Streaming response:")
for chunk in llm.generate_stream(messages):
    print(chunk, end="")

Example 2: Using OllamaLLM

from chatterer import OllamaLLM
from openai.types.chat import ChatCompletionMessageParam

# Initialize an OllamaLLM instance with streaming enabled
llm = OllamaLLM(model="ollama-model", stream=True)

messages: list[ChatCompletionMessageParam] = [
    {"role": "user", "content": "Tell me a joke."}
]

# Generate and print the full response
print("Response:", llm.generate(messages))

# Stream the response chunk by chunk
print("Streaming response:")
for chunk in llm.generate_stream(messages):
    print(chunk, end="")

Example 3: Using LangchainLLM

from chatterer import LangchainLLM
from openai.types.chat import ChatCompletionMessageParam
# Ensure you have a Langchain chat model instance; for example:
from langchain_core.language_models.chat_models import BaseChatModel

client: BaseChatModel = ...  # Initialize your Langchain chat model here
llm = LangchainLLM(client=client)

messages: list[ChatCompletionMessageParam] = [
    {"role": "user", "content": "What is the weather like today?"}
]

# Generate a complete response
response = llm.generate(messages)
print("Response:", response)

# Stream the response
print("Streaming response:")
for chunk in llm.generate_stream(messages):
    print(chunk, end="")

Example 4: Using Pydantic for Structured Outputs

from pydantic import BaseModel
from chatterer import InstructorLLM
from openai.types.chat import ChatCompletionMessageParam

# Define a response model
class MyResponse(BaseModel):
    response: str

# Initialize the InstructorLLM instance
llm = InstructorLLM.openai()

messages: list[ChatCompletionMessageParam] = [
    {"role": "user", "content": "Summarize this text."}
]

# Generate a structured response using a Pydantic model
structured_response = llm.generate_pydantic(MyResponse, messages)
print("Structured Response:", structured_response.response)

API Overview

LLM (Abstract Base Class)

  • Methods:
    • generate(messages: Sequence[ChatCompletionMessageParam]) -> str
      Generate a complete text response from a list of messages.

    • generate_stream(messages: Sequence[ChatCompletionMessageParam]) -> Iterator[str]
      Stream the response incrementally.

    • generate_pydantic(response_model: Type[P], messages: Sequence[ChatCompletionMessageParam]) -> P
      Generate and validate the response using a Pydantic model.

    • generate_pydantic_stream(response_model: Type[P], messages: Sequence[ChatCompletionMessageParam]) -> Iterator[P]
      (Optional) Stream validated responses as Pydantic models.
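Conceptually, generate_pydantic asks the backend for structured output and validates it into a typed object before returning it. A rough stdlib-only sketch of that flow, using a dataclass in place of a Pydantic model and a canned string in place of a real completion:

```python
import json
from dataclasses import dataclass


@dataclass
class Summary:
    response: str


def parse_structured(raw: str) -> Summary:
    """Validate a raw LLM completion into a typed object,
    raising if a required field is missing."""
    data = json.loads(raw)
    if "response" not in data:
        raise ValueError("completion is missing the 'response' field")
    return Summary(response=data["response"])


raw_completion = '{"response": "A short summary."}'  # stand-in for llm.generate(...)
print(parse_structured(raw_completion).response)
```

The validation step is what distinguishes generate_pydantic from generate: a malformed completion raises instead of silently propagating bad data.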

InstructorLLM

  • Factory methods to create instances with various backends:
    • openai()
    • anthropic()
    • deepseek()

OllamaLLM

  • Supports additional options such as:
    • model, stream, format, tools, options, keep_alive
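How such options typically flow into a backend request can be sketched as follows. The field names mirror the option list above, but this is a hypothetical payload builder, not OllamaLLM's real internals:

```python
from typing import Any, Optional


def build_ollama_payload(
    model: str,
    stream: bool = False,
    format: Optional[str] = None,
    options: Optional[dict] = None,
    keep_alive: Optional[str] = None,
) -> dict:
    """Assemble a request payload, omitting any option left at None."""
    payload: dict[str, Any] = {"model": model, "stream": stream}
    optional = {"format": format, "options": options, "keep_alive": keep_alive}
    for key, value in optional.items():
        if value is not None:
            payload[key] = value
    return payload


print(build_ollama_payload("llama3", stream=True, keep_alive="5m"))
```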

LangchainLLM

  • Integrates with Langchain's BaseChatModel and converts messages to a compatible format.
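The message conversion can be pictured like this: OpenAI-style role/content dicts are mapped onto the (role, content) tuples that Langchain chat models accept. This is an illustrative mapping, not the library's actual converter:

```python
def to_langchain_messages(messages: list[dict]) -> list[tuple[str, str]]:
    """Map OpenAI-style message dicts onto (role, content) tuples,
    renaming roles to Langchain's conventions."""
    role_map = {"user": "human", "assistant": "ai", "system": "system"}
    converted = []
    for msg in messages:
        role = role_map.get(msg["role"], msg["role"])
        converted.append((role, msg["content"]))
    return converted


print(to_langchain_messages([{"role": "user", "content": "Hi"}]))
```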

Contributing

Contributions are highly encouraged! If you find a bug or have a feature request, please open an issue or submit a pull request on the repository. When contributing, please ensure your code adheres to the existing style and passes all tests.


License

This project is licensed under the MIT License.



Download files

Download the file for your platform.

Source Distribution

chatterer-0.1.1.tar.gz (6.8 kB)

Uploaded Source

Built Distribution


chatterer-0.1.1-py3-none-any.whl (6.8 kB)

Uploaded Python 3

File details

Details for the file chatterer-0.1.1.tar.gz.

File metadata

  • Download URL: chatterer-0.1.1.tar.gz
  • Upload date:
  • Size: 6.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.2

File hashes

Hashes for chatterer-0.1.1.tar.gz:

  • SHA256: 14e7c590d95ec980740660c2c5bc5e358d1340220a259dbf7c9281a7efdb704f
  • MD5: 8ad1eaca3e270122c44cc8fee9961596
  • BLAKE2b-256: 97243b53e262eb8eca70d52d65ebdbf38d53c81dbecaa2586f684dfabb6701d9


File details

Details for the file chatterer-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: chatterer-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 6.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.2

File hashes

Hashes for chatterer-0.1.1-py3-none-any.whl:

  • SHA256: 283cbd4239655d3a34ce15d8c0494ddb1458e812a19985a87d1f0d81e560411e
  • MD5: 87f1ca269e6097395932b49a050a4578
  • BLAKE2b-256: bc9ea58cf67d16c9943ee440cc85f068e49eb0f437d27c1fae57aadd50e20519
