The highest-level interface for various LLM APIs.

chatterer

chatterer is a Python library that provides a unified interface for interacting with various large language model (LLM) backends. It abstracts over providers such as OpenAI, Anthropic, DeepSeek, Ollama, and Langchain, letting you generate completions, stream responses, and validate outputs against Pydantic models.


Features

  • Unified LLM Interface
    Define a common interface (LLM) for generating completions and streaming responses regardless of the underlying provider.

  • Multiple Backend Support
    Built-in support for:

    • InstructorLLM: Integrates with OpenAI, Anthropic, and DeepSeek.
    • OllamaLLM: Supports Ollama models, with optional streaming and output formatting.
    • LangchainLLM: Leverages Langchain’s chat models with conversion utilities.
  • Pydantic Integration
    Easily validate and structure LLM responses by leveraging Pydantic models with methods like generate_pydantic and generate_pydantic_stream.
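
The unified-interface idea behind these features can be sketched with a minimal, standard-library-only stub. The `LLM` class and method names mirror this README's API overview, but `EchoLLM` and its internals are invented for illustration and are not part of chatterer:

```python
from abc import ABC, abstractmethod
from typing import Iterator, Sequence

class LLM(ABC):
    """Common interface every backend implements."""

    @abstractmethod
    def generate(self, messages: Sequence[dict]) -> str: ...

    @abstractmethod
    def generate_stream(self, messages: Sequence[dict]) -> Iterator[str]: ...

class EchoLLM(LLM):
    """Toy backend that echoes the last user message, word by word."""

    def generate(self, messages: Sequence[dict]) -> str:
        # A full response is just the concatenated stream.
        return "".join(self.generate_stream(messages))

    def generate_stream(self, messages: Sequence[dict]) -> Iterator[str]:
        # Yield the reply in pieces to mimic token streaming.
        last = messages[-1]["content"]
        for word in last.split():
            yield word + " "

llm: LLM = EchoLLM()
print(llm.generate([{"role": "user", "content": "hello world"}]))
```

Because every backend satisfies the same `LLM` contract, calling code can swap providers without changing how it generates or streams.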


Installation

Install chatterer from PyPI with pip:

pip install chatterer

Alternatively, clone the repository and install manually:

git clone https://github.com/yourusername/chatterer.git
cd chatterer
pip install -r requirements.txt

Usage

Importing the Library

You can import the core components directly from chatterer:

from chatterer import LLM, InstructorLLM, OllamaLLM, LangchainLLM

Example 1: Using InstructorLLM with OpenAI

from chatterer import InstructorLLM
from openai.types.chat import ChatCompletionMessageParam

# Create an instance for OpenAI using the InstructorLLM wrapper
llm = InstructorLLM.openai(call_kwargs={"model": "o3-mini"})

# Define a conversation message list
messages: list[ChatCompletionMessageParam] = [
    {"role": "user", "content": "Hello, how can I help you?"}
]

# Generate a completion
response = llm.generate(messages)
print("Response:", response)

# Stream the response incrementally
print("Streaming response:")
for chunk in llm.generate_stream(messages):
    print(chunk, end="")

Example 2: Using OllamaLLM

from chatterer import OllamaLLM
from openai.types.chat import ChatCompletionMessageParam

# Initialize an OllamaLLM instance with streaming enabled
llm = OllamaLLM(model="ollama-model", stream=True)

messages: list[ChatCompletionMessageParam] = [
    {"role": "user", "content": "Tell me a joke."}
]

# Generate and print the full response
print("Response:", llm.generate(messages))

# Stream the response chunk by chunk
print("Streaming response:")
for chunk in llm.generate_stream(messages):
    print(chunk, end="")

Example 3: Using LangchainLLM

from chatterer import LangchainLLM
from openai.types.chat import ChatCompletionMessageParam
# Ensure you have a Langchain chat model instance; for example:
from langchain_core.language_models.chat_models import BaseChatModel

client: BaseChatModel = ...  # Initialize your Langchain chat model here
llm = LangchainLLM(client=client)

messages: list[ChatCompletionMessageParam] = [
    {"role": "user", "content": "What is the weather like today?"}
]

# Generate a complete response
response = llm.generate(messages)
print("Response:", response)

# Stream the response
print("Streaming response:")
for chunk in llm.generate_stream(messages):
    print(chunk, end="")

Example 4: Using Pydantic for Structured Outputs

from pydantic import BaseModel
from chatterer import InstructorLLM
from openai.types.chat import ChatCompletionMessageParam

# Define a response model
class MyResponse(BaseModel):
    response: str

# Initialize the InstructorLLM instance
llm = InstructorLLM.openai()

messages: list[ChatCompletionMessageParam] = [
    {"role": "user", "content": "Summarize this text."}
]

# Generate a structured response using a Pydantic model
structured_response = llm.generate_pydantic(MyResponse, messages)
print("Structured Response:", structured_response.response)

API Overview

LLM (Abstract Base Class)

  • Methods:
    • generate(messages: Sequence[ChatCompletionMessageParam]) -> str
      Generate a complete text response from a list of messages.

    • generate_stream(messages: Sequence[ChatCompletionMessageParam]) -> Iterator[str]
      Stream the response incrementally.

    • generate_pydantic(response_model: Type[P], messages: Sequence[ChatCompletionMessageParam]) -> P
      Generate and validate the response using a Pydantic model.

    • generate_pydantic_stream(response_model: Type[P], messages: Sequence[ChatCompletionMessageParam]) -> Iterator[P]
      (Optional) Stream validated responses as Pydantic models.
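
The structured-output methods follow a common pattern: ask the model for JSON, then parse and validate the reply into a typed object. A dependency-free sketch of that pattern, using a dataclass where chatterer uses Pydantic (`fake_model_output` stands in for the raw LLM reply; the validation logic is illustrative, not chatterer's actual code):

```python
import json
from dataclasses import dataclass

@dataclass
class Summary:
    response: str

def parse_structured(raw: str) -> Summary:
    """Validate a raw JSON reply into a typed object, analogous to
    what generate_pydantic does with a Pydantic model."""
    data = json.loads(raw)
    if not isinstance(data.get("response"), str):
        raise ValueError("field 'response' must be a string")
    return Summary(response=data["response"])

fake_model_output = '{"response": "A short summary."}'
print(parse_structured(fake_model_output).response)
```

Validation at the boundary means downstream code can rely on typed fields instead of re-checking raw model text.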

InstructorLLM

  • Factory methods to create instances with various backends:
    • openai()
    • anthropic()
    • deepseek()

OllamaLLM

  • Supports additional options such as:
    • model, stream, format, tools, options, keep_alive

LangchainLLM

  • Integrates with Langchain's BaseChatModel and converts messages to a compatible format.
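
Conversion of this kind typically maps OpenAI-style role dicts onto Langchain's message classes (HumanMessage, AIMessage, SystemMessage). A self-contained sketch with stand-in classes — the real conversion lives inside LangchainLLM and may differ:

```python
from dataclasses import dataclass

# Stand-ins for langchain_core's HumanMessage / AIMessage / SystemMessage.
@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

@dataclass
class SystemMessage:
    content: str

ROLE_MAP = {"user": HumanMessage, "assistant": AIMessage, "system": SystemMessage}

def convert_messages(messages):
    """Map OpenAI-style role dicts to Langchain-style message objects."""
    return [ROLE_MAP[m["role"]](content=m["content"]) for m in messages]

converted = convert_messages([
    {"role": "system", "content": "Be brief."},
    {"role": "user", "content": "Hi!"},
])
print(converted)
```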

Contributing

Contributions are highly encouraged! If you find a bug or have a feature request, please open an issue or submit a pull request on the repository. When contributing, please ensure your code adheres to the existing style and passes all tests.


License

This project is licensed under the MIT License.
