
The highest-level interface for various LLM APIs.


Chatterer

Simplified, Structured AI Assistant Framework

chatterer is a Python library designed as a type-safe LangChain wrapper for interacting with various language models (OpenAI, Anthropic, Google Gemini, Ollama, etc.). It supports structured outputs via Pydantic models, plain text responses, asynchronous calls, image description, code execution, and an interactive shell.

The structured reasoning in chatterer is inspired by the Atom-of-Thought pipeline.


Quick Install

pip install chatterer

Quickstart Example

Generate text quickly using OpenAI. Messages can be passed as plain strings or as structured lists:

from chatterer import Chatterer, HumanMessage, AIMessage, SystemMessage

# Initialize the Chatterer with `openai`, `anthropic`, `google`, or `ollama` models
chatterer: Chatterer = Chatterer.openai("gpt-4.1")

# Get direct response as str
response: str = chatterer("What is the meaning of life?")
# response = chatterer([{ "role": "user", "content": "What is the meaning of life?" }])
# response = chatterer([("user", "What is the meaning of life?")])
# response = chatterer([HumanMessage("What is the meaning of life?")])
print(response)

Image and text content can be sent together:

from chatterer import Base64Image, HumanMessage

# Load an image from a file path or URL; returns a Base64Image, or None on failure
image = Base64Image.from_url_or_path("example.jpg")
# image = Base64Image.from_url_or_path("https://example.com/image.jpg")
assert image is not None, "Failed to load image"

# Alternatively, load an image from bytes
# with open("example.jpg", "rb") as f:
#     image = Base64Image.from_bytes(f.read(), ext="jpeg")

message = HumanMessage(["Describe the image", image.data_uri_content])
response: str = chatterer([message])
print(response)
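Under the hood, `data_uri_content` carries the image as a base64 data URI, the format most vision APIs expect. A minimal sketch of that encoding using only the standard library (the exact URI layout chatterer emits is an assumption here):

```python
import base64

# Stand-in bytes for illustration; a real image would be read from disk
raw = b"\xff\xd8\xff"

# Encode to base64 and wrap in a data URI
b64 = base64.b64encode(raw).decode("ascii")
data_uri = f"data:image/jpeg;base64,{b64}"
print(data_uri)
```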

Structured Output with Pydantic

Define a Pydantic model and get typed responses:

from pydantic import BaseModel

class AnswerModel(BaseModel):
    question: str
    answer: str

# Call with response_model
response: AnswerModel = chatterer("What's the capital of France?", response_model=AnswerModel)
print(response.question, response.answer)

Async Example

Use asynchronous generation for non-blocking operations:

import asyncio

async def main():
    response = await chatterer.agenerate("Explain async in Python briefly.")
    print(response)

asyncio.run(main())

Streaming Structured Outputs

Stream structured responses in real-time:

from pydantic import BaseModel

class AnswerModel(BaseModel):
    text: str

chatterer = Chatterer.openai()
for chunk in chatterer.generate_pydantic_stream(AnswerModel, "Tell me a story"):
    print(chunk.text)

Asynchronous version:

import asyncio

async def main():
    async for chunk in chatterer.agenerate_pydantic_stream(AnswerModel, "Tell me a story"):
        print(chunk.text)

asyncio.run(main())

Image Description

Generate descriptions for images using the language model:

description = chatterer.describe_image("https://example.com/image.jpg")
print(description)

# Customize the instruction
description = chatterer.describe_image("https://example.com/image.jpg", instruction="Describe the main objects in the image.")

An asynchronous version is also available:

import asyncio

async def main():
    description = await chatterer.adescribe_image("https://example.com/image.jpg")
    print(description)

asyncio.run(main())

Code Execution

Generate and execute Python code dynamically:

result = chatterer.exec("Write a function to calculate factorial.")
print(result.code)
print(result.output)

An asynchronous version exists as well:

import asyncio

async def main():
    result = await chatterer.aexec("Write a function to calculate factorial.")
    print(result.output)

asyncio.run(main())

Webpage to Markdown

Convert webpages to Markdown, optionally filtering content with the language model:

from chatterer.tools.web2md import PlayWrightBot

with PlayWrightBot() as bot:
    # Basic conversion
    markdown = bot.url_to_md("https://example.com")
    print(markdown)

    # With LLM filtering and image descriptions
    filtered_md = bot.url_to_md_with_llm("https://example.com", describe_images=True)
    print(filtered_md)

Asynchronous version:

import asyncio

async def main():
    async with PlayWrightBot() as bot:
        markdown = await bot.aurl_to_md_with_llm("https://example.com")
        print(markdown)

asyncio.run(main())

Extract specific elements:

with PlayWrightBot() as bot:
    headings = bot.select_and_extract("https://example.com", "h2")
    print(headings)

Citation Chunking

Chunk documents into semantic sections with citations:

from chatterer import Chatterer
from chatterer.tools import citation_chunker

chatterer = Chatterer.openai()
document = "Long text about quantum computing..."
chunks = citation_chunker(document, chatterer, global_coverage_threshold=0.9)
for chunk in chunks:
    print(f"Subject: {chunk.name}")
    for source, matches in chunk.references.items():
        print(f"  Source: {source}, Matches: {matches}")

Interactive Shell

Engage in a conversational AI session with code execution support:

from chatterer import interactive_shell

interactive_shell()

This launches an interactive session where you can chat with the AI and execute code snippets. Type quit or exit to end the session.


Atom-of-Thought Pipeline (AoT)

Structured reasoning inspired by Atom-of-Thought. Decomposes questions recursively, generates answers in parallel, and ensembles the best result.
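The control flow can be sketched with stand-in functions in place of LLM calls (a toy illustration of the recursive decompose-then-ensemble idea, not the library's implementation):

```python
# Toy Atom-of-Thought sketch: answer directly, decompose recursively up to
# max_depth, then ensemble the candidate answers. All functions are stand-ins.
def direct_answer(q):
    # Stand-in for a single LLM call
    return f"direct({q})"

def decompose(q, depth, max_depth=2):
    # Recurse until max_depth, then fall back to a direct answer
    if depth >= max_depth:
        return direct_answer(q)
    subs = [f"{q}/sub{i}" for i in range(2)]  # stand-in decomposition
    sub_answers = [decompose(s, depth + 1, max_depth) for s in subs]
    return f"combine({', '.join(sub_answers)})"

def ensemble(candidates):
    # Stand-in ensembling: the library would score candidates; here, take the first
    return candidates[0]

result = ensemble([direct_answer("Q"), decompose("Q", 0)])
print(result)
```

In the real pipeline the sub-question answers are generated in parallel (the `max_workers` knob below) and the ensemble step is itself an LLM judgment with a confidence score.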

Reusable Pipeline

from chatterer.strategies import aot_pipeline

# Create once, use many times
pipeline = aot_pipeline(
    Chatterer.openai(),
    max_depth=2,           # Recursion depth
    max_sub_questions=3,   # Max sub-questions per level
    max_workers=4,         # Parallel workers
)

result1 = pipeline("First question")
result2 = pipeline("Second question")

Result Object

result.answer             # Final answer (also via str(result))
result.confidence         # 0.0 - 1.0
result.direct_answer      # Direct path answer
result.decompose_answer   # Decomposition path answer
result.sub_questions      # List[SubQuestion] with .question and .answer
result.contracted_question  # Contracted, self-contained form of the question

Progress Tracking

pipeline = aot_pipeline(
    chatterer,
    on_progress=lambda stage, msg: print(f"[{stage}] {msg}")
)
result = pipeline("Your question")
# [start] Processing: Your question...
# [direct] Direct answer: ...
# [decompose] Decomposing at depth 2...
# [ensemble] Final answer (confidence: 0.85)

Supported Models

Chatterer supports multiple language models, easily initialized as follows:

  • OpenAI
  • Anthropic
  • Google Gemini
  • Ollama (local models)

openai_chatterer = Chatterer.openai("gpt-4o-mini")
anthropic_chatterer = Chatterer.anthropic("claude-3-7-sonnet-20250219")
gemini_chatterer = Chatterer.google("gemini-2.0-flash")
ollama_chatterer = Chatterer.ollama("deepseek-r1:1.5b")

Advanced Features

  • Streaming Responses: Use generate_stream or agenerate_stream for real-time output.
  • Streaming Structured Outputs: Stream Pydantic-typed responses with generate_pydantic_stream or agenerate_pydantic_stream.
  • Async/Await Support: All methods have asynchronous counterparts (e.g., agenerate, adescribe_image).
  • Structured Outputs: Leverage Pydantic models for typed responses.
  • Image Description: Generate descriptions for images with describe_image.
  • Code Execution: Dynamically generate and execute Python code with exec.
  • Webpage to Markdown: Convert webpages to Markdown with PlayWrightBot, including JavaScript rendering, element extraction, and LLM-based content filtering.
  • Citation Chunking: Semantically chunk documents and extract citations with citation_chunker, including coverage analysis.
  • Interactive Shell: Use interactive_shell for conversational AI with code execution.
  • Token Counting: Retrieve input/output token counts with get_num_tokens_from_message.
  • Utilities: Tools for content processing (e.g., html_to_markdown, pdf_to_text, get_youtube_video_subtitle, citation_chunker) are available in the tools module.

# Example: Convert PDF to text
from chatterer.tools import pdf_to_text
text = pdf_to_text("example.pdf")
print(text)

# Example: Get YouTube subtitles
from chatterer.tools import get_youtube_video_subtitle
subtitles = get_youtube_video_subtitle("https://www.youtube.com/watch?v=example")
print(subtitles)

# Example: Get token counts
from chatterer.messages import HumanMessage
msg = HumanMessage(content="Hello, world!")
tokens = chatterer.get_num_tokens_from_message(msg)
if tokens:
    input_tokens, output_tokens = tokens
    print(f"Input: {input_tokens}, Output: {output_tokens}")
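The streaming methods listed above yield incremental chunks that can be printed as they arrive. Since no API key is assumed here, a stand-in generator illustrates the consumption pattern (assuming `generate_stream` yields text fragments):

```python
# Stand-in for chatterer.generate_stream("..."), which streams text chunks
def fake_stream():
    for chunk in ["Streaming ", "keeps the UI ", "responsive."]:
        yield chunk

# Consume chunks as they arrive, accumulating the full response
pieces = []
for chunk in fake_stream():
    pieces.append(chunk)
    print(chunk, end="", flush=True)

full_text = "".join(pieces)
```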

Contributing

We welcome contributions! Feel free to open an issue or submit a pull request on the repository.


License

MIT License
