
cat-stack

Domain-agnostic text, image, and PDF classification engine powered by LLMs.

cat-stack is the shared base package for the CatLLM ecosystem. It provides the core classification, extraction, exploration, and summarization engine that all domain-specific CatLLM packages build on.

Installation

pip install cat-stack

Optional extras:

pip install cat-stack[pdf]         # PDF support (PyMuPDF)
pip install cat-stack[embeddings]  # Embedding similarity scoring
pip install cat-stack[formatter]   # JSON formatter fallback model

Ecosystem

cat-stack is independently useful for classifying any text column. Domain-specific packages extend it with tuned prompts and workflows:

Package Domain
cat-stack General-purpose text, image, PDF classification (this package)
cat-survey Survey response classification
cat-vader Social media text (Reddit, Twitter/X)
cat-ademic Academic papers, PDFs, citations
cat-cog Cognitive assessment & visual scoring (CERAD)
cat-pol Political text (manifestos, speeches, legislation)

Installing cat-llm pulls in all of the above.

Quick Start

import cat_stack as cat

# df is an existing pandas DataFrame; OPENAI_KEY holds your OpenAI API key.
# Classify text into predefined categories
result = cat.classify(
    input_data=df["text_column"],
    categories=["Positive", "Negative", "Neutral"],
    models=[("gpt-4o", "openai", OPENAI_KEY)],  # (model_name, provider, api_key)
    filename="classified.csv"
)

Core API

classify()

Assign predefined categories to text, images, or PDFs. Supports single-model and multi-model ensemble classification with consensus voting.

cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    models=[("gpt-4o", "openai", key1), ("claude-sonnet-4-20250514", "anthropic", key2)],
    filename="results.csv"
)

Inline prompt tuning

Set prompt_tune=True (or pass an integer to tune on that many random items, as below) to automatically optimize the classification prompt before the full run. A browser UI opens for you to correct a small sample, then the optimized prompt is used for all remaining items.

cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    models=[("gpt-4o", "openai", key)],
    prompt_tune=15,       # tune on 15 random items, then classify all
    tune_iterations=3,    # max attempts per category (default 3)
)

prompt_tune()

Standalone automatic prompt optimization. Iteratively refines classification prompts using user feedback — classify a sample, correct mistakes in the browser, and let the LLM generate targeted per-category instructions.

result = cat.prompt_tune(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    api_key="your-key",
    sample_size=15,
    max_iterations=3,
)

# Use the optimized prompt for classification
cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    api_key="your-key",
    system_prompt=result["system_prompt"],
)

extract()

Discover categories from a corpus using LLM-driven exploration.

cat.extract(
    input_data=df["text"],
    survey_question="What is this text about?",
    models=[("gpt-4o", "openai", key)],
)

explore()

Raw category extraction for saturation analysis.

cat.explore(
    input_data=df["text"],
    description="Describe the main themes",
    models=[("gpt-4o", "openai", key)],
)

summarize()

Summarize text or PDF documents, with optional multi-model ensemble.

cat.summarize(
    input_data=df["text"],
    models=[("gpt-4o", "openai", key)],
    filename="summaries.csv"
)

Supported Providers

OpenAI, Anthropic, Google (Gemini), Mistral, Perplexity, xAI (Grok), HuggingFace, Ollama (local models).

All providers use the same (model_name, provider, api_key) tuple format. Provider is auto-detected from model name if omitted.
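The auto-detection described above can be illustrated with a minimal sketch. This is not cat-stack's actual internal logic, and the prefix table here is an assumption based on common model-naming conventions:

```python
# Illustrative prefix-based provider detection; cat-stack's real
# detection logic is internal and may cover more cases.
PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google",
    "mistral-": "mistral",
    "grok-": "xai",
}

def detect_provider(model_name: str) -> str:
    """Infer the provider from a model name's prefix."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model_name.startswith(prefix):
            return provider
    raise ValueError(f"Cannot infer provider for {model_name!r}; pass it explicitly")
```

Under this sketch, `detect_provider("gpt-4o")` yields `"openai"`, so the two-element tuple `("gpt-4o", api_key)` can be expanded to the full three-element form.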

Features

  • Automatic prompt optimization (prompt_tune) — correct a small sample in a browser UI, and the system generates per-category instructions that improve accuracy
  • Multi-model ensemble with consensus voting and agreement scores
  • Batch API support for OpenAI, Anthropic, Google, Mistral, and xAI
  • Prompt strategies: Chain-of-Thought, Chain-of-Verification, step-back prompting, few-shot examples
  • Text, image, and PDF input auto-detection
  • Embedding similarity tiebreaker for ensemble consensus ties
  • Pilot test — validate classifications on a small sample before committing to the full run

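The ensemble consensus voting with agreement scores listed above can be sketched as a simple majority vote. This helper is illustrative only, not cat-stack's actual implementation, and ignores the embedding tiebreaker:

```python
from collections import Counter

def consensus(labels):
    """Majority vote across per-model labels for one item.

    Returns (winning_label, agreement), where agreement is the
    fraction of models that voted for the winner. Hypothetical
    helper for illustration; cat-stack's voting may differ.
    """
    counts = Counter(labels)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(labels)

label, agreement = consensus(["Positive", "Positive", "Neutral"])
```

With three models voting as above, the winner is "Positive" with an agreement score of 2/3; a low agreement score flags items worth reviewing by hand.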
License

GPL-3.0-or-later
