
Domain-agnostic text, image, PDF, and DOCX classification engine powered by LLMs

Project description

cat-stack

Domain-agnostic text, image, and PDF classification engine powered by LLMs.

cat-stack is the shared base package for the CatLLM ecosystem. It provides the core classification, extraction, exploration, and summarization engine that all domain-specific CatLLM packages build on.

Installation

pip install cat-stack

Optional extras:

pip install cat-stack[pdf]         # PDF support (PyMuPDF)
pip install cat-stack[embeddings]  # Embedding similarity scoring
pip install cat-stack[formatter]   # JSON formatter fallback model

Ecosystem

cat-stack is independently useful for classifying any text column. Domain-specific packages extend it with tuned prompts and workflows:

| Package | Domain |
| --- | --- |
| cat-stack | General-purpose text, image, PDF classification (this package) |
| cat-survey | Survey response classification |
| cat-vader | Social media text (Reddit, Twitter/X) |
| cat-ademic | Academic papers, PDFs, citations |
| cat-cog | Cognitive assessment & visual scoring (CERAD) |
| cat-pol | Political text (manifestos, speeches, legislation) |

Installing cat-llm pulls in all of the above.

Quick Start

import os

import pandas as pd
import cat_stack as cat

df = pd.read_csv("your_data.csv")  # any DataFrame with a text column

# Classify text into predefined categories
result = cat.classify(
    input_data=df["text_column"],
    categories=["Positive", "Negative", "Neutral"],
    models=[("gpt-4o", "openai", os.environ["OPENAI_API_KEY"])],
    filename="classified.csv",
)

Core API

classify()

Assign predefined categories to text, images, or PDFs. Supports single-model and multi-model ensemble classification with consensus voting.

cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    models=[("gpt-4o", "openai", key1), ("claude-sonnet-4-20250514", "anthropic", key2)],
    filename="results.csv"
)
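
The ensemble mechanics can be sketched as plain majority voting over the per-model labels (a simplified illustration, not cat-stack's internal implementation; the function name is hypothetical):

```python
from collections import Counter

def consensus_vote(labels):
    """Majority vote over per-model labels.

    Returns (winning_label, agreement), where agreement is the fraction
    of models that chose the winner. A first-place tie is reported as
    None so a downstream tiebreaker can resolve it.
    """
    counts = Counter(labels)
    (top, top_n), *rest = counts.most_common()
    if rest and rest[0][1] == top_n:  # two labels tied for first place
        return None, top_n / len(labels)
    return top, top_n / len(labels)
```

For example, `consensus_vote(["Cat A", "Cat A", "Cat B"])` yields `("Cat A", 2/3)`, matching the idea of an agreement score attached to each consensus label.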

Inline prompt tuning

Add prompt_tune=True (or an integer sample size) to automatically optimize the classification prompt before the full run. A browser UI opens for you to correct a small sample; the optimized prompt is then used for all remaining items.

cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    models=[("gpt-4o", "openai", key)],
    prompt_tune=15,       # tune on 15 random items, then classify all
    tune_iterations=3,    # max attempts per category (default 3)
)

prompt_tune()

Standalone automatic prompt optimization. Iteratively refines classification prompts using user feedback — classify a sample, correct mistakes in the browser, and let the LLM generate targeted per-category instructions.

result = cat.prompt_tune(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    api_key="your-key",
    sample_size=15,
    max_iterations=3,
)

# Use the optimized prompt for classification
cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    api_key="your-key",
    system_prompt=result["system_prompt"],
)

extract()

Discover categories from a corpus using LLM-driven exploration.

cat.extract(
    input_data=df["text"],
    survey_question="What is this text about?",
    models=[("gpt-4o", "openai", key)],
)

explore()

Raw category extraction for saturation analysis.

cat.explore(
    input_data=df["text"],
    description="Describe the main themes",
    models=[("gpt-4o", "openai", key)],
)
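
Saturation analysis here generally means tracking how many new categories each additional document surfaces: once the cumulative count of distinct categories plateaus, the category set is saturated. A minimal sketch of that curve (plain Python, not part of the cat-stack API):

```python
def saturation_curve(category_lists):
    """Cumulative count of distinct categories after each document.

    category_lists: one list of extracted category labels per document,
    e.g. the raw output of an exploration pass.
    """
    seen, curve = set(), []
    for labels in category_lists:
        seen.update(labels)
        curve.append(len(seen))
    return curve
```

A flat tail (e.g. `[2, 3, 4, 4, 4]`) suggests that additional documents are no longer contributing new themes.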

summarize()

Summarize text or PDF documents, with optional multi-model ensemble.

cat.summarize(
    input_data=df["text"],
    models=[("gpt-4o", "openai", key)],
    filename="summaries.csv"
)

Supported Providers

OpenAI, Anthropic, Google (Gemini), Mistral, Perplexity, xAI (Grok), HuggingFace, Ollama (local models).

All providers use the same (model_name, provider, api_key) tuple format. Provider is auto-detected from model name if omitted.
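
Auto-detection presumably keys off well-known model-name prefixes; here is a sketch of that idea (the prefix table below is illustrative, not cat-stack's actual mapping):

```python
# Illustrative prefix table; cat-stack's real mapping may differ.
_PREFIXES = {
    "gpt": "openai",
    "claude": "anthropic",
    "gemini": "google",
    "mistral": "mistral",
    "sonar": "perplexity",
    "grok": "xai",
}

def detect_provider(model_name):
    """Guess the provider from the model name's leading token."""
    head = model_name.lower().split("-")[0]
    return _PREFIXES.get(head)
```

With this scheme, `detect_provider("gpt-4o")` returns `"openai"` and `detect_provider("claude-sonnet-4-20250514")` returns `"anthropic"`; an unrecognized name returns `None`, which is where an explicit provider in the tuple would be required.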

Features

  • Automatic prompt optimization (prompt_tune) — correct a small sample in a browser UI, and the system generates per-category instructions that improve accuracy
  • Multi-model ensemble with consensus voting and agreement scores
  • Batch API support for OpenAI, Anthropic, Google, Mistral, and xAI
  • Prompt strategies: Chain-of-Thought, Chain-of-Verification, step-back prompting, few-shot examples
  • Text, image, and PDF input auto-detection
  • Embedding similarity tiebreaker for ensemble consensus ties
  • Pilot test — validate classifications on a small sample before committing to the full run
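
The embedding tiebreaker idea: when the ensemble vote ties, embed the input and each tied label and pick the label nearest by cosine similarity. A dependency-free sketch (the real feature uses the [embeddings] extra; the precomputed vectors here stand in for actual embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def break_tie(text_vec, label_vecs):
    """Pick the tied label whose embedding is most similar to the input's."""
    return max(label_vecs, key=lambda label: cosine(text_vec, label_vecs[label]))
```

For instance, with toy 2-D "embeddings" `{"Cat A": [1.0, 0.0], "Cat B": [0.0, 1.0]}`, an input vector of `[0.9, 0.1]` resolves the tie to `"Cat A"`.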

License

GPL-3.0-or-later

Download files

Download the file for your platform.

Source Distribution

cat_stack-1.0.2.tar.gz (466.7 kB)

Uploaded Source

Built Distribution


cat_stack-1.0.2-py3-none-any.whl (490.5 kB)

Uploaded Python 3

File details

Details for the file cat_stack-1.0.2.tar.gz.

File metadata

  • Download URL: cat_stack-1.0.2.tar.gz
  • Upload date:
  • Size: 466.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.14

File hashes

Hashes for cat_stack-1.0.2.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 95dcc050d7618dc82175df833f1118db421a1a8dff3c9d10f9a19b1dd4fe46ec |
| MD5 | 255c207ba53fb7b229228d1a00d3414a |
| BLAKE2b-256 | 5d256a1109e299b118825168a24d1c9adaffe3b5364e2f14c65203fc47d2b9e0 |
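
To verify a downloaded distribution against the digests above, hash the file locally and compare; for example, with Python's standard library (the path argument is whatever file you downloaded):

```python
import hashlib

def sha256_of(path):
    """Hex SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# sha256_of("cat_stack-1.0.2.tar.gz") should equal the SHA256 digest listed above
```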


File details

Details for the file cat_stack-1.0.2-py3-none-any.whl.

File metadata

  • Download URL: cat_stack-1.0.2-py3-none-any.whl
  • Upload date:
  • Size: 490.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.14

File hashes

Hashes for cat_stack-1.0.2-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 1e9dbb8918dba8dd7e7615d5cb0da59fdcc53e129fdf6078e4b5860b12649758 |
| MD5 | e39147ceea2a474f3ff42a8966d837be |
| BLAKE2b-256 | f81a7439b8c2870356b25b6ceacba0d99d833070bb5be6353727ae3679afd2bd |

