
cat-stack

Domain-agnostic text, image, PDF, and DOCX classification engine powered by LLMs.

cat-stack is the shared base package for the CatLLM ecosystem. It provides the core classification, extraction, exploration, and summarization engine that all domain-specific CatLLM packages build on.

Installation

pip install cat-stack

Optional extras:

pip install cat-stack[pdf]         # PDF support (PyMuPDF)
pip install cat-stack[embeddings]  # Embedding similarity scoring
pip install cat-stack[formatter]   # JSON formatter fallback model

Ecosystem

cat-stack is independently useful for classifying any text column. Domain-specific packages extend it with tuned prompts and workflows:

Package     Domain
cat-stack   General-purpose text, image, PDF classification (this package)
cat-survey  Survey response classification
cat-vader   Social media text (Reddit, Twitter/X)
cat-ademic  Academic papers, PDFs, citations
cat-cog     Cognitive assessment & visual scoring (CERAD)
cat-pol     Political text (manifestos, speeches, legislation)

Installing cat-llm pulls in all of the above.

Quick Start

import pandas as pd
import cat_stack as cat

# df and OPENAI_KEY are placeholders: any DataFrame with a text column
# and your OpenAI API key will do.
df = pd.DataFrame({"text_column": ["Great product!", "Awful experience.", "It was fine."]})
OPENAI_KEY = "sk-..."

# Classify text into predefined categories
result = cat.classify(
    input_data=df["text_column"],
    categories=["Positive", "Negative", "Neutral"],
    models=[("gpt-4o", "openai", OPENAI_KEY)],
    filename="classified.csv"
)

Core API

classify()

Assign predefined categories to text, images, or PDFs. Supports single-model and multi-model ensemble classification with consensus voting.

cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    models=[("gpt-4o", "openai", key1), ("claude-sonnet-4-20250514", "anthropic", key2)],
    filename="results.csv"
)

Inline prompt tuning

Add prompt_tune=True (or set it to an integer to control the sample size) to automatically optimize the classification prompt before the full run. A browser UI opens for you to correct a small sample, then the optimized prompt is used for all remaining items.

cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    models=[("gpt-4o", "openai", key)],
    prompt_tune=15,       # tune on 15 random items, then classify all
    tune_iterations=3,    # max attempts per category (default 3)
)

prompt_tune()

Standalone automatic prompt optimization. Iteratively refines classification prompts using user feedback — classify a sample, correct mistakes in the browser, and let the LLM generate targeted per-category instructions.

result = cat.prompt_tune(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    api_key="your-key",
    sample_size=15,
    max_iterations=3,
)

# Use the optimized prompt for classification
cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    api_key="your-key",
    system_prompt=result["system_prompt"],
)

extract()

Discover categories from a corpus using LLM-driven exploration.

cat.extract(
    input_data=df["text"],
    survey_question="What is this text about?",
    models=[("gpt-4o", "openai", key)],
)

explore()

Raw category extraction for saturation analysis.

cat.explore(
    input_data=df["text"],
    description="Describe the main themes",
    models=[("gpt-4o", "openai", key)],
)

summarize()

Summarize text or PDF documents, with optional multi-model ensemble.

cat.summarize(
    input_data=df["text"],
    models=[("gpt-4o", "openai", key)],
    filename="summaries.csv"
)

Supported Providers

OpenAI, Anthropic, Google (Gemini), Mistral, Perplexity, xAI (Grok), HuggingFace, Ollama (local models).

All providers use the same (model_name, provider, api_key) tuple format. Provider is auto-detected from model name if omitted.
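For intuition, name-based auto-detection can be sketched as a prefix lookup. This is a hypothetical illustration only; cat-stack's actual detection rules and prefix table are not documented here, and the PREFIXES mapping below is an assumption.

```python
# Hypothetical prefix table -- cat-stack's real detection logic may differ.
PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google",
    "mistral-": "mistral",
    "grok-": "xai",
}

def detect_provider(model_name):
    """Guess the provider from a model-name prefix; None means 'unknown'."""
    for prefix, provider in PREFIXES.items():
        if model_name.startswith(prefix):
            return provider
    return None  # caller should then supply the provider explicitly
```

In this sketch, detect_provider("gpt-4o") resolves to "openai", while an unrecognized name falls back to requiring an explicit provider in the tuple.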

Features

  • Automatic prompt optimization (prompt_tune) — correct a small sample in a browser UI, and the system generates per-category instructions that improve accuracy
  • Multi-model ensemble with consensus voting and agreement scores
  • Batch API support for OpenAI, Anthropic, Google, Mistral, and xAI
  • Prompt strategies: Chain-of-Thought, Chain-of-Verification, step-back prompting, few-shot examples
  • Text, image, and PDF input auto-detection
  • Embedding similarity tiebreaker for ensemble consensus ties
  • Pilot test — validate classifications on a small sample before committing to the full run
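As a rough illustration of how ensemble consensus with an agreement score works conceptually (this is a sketch of simple majority voting, not cat-stack's internal implementation; the embedding-similarity tiebreaker for exact ties is omitted):

```python
from collections import Counter

def consensus(labels):
    """Return the most common label and the fraction of models that chose it."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)

# Three models vote on one item: two say Positive, one says Neutral,
# so the consensus label is Positive with an agreement score of 2/3.
label, agreement = consensus(["Positive", "Positive", "Neutral"])
```

When all models agree the score is 1.0; lower scores flag items worth manual review.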

License

GPL-3.0-or-later
