cat-stack

Domain-agnostic text, image, PDF, and DOCX classification engine powered by LLMs.

cat-stack is the shared base package for the CatLLM ecosystem. It provides the core classification, extraction, exploration, and summarization engine that all domain-specific CatLLM packages build on.

Installation

pip install cat-stack

Optional extras:

pip install cat-stack[pdf]         # PDF support (PyMuPDF)
pip install cat-stack[embeddings]  # Embedding similarity scoring
pip install cat-stack[formatter]   # JSON formatter fallback model

Ecosystem

cat-stack is independently useful for classifying any text column. Domain-specific packages extend it with tuned prompts and workflows:

Package     Domain
cat-stack   General-purpose text, image, PDF classification (this package)
cat-survey  Survey response classification
cat-vader   Social media text (Reddit, Twitter/X)
cat-ademic  Academic papers, PDFs, citations
cat-cog     Cognitive assessment & visual scoring (CERAD)
cat-pol     Political text (manifestos, speeches, legislation)

Installing cat-llm pulls in all of the above.

Quick Start

import cat_stack as cat
import pandas as pd

df = pd.DataFrame({"text_column": ["Great product!", "Terrible."]})  # any DataFrame with a text column
OPENAI_KEY = "sk-..."  # your OpenAI API key

# Classify text into predefined categories
result = cat.classify(
    input_data=df["text_column"],
    categories=["Positive", "Negative", "Neutral"],
    models=[("gpt-4o", "openai", OPENAI_KEY)],
    filename="classified.csv"
)

Core API

classify()

Assign predefined categories to text, images, or PDFs. Supports single-model and multi-model ensemble classification with consensus voting.

cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    models=[("gpt-4o", "openai", key1), ("claude-sonnet-4-20250514", "anthropic", key2)],
    filename="results.csv"
)
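The voting mechanic itself isn't detailed on this page; as a rough illustration (not cat-stack's internal implementation), majority voting with an agreement score could look like this:

```python
from collections import Counter

def consensus(labels):
    """Majority vote across model outputs for one item.

    labels: the category each model predicted.
    Returns (winning_label, fraction of models that agreed with it).
    """
    counts = Counter(labels)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(labels)

# Two of three models agree, so "Cat A" wins with agreement 2/3:
label, agreement = consensus(["Cat A", "Cat A", "Cat B"])
```

An agreement score near 1.0 signals that the models concur; low scores flag items worth manual review.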

Inline prompt tuning

Add prompt_tune to automatically optimize the classification prompt before the full run: pass True, or an integer to set the tuning sample size. A browser UI opens for you to correct a small sample, then the optimized prompt is used for all remaining items.

cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    models=[("gpt-4o", "openai", key)],
    prompt_tune=15,       # tune on 15 random items, then classify all
    tune_iterations=3,    # max attempts per category (default 3)
)

prompt_tune()

Standalone automatic prompt optimization. Iteratively refines classification prompts using user feedback — classify a sample, correct mistakes in the browser, and let the LLM generate targeted per-category instructions.

result = cat.prompt_tune(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    api_key="your-key",
    sample_size=15,
    max_iterations=3,
)

# Use the optimized prompt for classification
cat.classify(
    input_data=df["text"],
    categories=["Cat A", "Cat B", "Cat C"],
    api_key="your-key",
    system_prompt=result["system_prompt"],
)
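The refinement loop can be pictured schematically as follows. This is an illustrative sketch only: in cat-stack the corrections come from a browser UI and the per-category instructions are written by an LLM, whereas classify_sample and write_instruction below are hypothetical stand-ins.

```python
def tune_prompt(sample, gold, base_prompt, classify_sample, write_instruction,
                max_iterations=3):
    """Schematic tuning loop: classify a sample, find miscategorized items,
    and append an instruction for each category that was missed."""
    prompt = base_prompt
    for _ in range(max_iterations):
        predictions = classify_sample(sample, prompt)
        missed = {g for p, g in zip(predictions, gold) if p != g}
        if not missed:
            break  # the whole sample is already classified correctly
        for category in sorted(missed):
            prompt += "\n" + write_instruction(category)
    return prompt
```

Each pass only adds instructions for the categories still being confused, so the prompt grows with targeted, per-category guidance rather than generic rewrites.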

extract()

Discover categories from a corpus using LLM-driven exploration.

cat.extract(
    input_data=df["text"],
    survey_question="What is this text about?",
    models=[("gpt-4o", "openai", key)],
)

explore()

Raw category extraction for saturation analysis.

cat.explore(
    input_data=df["text"],
    description="Describe the main themes",
    models=[("gpt-4o", "openai", key)],
)

summarize()

Summarize text or PDF documents, with optional multi-model ensemble.

cat.summarize(
    input_data=df["text"],
    models=[("gpt-4o", "openai", key)],
    filename="summaries.csv"
)

Supported Providers

OpenAI, Anthropic, Google (Gemini), Mistral, Perplexity, xAI (Grok), HuggingFace, Ollama (local models).

All providers use the same (model_name, provider, api_key) tuple format. The provider is auto-detected from the model name if omitted.
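Auto-detection most likely keys off well-known model-name prefixes; a plausible sketch (guess_provider is hypothetical, not cat-stack's actual detection code):

```python
def guess_provider(model_name):
    """Infer a provider from the model name by prefix (illustrative heuristic)."""
    prefixes = {
        "gpt": "openai",
        "claude": "anthropic",
        "gemini": "google",
        "mistral": "mistral",
        "grok": "xai",
    }
    for prefix, provider in prefixes.items():
        if model_name.startswith(prefix):
            return provider
    return "huggingface"  # e.g. hub-style repo ids fall through

guess_provider("gpt-4o")                    # → "openai"
guess_provider("claude-sonnet-4-20250514")  # → "anthropic"
```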

Features

  • Automatic prompt optimization (prompt_tune) — correct a small sample in a browser UI, and the system generates per-category instructions that improve accuracy
  • Multi-model ensemble with consensus voting and agreement scores
  • Batch API support for OpenAI, Anthropic, Google, Mistral, and xAI
  • Prompt strategies: Chain-of-Thought, Chain-of-Verification, step-back prompting, few-shot examples
  • Text, image, and PDF input auto-detection
  • Embedding similarity tiebreaker for ensemble consensus ties
  • Pilot test — validate classifications on a small sample before committing to the full run
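The embedding tiebreaker presumably compares the item's embedding against each tied category's embedding and keeps the closest; a minimal sketch using cosine similarity (illustrative only, with toy 2-D vectors):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def break_tie(item_vec, tied_categories, category_vecs):
    """Pick the tied category whose embedding is most similar to the item's."""
    return max(tied_categories, key=lambda c: cosine(item_vec, category_vecs[c]))

vecs = {"Cat A": [1.0, 0.0], "Cat B": [0.0, 1.0]}
break_tie([0.9, 0.1], ["Cat A", "Cat B"], vecs)  # → "Cat A"
```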

License

GPL-3.0-or-later
