A tool for categorizing text data and images using LLMs and vision models
cat-llm
CatLLM: A Reproducible LLM Pipeline for Coding Open-Ended Survey Responses
The Problem
If you work with open-ended survey data, you know the pain: hundreds or thousands of free-text responses that need to be categorized before you can do any quantitative analysis. The traditional approach is manual coding—either doing it yourself or hiring research assistants. It's slow, expensive, and doesn't scale.
The Solution
CatLLM is a Python package designed specifically for survey research that uses LLMs to automate the categorization of open-ended responses. It provides three core capabilities:
- Category Assignment: Classify responses into your predefined categories (multi-label supported)
- Category Extraction: Automatically discover and extract categories from your data when you don't have a predefined scheme
- Category Exploration: Analyze category stability and saturation through repeated raw extraction
With leading models such as GPT-5, Gemini, and Qwen 3, CatLLM reaches 98% agreement with human consensus coding on classification tasks.
Try the web app: https://huggingface.co/spaces/CatLLM/survey-classifier
Table of Contents
- Installation
- Quick Start
- Configuration
- Supported Models
- API Reference
- classify() - Unified function for text, image, and PDF (auto-detects input type)
- extract() - Unified function for category extraction
- explore() - Raw category extraction for saturation analysis
- summarize() - Unified function for text and PDF summarization
- image_score_drawing()
- image_features()
- cerad_drawn_score()
- Deprecated Functions
- Related Projects
- Academic Research
- Contributing & Support
- License
Installation
pip install cat-llm
For PDF support:
pip install cat-llm[pdf]
Note: Web data collection (build_dataset_from_web) has been moved to its own package due to its different focus. Install it separately with pip install llm-web-research. See llm-web-research for details.
Quick Start
This package is designed for building datasets at scale, not one-off queries. While you can categorize individual responses, its primary purpose is batch processing entire survey columns or image collections into structured research datasets.
Simply provide your survey responses and category list—the package handles the rest and outputs clean data ready for statistical analysis. It works with single or multiple categories per response and automatically skips missing data to save API costs.
The package also supports image and PDF classification using the same methodology: extract features, count objects, identify categories, or determine the presence of elements based on your research questions.
All outputs are formatted for immediate statistical analysis and can be exported directly to CSV.
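As a minimal quick-start sketch (assuming a pandas DataFrame loaded from a hypothetical survey.csv with a "responses" column, and an OpenAI key stored in the OPENAI_API_KEY environment variable), a typical batch-coding call looks like this:
import os
import pandas as pd
import catllm as cat

# Load survey responses; "survey.csv" and the "responses" column are placeholders
df = pd.read_csv("survey.csv")

# Classify every response into predefined categories and write the coded data to CSV
results = cat.classify(
    input_data=df["responses"],
    categories=["Housing", "Employment", "Family", "Other"],
    description="Reasons respondents gave for moving",
    api_key=os.environ["OPENAI_API_KEY"],
    filename="coded_responses.csv",
)
print(results.head())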
Not to be confused with CAT-LLM for Chinese article‐style transfer (Tao et al. 2024).
Configuration
Get Your API Key
Get an API key from your preferred provider:
- OpenAI: platform.openai.com
- Anthropic: console.anthropic.com
- Google: aistudio.google.com
- Huggingface: huggingface.co/settings/tokens
- xAI: console.x.ai
- Mistral: console.mistral.ai
- Perplexity: perplexity.ai/settings/api
Most providers require adding a payment method and purchasing credits. Store your key securely and never share it publicly.
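One common pattern for keeping the key out of your scripts (shown here as a sketch, not a package requirement) is to read it from an environment variable:
import os

# Set the variable outside your script, e.g. export OPENAI_API_KEY="sk-..." in your shell profile
api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable first")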
Supported Models
- OpenAI: GPT-4o, GPT-4, GPT-5, etc.
- Anthropic: Claude Sonnet 4, Claude 3.5 Sonnet, Claude Haiku, etc.
- Google: Gemini 2.5 Flash, Gemini 2.5 Pro, etc.
- Huggingface: Qwen, Llama 4, DeepSeek, and thousands of community models
- xAI: Grok models
- Mistral: Mistral Large, Pixtral, etc.
- Perplexity: Sonar Large, Sonar Small, etc.
Fully Tested:
- OpenAI (GPT-4, GPT-4o, GPT-5, etc.)
- Anthropic (Claude Sonnet 4, Claude 3.5 Sonnet, Haiku)
- Perplexity (Sonar models)
- Google Gemini - Free tier has severe rate limits (5 RPM). Requires Google AI Studio billing account for large-scale use.
- Huggingface - Access to Qwen, Llama 4, DeepSeek, and thousands of user-trained models for specific tasks. API routing can occasionally be unstable.
- xAI (Grok models)
- Mistral (Mistral Large, Pixtral, etc.)
Note: For best results, I recommend starting with OpenAI or Anthropic.
API Reference
classify()
Unified classification function for text, image, and PDF inputs. Input type is auto-detected from your data—no need to specify whether you're classifying text, images, or PDFs.
Supports both single-model and multi-model ensemble classification for improved accuracy through consensus voting.
Parameters:
- input_data: The data to classify. Can be:
  - Text: list of strings or pandas Series
  - Images: directory path, single file, or list of image paths
  - PDFs: directory path, single file, or list of PDF paths
- categories (list): List of category names for classification
- api_key (str): API key for the LLM service (single-model mode)
- description (str): Description of the input data context
- user_model (str, default="gpt-4o"): Model to use
- mode (str, default="image"): PDF processing mode - "image", "text", or "both"
- creativity (float, optional): Temperature setting (0.0-1.0)
- chain_of_thought (bool, default=True): Enable step-by-step reasoning
- filename (str, optional): Output filename for CSV
- save_directory (str, optional): Directory to save results
- model_source (str, default="auto"): Provider - "auto", "openai", "anthropic", "google", "mistral", "perplexity", "huggingface", "xai"
- models (list, optional): For multi-model ensemble, list of (model, provider, api_key) tuples
- consensus_threshold (float, default=0.5): Agreement threshold for ensemble mode (0-1)
Returns:
pandas.DataFrame: Classification results with category columns
Examples:
import catllm as cat
# Text classification (auto-detected)
results = cat.classify(
input_data=df['responses'],
categories=["Positive feedback", "Negative feedback", "Neutral"],
description="Customer satisfaction survey",
api_key=api_key
)
# Image classification (auto-detected from file paths)
results = cat.classify(
input_data="/path/to/images/",
categories=["Contains person", "Outdoor scene", "Has text"],
description="Product photos",
api_key=api_key
)
# PDF classification (auto-detected, processes each page separately)
results = cat.classify(
input_data="/path/to/reports/",
categories=["Contains table", "Has chart", "Is summary page"],
description="Financial reports",
mode="both", # Use both image and extracted text
api_key=api_key
)
# Multi-model ensemble for higher accuracy
results = cat.classify(
input_data=df['responses'],
categories=["Positive", "Negative", "Neutral"],
models=[
("gpt-4o", "openai", "sk-..."),
("claude-sonnet-4-5-20250929", "anthropic", "sk-ant-..."),
("gemini-2.5-flash", "google", "AIza..."),
],
consensus_threshold=0.5, # Majority vote
)
Multi-Model Ensemble:
When you provide the models parameter, CatLLM runs classification across multiple models in parallel and combines results using majority voting. This can significantly improve accuracy by reducing individual model biases.
The output includes:
- Individual model predictions (e.g., category_1_gpt_4o, category_1_claude)
- Consensus columns (e.g., category_1_consensus)
- Agreement scores showing how many models agreed
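A quick way to inspect the ensemble output is sketched below; the column names follow the pattern listed above but are illustrative, since the exact names depend on your category labels and models:
# results comes from the multi-model cat.classify() call above
# Assumes consensus columns are 0/1 indicators named *_consensus (illustrative)
consensus_cols = [c for c in results.columns if c.endswith("_consensus")]
print(results[consensus_cols].mean())  # share of responses assigned to each category

# Flag rows where two of the models disagreed (hypothetical per-model column names)
disagreements = results[results["category_1_gpt_4o"] != results["category_1_claude"]]
print(f"{len(disagreements)} responses where GPT-4o and Claude disagreed")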
extract()
Unified category extraction function for text, image, and PDF inputs. Automatically discovers categories in your data when you don't have a predefined scheme.
Planned improvement: Allow specifying a separate, more powerful model for the semantic merge step (e.g., use GPT-4o-mini for bulk extraction, GPT-4o for the final consolidation). This "tiered" approach could improve merge quality without significantly increasing cost.
Parameters:
- input_data: The data to explore (text list, image paths, or PDF paths)
- api_key (str): API key for the LLM service
- input_type (str, default="text"): Type of input - "text", "image", or "pdf"
- description (str): Description of the input data
- max_categories (int, default=12): Maximum number of categories to return
- categories_per_chunk (int, default=10): Categories to extract per chunk
- divisions (int, default=12): Number of chunks to divide data into
- iterations (int, default=8): Number of extraction passes over the data
- user_model (str, default="gpt-4o"): Model to use
- specificity (str, default="broad"): "broad" or "specific" category granularity
- research_question (str, optional): Research context to guide extraction
- focus (str, optional): Focus instruction for category extraction (e.g., "emotional responses")
- filename (str, optional): Output filename for CSV
Default parameter rationale: The defaults of divisions=12 and iterations=8 were determined through empirical analysis. We ran a 6x6 grid search over [1, 4, 8, 12, 16, 20] for both parameters, repeating each combination 10 times and measuring pairwise Jaro-Winkler consistency across runs. Consistency peaked at 12 divisions and 8 iterations, with values beyond this point offering no meaningful improvement.
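To illustrate the consistency metric, the sketch below computes a mean best-match Jaro-Winkler similarity between the category lists of two runs. It uses the third-party jellyfish library, which is not a cat-llm dependency, and the run lists are made-up examples:
import jellyfish

def run_consistency(run_a, run_b):
    """Mean best-match Jaro-Winkler similarity between two lists of category strings."""
    scores = []
    for cat_a in run_a:
        best = max(jellyfish.jaro_winkler_similarity(cat_a.lower(), cat_b.lower())
                   for cat_b in run_b)
        scores.append(best)
    return sum(scores) / len(scores)

run_1 = ["Employment opportunity", "Family reasons", "Cost of living"]
run_2 = ["Job opportunity", "Family", "Housing costs"]
print(run_consistency(run_1, run_2))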
Returns:
dict with keys:
- counts_df: DataFrame of categories with counts
- top_categories: List of top category names
- raw_top_text: Raw model output
Example:
import catllm as cat
# Extract categories from survey responses
results = cat.extract(
input_data=df['responses'],
description="Why did you move?",
api_key=api_key,
max_categories=10,
focus="decisions to relocate" # Optional focus
)
print(results['top_categories'])
# ['Employment opportunity', 'Family reasons', 'Cost of living', ...]
explore()
Raw category extraction for frequency and saturation analysis. Unlike extract(), which normalizes, deduplicates, and semantically merges categories into a clean final set, explore() returns every category string from every chunk across every iteration — with duplicates intact.
This is useful for analyzing which categories are robust (consistently discovered across runs) versus which are noise (appearing only once or twice). By increasing iterations, you can build saturation curves showing when category discovery converges.
Parameters:
- input_data: List of text responses or pandas Series
- api_key (str): API key for the LLM service
- description (str): The survey question or description of the data
- categories_per_chunk (int, default=10): Categories to extract per chunk
- divisions (int, default=12): Number of chunks to divide data into
- user_model (str, default="gpt-4o"): Model to use
- creativity (float, optional): Temperature setting (0.0-1.0)
- specificity (str, default="broad"): "broad" or "specific" category granularity
- research_question (str, optional): Research context to guide extraction
- focus (str, optional): Focus instruction (e.g., "decisions to relocate")
- iterations (int, default=8): Number of passes over the data
- random_state (int, optional): Random seed for reproducibility
- filename (str, optional): Output CSV filename (one category per row)
Returns:
list[str]: Every category extracted from every chunk across every iteration. Length ≈ iterations × divisions × categories_per_chunk.
Example:
import catllm as cat
# Run extraction with many iterations for saturation analysis
raw_categories = cat.explore(
input_data=df['responses'],
description="Why did you move?",
api_key=api_key,
iterations=20,
divisions=5,
categories_per_chunk=10,
)
# Count how often each category appears across runs
from collections import Counter
counts = Counter(raw_categories)
for category, freq in counts.most_common(15):
print(f"{freq:3d}x {category}")
summarize()
Unified summarization function for text and PDF inputs. Generates concise summaries of survey responses, documents, or any text data. Input type is auto-detected from your data.
Supports both single-model and multi-model ensemble summarization. In multi-model mode, summaries from all models are synthesized into a consensus summary.
Parameters:
- input_data: The data to summarize. Can be:
  - Text: list of strings, pandas Series, or single string
  - PDF: directory path, single PDF path, or list of PDF paths
- api_key (str): API key for the LLM service (single-model mode)
- description (str): Description of what the content contains (provides context)
- instructions (str): Specific summarization instructions (e.g., "bullet points")
- max_length (int): Maximum summary length in words
- focus (str): What to focus on (e.g., "main arguments", "emotional content")
- user_model (str, default="gpt-4o"): Model to use
- model_source (str, default="auto"): Provider - "auto", "openai", "anthropic", "google", etc.
- mode (str, default="image"): PDF processing mode:
  - "image": Render pages as images (best for visual documents)
  - "text": Extract text only (faster, good for text-heavy PDFs)
  - "both": Send both image and extracted text (most comprehensive)
- filename (str): Output CSV filename
- save_directory (str): Directory to save results
- models (list): For multi-model mode, list of (model, provider, api_key) tuples
Returns:
pandas.DataFrame: Results with summary columns:
- survey_input: Original text or page label (for PDFs)
- summary: Generated summary (or consensus for multi-model)
- processing_status: "success", "error", or "skipped"
- pdf_path: Path to source PDF (PDF mode only)
- page_index: Page number, 0-indexed (PDF mode only)
Examples:
import catllm as cat
# Single model text summarization
results = cat.summarize(
input_data=df['responses'],
description="Customer feedback",
api_key=api_key
)
# PDF summarization (auto-detected from file paths)
results = cat.summarize(
input_data="/path/to/pdfs/",
description="Research papers",
mode="image",
api_key=api_key
)
# PDF summarization with specific files and focus
results = cat.summarize(
input_data=["doc1.pdf", "doc2.pdf"],
description="Financial reports",
mode="both",
focus="key metrics and trends",
max_length=100,
api_key=api_key
)
# Multi-model with synthesis
results = cat.summarize(
input_data=df['responses'],
models=[
("gpt-4o", "openai", "sk-..."),
("claude-sonnet-4-5-20250929", "anthropic", "sk-ant-..."),
],
)
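Because the returned DataFrame includes a processing_status column, you can filter out failed or skipped rows before exporting. A minimal sketch using the columns documented above:
# Keep only rows that were summarized successfully, then export
ok = results[results["processing_status"] == "success"]
ok[["survey_input", "summary"]].to_csv("summaries.csv", index=False)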
image_score_drawing()
Performs quality scoring of images against a reference description and optional reference image, returning structured results with optional CSV export.
Methodology: Processes each image individually, assigning a drawing quality score on a 5-point scale based on similarity to the expected description:
- 1: No meaningful similarity (fundamentally different)
- 2: Barely recognizable similarity (25% match)
- 3: Partial match (50% key features)
- 4: Strong alignment (75% features)
- 5: Near-perfect match (90%+ similarity)
Parameters:
- reference_image_description (str): A description of what the model should expect to see
- image_input (list): List of image file paths or folder path containing images
- reference_image (str): A file path to the reference image
- api_key (str): API key for the LLM service
- user_model (str, default="gpt-4o"): Specific vision model to use
- creativity (float, default=0): Temperature/randomness setting (0.0-1.0)
- safety (bool, default=False): Enable safety checks and save results at each API call step
- filename (str, default="image_scores.csv"): Filename for CSV output
- save_directory (str, optional): Directory path to save the CSV file
- model_source (str, default="OpenAI"): Model provider
Returns:
pandas.DataFrame: DataFrame with image paths, quality scores, and analysis details
Example:
import catllm as cat
image_scores = cat.image_score_drawing(
reference_image_description='A hand-drawn circle',
image_input=['image1.jpg', 'image2.jpg', 'image3.jpg'],
user_model="gpt-4o",
api_key="OPENAI_API_KEY"
)
image_features()
Extracts specific features and attributes from images, returning exact answers to user-defined questions (e.g., counts, colors, presence of objects).
Methodology: Processes each image individually using vision models to extract precise information about specified features. Unlike scoring and classification functions, this returns factual data such as object counts, color identification, or presence/absence of specific elements.
Parameters:
- image_description (str): A description of what the model should expect to see
- image_input (list): List of image file paths or folder path containing images
- features_to_extract (list): List of specific features to extract (e.g., ["number of people", "primary color", "contains text"])
- api_key (str): API key for the LLM service
- user_model (str, default="gpt-4o"): Specific vision model to use
- creativity (float, default=0): Temperature/randomness setting (0.0-1.0)
- to_csv (bool, default=False): Whether to save the output to a CSV file
- safety (bool, default=False): Enable safety checks and save results at each API call step
- filename (str, default="categorized_data.csv"): Filename for CSV output
- save_directory (str, optional): Directory path to save the CSV file
- model_source (str, default="OpenAI"): Model provider
Returns:
pandas.DataFrame: DataFrame with image paths and extracted feature values
Example:
import catllm as cat
features = cat.image_features(
image_description='Product photos from e-commerce site',
features_to_extract=['number of items', 'primary color', 'has price tag'],
image_input='/path/to/images/',
user_model="gpt-4o",
api_key="OPENAI_API_KEY"
)
cerad_drawn_score()
Automatically scores drawings of circles, diamonds, overlapping rectangles, and cubes according to the official Consortium to Establish a Registry for Alzheimer's Disease (CERAD) scoring system.
Methodology: Processes each image individually, evaluating the drawn shapes based on CERAD criteria. Works even with images that contain other drawings or writing.
Parameters:
- shape (str): The type of shape to score ("circle", "diamond", "rectangles", "cube")
- image_input (list): List of image file paths or folder path containing images
- api_key (str): API key for the LLM service
- user_model (str, default="gpt-4o"): Specific model to use
- creativity (float, default=0): Temperature/randomness setting (0.0-1.0)
- safety (bool, default=False): Enable safety checks and save results at each API call step
- filename (str, optional): Filename for CSV output
- model_source (str, default="auto"): Model provider
Returns:
pandas.DataFrame: DataFrame with image paths, CERAD scores, and analysis details
Example:
import catllm as cat
diamond_scores = cat.cerad_drawn_score(
shape="diamond",
image_input=df['diamond_pic_path'],
api_key=api_key,
safety=True,
filename="diamond_scores.csv",
)
Deprecated Functions
The following functions are deprecated and will be removed in a future version. Please use classify() instead, which auto-detects input type and supports all the same features.
| Deprecated Function | Replacement |
|---|---|
| multi_class() | classify(input_data=texts, ...) |
| image_multi_class() | classify(input_data=images, ...) |
| pdf_multi_class() | classify(input_data=pdfs, ...) |
| explore_corpus() | extract(input_data=texts, ...) |
| explore_common_categories() | extract(input_data=texts, ...) |
These functions still work but will show deprecation warnings. Migration is straightforward—simply use classify() with your data and it will automatically detect whether you're passing text, images, or PDFs.
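A typical migration is sketched below. The deprecated call is abbreviated because its exact legacy signature is not shown here; classify() takes the same data and categories and detects the input type on its own:
# Before (deprecated, text-specific):
# results = cat.multi_class(...)

# After (unified, auto-detects text input):
results = cat.classify(
    input_data=df["responses"],
    categories=["Positive", "Negative", "Neutral"],
    description="Customer satisfaction survey",
    api_key=api_key,
)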
Related Projects
Looking for web research capabilities? Check out llm-web-research - a precision-focused LLM-powered web research tool that uses a novel Funnel of Verification (FoVe) methodology to reduce false positives. It's designed for use cases where accuracy matters more than completeness.
pip install llm-web-research
Academic Research
This package implements methodology from research on LLM performance in social science applications, including the UC Berkeley Social Networks Study. The package addresses reproducibility challenges in LLM-assisted research by providing standardized interfaces and consistent output formatting.
If you use this package for research, please cite:
Soria, C. (2025). CatLLM (0.1.0). Zenodo. https://doi.org/10.5281/zenodo.15532317
Contributing & Support
Contributions are welcome! Please see CONTRIBUTING.md for detailed guidelines.
- Report bugs or request features: Open a GitHub Issue
- Ask questions or get help: GitHub Discussions or Issues
- Contribute code: Fork the repo, create a branch, and submit a pull request — see CONTRIBUTING.md
- Research collaboration: Email ChrisSoria@Berkeley.edu
License
cat-llm is distributed under the terms of the GNU license.