A tool for classifying and analyzing social media data using LLMs and vision models
CatVader: An AI Pipeline for Classifying and Exploring Social Media Data
The Problem
Social media data is messy and vast: millions of posts, comments, and threads that need to be categorized before any quantitative analysis can begin. Manual coding doesn't scale, and generic text classifiers miss the nuance of platform-specific language, hashtags, and metadata.
The Solution
CatVader is a Python package designed specifically for social media research that uses LLMs to automate the classification and exploration of posts, comments, and threads. It handles three tasks:
- Category Assignment: Classify posts into your predefined categories (multi-label supported)
- Category Extraction: Automatically discover and extract categories from your data when you don't have a predefined scheme
- Category Exploration: Analyze category stability and saturation through repeated raw extraction
Social media context (platform, author handle, hashtags, engagement metrics) can be injected directly into the classification prompt for richer, more accurate results.
Table of Contents
- Installation
- Quick Start
- Best Practices for Classification
- Configuration
- Supported Models
- API Reference
- classify() - Unified function for text, image, and PDF (auto-detects input type)
- extract() - Unified function for category extraction
- explore() - Raw category extraction for saturation analysis
- image_features()
- Deprecated Functions
- Contributing & Support
- License
Installation
pip install cat-vader
For PDF support:
pip install cat-vader[pdf]
Quick Start
This package is designed for building datasets at scale, not one-off queries. Its primary purpose is batch processing entire social media datasets into structured, analysis-ready DataFrames.
Simply provide your posts and category list — the package handles the rest and outputs clean data ready for statistical analysis. It works with single or multiple categories per post and automatically skips missing data to save API costs.
Also supports image and PDF classification using the same methodology.
All outputs are formatted for immediate statistical analysis and can be exported directly to CSV.
Best Practices for Classification
These recommendations are based on empirical testing across multiple tasks and models (7B to frontier-class).
What works
- Detailed category descriptions: The single biggest lever for accuracy. Instead of short labels like "Anger", use verbose descriptions like "Post expresses anger or frustration toward a person, institution, or situation." This consistently improves accuracy across all models.
- Include an "Other" category: Adding a catch-all category prevents the model from forcing ambiguous posts into ill-fitting categories, improving precision.
- Low temperature (creativity=0): For classification tasks, deterministic output is generally preferable.
- Social media context fields (platform, handle, hashtags): Providing platform context helps the model interpret slang, tone, and conventions accurately.
What doesn't help (or hurts)
- Chain of Thought (chain_of_thought): In testing, enabling CoT did not improve classification accuracy and slightly degraded it for some models.
- Step-back prompting (step_back_prompt): Results are inconsistent: slight gains for weaker models but slight losses for stronger models.
- Context prompting (context_prompt): Adds generic expert context. No consistent benefit observed.
Summary
The most effective approach: write detailed category descriptions, include an "Other" category, provide social media context, and use a capable model at low temperature.
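As an illustration of these recommendations, a category list following them might look like the sketch below. The category texts are invented examples, not categories shipped with the package:

```python
# Illustrative category list: verbose, behavior-level descriptions
# plus a catch-all "Other", per the best practices above.
categories = [
    "Post expresses anger or frustration toward a person, institution, or situation",
    "Post shares factual information or news without strong emotional framing",
    "Post uses humor, irony, or satire as its primary register",
    "Other: post does not clearly fit any of the categories above",
]

# Short labels like these tend to underperform in testing:
short_labels = ["Anger", "News", "Humor", "Other"]
```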
Configuration
Get Your API Key
Get an API key from your preferred provider:
- OpenAI: platform.openai.com
- Anthropic: console.anthropic.com
- Google: aistudio.google.com
- Huggingface: huggingface.co/settings/tokens
- xAI: console.x.ai
- Mistral: console.mistral.ai
- Perplexity: perplexity.ai/settings/api
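Keys are best kept out of source code. One common pattern, shown here as an illustration rather than a requirement of the package, is to read the key from an environment variable:

```python
import os

# Read the key from an environment variable rather than hard-coding it.
# The variable name OPENAI_API_KEY is a common convention, not something
# cat-vader itself mandates; use whichever provider's key you configured.
api_key = os.environ.get("OPENAI_API_KEY", "")
if not api_key:
    print("OPENAI_API_KEY is not set; classification calls will fail.")
```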
Supported Models
- OpenAI: GPT-4o, GPT-4, GPT-5, etc.
- Anthropic: Claude Sonnet 4, Claude 3.5 Sonnet, Claude Haiku, etc.
- Google: Gemini 2.5 Flash, Gemini 2.5 Pro, etc.
- Huggingface: Qwen, Llama 4, DeepSeek, and thousands of community models
- xAI: Grok models
- Mistral: Mistral Large, Pixtral, etc.
- Perplexity: Sonar Large, Sonar Small, etc.
API Reference
classify()
Unified classification function for text, image, and PDF inputs. Input type is auto-detected from your data.
Supports both single-model and multi-model ensemble classification for improved accuracy through consensus voting.
Parameters:
- input_data: The data to classify (text list/Series, image paths, or PDF paths). Omit when using sm_source.
- categories (list): List of category names for classification. Use "auto" to discover categories automatically.
- api_key (str): API key for the LLM service (single-model mode)
- sm_source (str, optional): Social media platform to pull posts from automatically (e.g., "threads"). When set, input_data is fetched and does not need to be provided.
- sm_limit (int, default=50): Number of posts to fetch when using sm_source.
- sm_credentials (dict, optional): Platform credentials (e.g., {"access_token": "...", "user_id": "..."}). Falls back to env vars.
- platform (str, optional): Social media platform label (e.g., "Twitter/X", "Reddit", "TikTok"). Injected into the classification prompt as context.
- handle (str, optional): Author handle (e.g., "@username", "r/subreddit"). Injected into prompt.
- hashtags (str or list, optional): Hashtags associated with the posts. Injected into prompt.
- post_metadata (dict, optional): Additional post metadata injected into prompt (e.g., {"avg_likes": 1200}).
- description (str): Additional context about the input data.
- feed_question (str, default=""): Context describing what to look for in the feed. Used when categories="auto". When sm_source is set and this is omitted, defaults to "What topics are discussed in these social media posts?".
- user_model (str, default="gpt-5"): Model to use
- mode (str, default="image"): PDF processing mode: "image", "text", or "both"
- creativity (float, optional): Temperature setting (0.0–1.0)
- chain_of_thought (bool, default=False): Enable step-by-step reasoning
- step_back_prompt (bool, default=False): Enable step-back prompting (results inconsistent; see Best Practices)
- context_prompt (bool, default=False): Add generic expert context to prompts (no consistent benefit observed)
- filename (str, optional): Output filename for CSV
- save_directory (str, optional): Directory to save results
- model_source (str, default="auto"): Provider: "auto", "openai", "anthropic", "google", "mistral", "perplexity", "huggingface", "xai"
- models (list, optional): For multi-model ensemble, list of (model, provider, api_key) tuples
- consensus_threshold (str or float, default="majority"): Agreement threshold for ensemble mode
- thinking_budget (int, default=0): Token budget for model reasoning/thinking. Behavior varies by provider:
| Provider | thinking_budget=0 | thinking_budget > 0 |
|---|---|---|
| OpenAI | reasoning_effort="minimal" | reasoning_effort="high" |
| Anthropic | Thinking disabled | Extended thinking enabled (min 1024 tokens) |
| Google | Thinking disabled | thinkingConfig: {thinkingBudget: N} |
Returns:
pandas.DataFrame: Classification results with category columns
Examples:
import catvader as cat
# Basic text classification
results = cat.classify(
input_data=df['post_text'],
categories=["Positive sentiment", "Negative sentiment", "Neutral"],
description="Twitter posts about a product launch",
api_key=api_key
)
# Pull directly from Threads — no input_data needed
results = cat.classify(
sm_source="threads",
sm_limit=100,
categories=["Opinion/Commentary", "News/Information", "Humor/Satire", "Other"],
platform="Threads",
api_key=api_key
)
# With social media context injected into prompt
results = cat.classify(
input_data=df['post_text'],
categories=["Misinformation", "Opinion", "Factual", "Satire"],
platform="Twitter/X",
hashtags=["#Election2024", "#Politics"],
post_metadata={"avg_likes": 450, "avg_shares": 120},
api_key=api_key
)
# Auto-discover categories from a feed (categories="auto")
results = cat.classify(
sm_source="threads",
sm_limit=50,
categories="auto",
feed_question="What topics and themes appear in these posts?",
api_key=api_key
)
# Image classification (auto-detected from file paths)
results = cat.classify(
input_data="/path/to/images/",
categories=["Contains person", "Outdoor scene", "Has text"],
description="Instagram post images",
api_key=api_key
)
# Multi-model ensemble for higher accuracy
results = cat.classify(
input_data=df['post_text'],
categories=["Hate speech", "Harassment", "Safe content"],
models=[
("gpt-5", "openai", "sk-..."),
("claude-sonnet-4-5-20250929", "anthropic", "sk-ant-..."),
("gemini-2.5-flash", "google", "AIza..."),
],
consensus_threshold="majority",
)
Multi-Model Ensemble:
When you provide the models parameter, CatVader runs classification across multiple models in parallel and combines results using majority voting. The output includes:
- Individual model predictions (e.g., category_1_gpt_4o, category_1_claude)
- Consensus columns (e.g., category_1_consensus)
- Agreement scores showing how many models agreed
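The voting idea behind the consensus columns can be sketched in plain Python. The column names and the per-post predictions below are illustrative values, not the package's internal implementation:

```python
from collections import Counter

# Hypothetical per-model predictions for one post.
predictions = {
    "gpt_5": "Hate speech",
    "claude": "Hate speech",
    "gemini": "Safe content",
}

votes = Counter(predictions.values())
label, count = votes.most_common(1)[0]
agreement = count / len(predictions)

# With consensus_threshold="majority", a label needs more than half the votes.
consensus = label if count > len(predictions) / 2 else None
print(consensus, round(agreement, 2))
```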
extract()
Unified category extraction function for text, image, and PDF inputs. Automatically discovers categories in your data when you don't have a predefined scheme.
Parameters:
- input_data: The data to explore (text list, image paths, or PDF paths)
- api_key (str): API key for the LLM service
- input_type (str, default="text"): Type of input: "text", "image", or "pdf"
- description (str): Description of the input data
- platform (str, optional): Social media platform context
- handle (str, optional): Author handle context
- hashtags (str or list, optional): Hashtag context
- post_metadata (dict, optional): Additional metadata context
- max_categories (int, default=12): Maximum number of categories to return
- categories_per_chunk (int, default=10): Categories to extract per chunk
- divisions (int, default=12): Number of chunks to divide data into
- iterations (int, default=8): Number of extraction passes over the data
- user_model (str, default="gpt-5"): Model to use
- specificity (str, default="broad"): "broad" or "specific" category granularity
- research_question (str, optional): Research context to guide extraction
- focus (str, optional): Focus instruction (e.g., "emotional tone", "political stance")
- filename (str, optional): Output filename for CSV
Returns:
dict with keys:
- counts_df: DataFrame of categories with counts
- top_categories: List of top category names
- raw_top_text: Raw model output
Example:
import catvader as cat
# Discover categories in Reddit posts
results = cat.extract(
input_data=df['comment_text'],
description="r/technology comments about AI",
platform="Reddit",
api_key=api_key,
max_categories=10,
focus="concerns and criticisms"
)
print(results['top_categories'])
# ['Privacy concerns', 'Job displacement', 'Bias in AI', ...]
explore()
Raw category extraction for frequency and saturation analysis. Unlike extract(), which normalizes and merges categories, explore() returns every category string from every chunk across every iteration — with duplicates intact.
Parameters:
- input_data: List of text responses or pandas Series
- api_key (str): API key for the LLM service
- description (str): Description of the data
- platform (str, optional): Social media platform context
- handle (str, optional): Author handle context
- hashtags (str or list, optional): Hashtag context
- post_metadata (dict, optional): Additional metadata context
- categories_per_chunk (int, default=10): Categories to extract per chunk
- divisions (int, default=12): Number of chunks to divide data into
- user_model (str, default="gpt-5"): Model to use
- creativity (float, optional): Temperature setting
- specificity (str, default="broad"): "broad" or "specific" category granularity
- research_question (str, optional): Research context
- focus (str, optional): Focus instruction
- iterations (int, default=8): Number of passes over the data
- random_state (int, optional): Random seed for reproducibility
- filename (str, optional): Output CSV filename
Returns:
list[str]: Every category extracted from every chunk across every iteration.
Example:
import catvader as cat
# Run many iterations for saturation analysis
raw_categories = cat.explore(
input_data=df['post_text'],
description="TikTok comments on a viral video",
platform="TikTok",
api_key=api_key,
iterations=20,
divisions=5,
categories_per_chunk=10,
)
from collections import Counter
counts = Counter(raw_categories)
for category, freq in counts.most_common(15):
print(f"{freq:3d}x {category}")
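Because explore() keeps duplicates, saturation can be checked by tracking how many new distinct categories appear as more of the output is consumed; a flattening curve suggests the extraction has saturated. A minimal sketch, using a small stand-in list in place of real explore() output:

```python
# Stand-in for the raw output of explore(); in practice, pass raw_categories.
raw = ["Privacy", "Cost", "Privacy", "Bias", "Cost", "Privacy", "Bias", "Cost"]

# Cumulative count of distinct categories seen so far.
seen = set()
cumulative_unique = []
for category in raw:
    seen.add(category)
    cumulative_unique.append(len(seen))

print(cumulative_unique)  # here: [1, 2, 2, 3, 3, 3, 3, 3]
```

Plotting cumulative_unique against position gives the usual saturation curve: a plateau means additional iterations are surfacing few new categories.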
image_features()
Extracts specific features and attributes from images, returning exact answers to user-defined questions.
Parameters:
- image_description (str): A description of what the model should expect to see
- image_input (list): List of image file paths or folder path
- features_to_extract (list): Features to extract (e.g., ["number of people", "primary color"])
- api_key (str): API key for the LLM service
- user_model (str, default="gpt-5"): Specific vision model to use
- creativity (float, default=0): Temperature setting
- filename (str, default="categorized_data.csv"): Filename for CSV output
- save_directory (str, optional): Directory path to save the CSV file
- model_source (str, default="OpenAI"): Model provider
Returns:
pandas.DataFrame: DataFrame with image paths and extracted feature values
Example:
import catvader as cat
features = cat.image_features(
image_description='Social media post screenshots',
features_to_extract=['number of hashtags', 'contains image', 'estimated likes'],
image_input='/path/to/screenshots/',
user_model="gpt-5",
api_key=api_key
)
Deprecated Functions
The following functions are deprecated and will be removed in a future version. Please use classify() instead.
| Deprecated Function | Replacement |
|---|---|
| multi_class() | classify(input_data=texts, ...) |
| image_multi_class() | classify(input_data=images, ...) |
| pdf_multi_class() | classify(input_data=pdfs, ...) |
| explore_corpus() | extract(input_data=texts, ...) |
| explore_common_categories() | extract(input_data=texts, ...) |
Contributing & Support
Contributions are welcome!
- Report bugs or request features: Open a GitHub Issue
- Ask questions or get help: GitHub Discussions
- Research collaboration: Email ChrisSoria@Berkeley.edu
License
cat-vader is distributed under the terms of the GNU GPL-3.0 license.