# 🎭 Sentimetric - Modern Sentiment Analysis
Sentimetric is a modern, fast, and accurate sentiment analysis library with optional LLM support for complex emotions, sarcasm, and nuanced context.
## ✨ Features
- 🚀 Fast Rule-Based Analysis
- 🧠 Multi-LLM Support (OpenAI, Google Gemini, Anthropic Claude, Cohere, Hugging Face)
- 💰 Cost-Aware Model Selection with automatic fallback to cheaper models
- 📊 Batch Processing with parallel execution
- 🎯 High Accuracy with modern slang & emojis
- 🔧 Simple API: `from sentimetric import analyze`
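To give a feel for what rule-based analysis means here, the sketch below scores text against a tiny sentiment lexicon. This is a hypothetical illustration only; Sentimetric's actual lexicon, tokenization, and weighting are more sophisticated (handling negation, slang, and emoji in context).

```python
# Hypothetical sketch of a rule-based sentiment scorer.
# The word lists and weights below are illustrative, not Sentimetric's.
POSITIVE = {"amazing": 0.9, "great": 0.7, "love": 0.8, "🔥": 0.6}
NEGATIVE = {"bug": -0.6, "terrible": -0.9, "hate": -0.8, "🙄": -0.5}
LEXICON = {**POSITIVE, **NEGATIVE}

def rule_score(text: str) -> float:
    """Average the lexicon weights of matched tokens into a -1..+1 polarity."""
    tokens = text.lower().split()
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(rule_score("this is amazing"))  # → 0.9
```

A real analyzer would also handle punctuation, intensifiers ("very"), and negation ("not great"), which a bare token lookup like this misses.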
## 🚀 Quick Start

### Installation

```bash
pip install sentimetric
```

### Basic Usage

```python
from sentimetric import analyze

# Quick analysis
result = analyze("This is amazing!")
print(result)
# Example output: SentimentResult(polarity=+0.90, category='positive', confidence=0.85)
```
## 🧠 Multi-LLM Usage
Sentimetric now supports multiple LLM providers! Choose from OpenAI, Google Gemini, Anthropic Claude, Cohere, or Hugging Face.
### Installation with LLM Support

```bash
# Install with all LLM providers
pip install "sentimetric[all]"

# Or install specific providers
pip install "sentimetric[openai]"       # For OpenAI
pip install "sentimetric[google]"       # For Google Gemini
pip install "sentimetric[anthropic]"    # For Anthropic Claude
pip install "sentimetric[cohere]"       # For Cohere
pip install "sentimetric[huggingface]"  # For Hugging Face
```
### Basic LLM Usage

```python
from sentimetric import LLMAnalyzer

# Auto-selects the best available provider
analyzer = LLMAnalyzer()  # Automatically detects available API keys

# Or specify a provider
analyzer = LLMAnalyzer(provider="openai", model="gpt-3.5-turbo")
# analyzer = LLMAnalyzer(provider="google", model="gemini-1.5-flash")
# analyzer = LLMAnalyzer(provider="anthropic", model="claude-3-haiku-20240307")
# analyzer = LLMAnalyzer(provider="cohere", model="command")
# analyzer = LLMAnalyzer(provider="huggingface", model="mistralai/Mixtral-8x7B-Instruct-v0.1")

result = analyzer.analyze("Oh great, another bug 🙄")
print(result.category)   # 'negative' (catches sarcasm)
print(result.reasoning)  # Explanation from the LLM
```
### Environment Variables

Set your API keys as environment variables:

```bash
# OpenAI
export OPENAI_API_KEY="your-openai-key"

# Google Gemini
export GOOGLE_API_KEY="your-google-key"

# Anthropic Claude
export ANTHROPIC_API_KEY="your-anthropic-key"

# Cohere
export COHERE_API_KEY="your-cohere-key"

# Hugging Face
export HUGGINGFACE_API_KEY="your-hf-key"
```
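Auto-detection presumably works by checking which of these variables are set. The sketch below shows one way that check could look; it is a hypothetical illustration (`PROVIDER_ENV_VARS` and `detect_available_providers` are invented names, not part of Sentimetric's public API).

```python
import os

# Map each provider to the environment variable it conventionally uses.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "cohere": "COHERE_API_KEY",
    "huggingface": "HUGGINGFACE_API_KEY",
}

def detect_available_providers() -> list:
    """Return the providers whose API key is present in the environment."""
    return [p for p, var in PROVIDER_ENV_VARS.items() if os.environ.get(var)]
```

With this model, `LLMAnalyzer()` with no arguments would pick from whatever `detect_available_providers()` returns.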
### Cost-Aware Features

```python
from sentimetric import LLMAnalyzer

# Auto-select the cheapest model
analyzer = LLMAnalyzer(provider="openai", model="auto")  # Uses gpt-3.5-turbo

# Fall back to cheaper models on failure
analyzer = LLMAnalyzer(
    provider="openai",
    model="gpt-4",
    fallback_to_cheaper=True,  # Falls back to gpt-3.5-turbo if gpt-4 fails
)
```
## 📚 Examples

See `examples.py` for comprehensive usage examples; run them locally with `python examples.py`.
## 🛠️ API Reference

### Core Functions

- `analyze(text, method='auto')` - Quick sentiment analysis
- `analyze_batch(texts, method='rule')` - Batch sentiment analysis
- `compare_methods(text, api_key=None)` - Compare rule-based vs LLM analysis

### Classes

- `Analyzer` - Fast rule-based sentiment analyzer
- `LLMAnalyzer` - Multi-provider LLM analyzer (OpenAI, Google, Anthropic, Cohere, Hugging Face)
- `SentimentResult` - Result container with polarity, category, confidence, reasoning, emotions, and tone
- `Benchmark` - Accuracy testing and comparison utilities

### LLMAnalyzer Constructor Parameters

- `provider` - LLM provider (`'openai'`, `'google'`, `'anthropic'`, `'cohere'`, `'huggingface'`, or `'auto'`)
- `model` - Model name, or `'auto'` for the cheapest available
- `api_key` - API key (optional; falls back to environment variables)
- `fallback_to_cheaper` - Whether to fall back to cheaper models if the requested model fails (default: `True`)
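The fallback behaviour can be pictured as a simple try-next-candidate chain. The sketch below is hypothetical (`analyze_with_fallback`, `CHEAPER_LADDER`, and `call_model` are invented names standing in for Sentimetric's internals), but it captures the idea of `fallback_to_cheaper`:

```python
# Hypothetical sketch of fallback_to_cheaper: try the requested model,
# then walk down a cheaper-models ladder if the call raises.
CHEAPER_LADDER = {"gpt-4": ["gpt-3.5-turbo"]}  # illustrative mapping

def analyze_with_fallback(text, model, call_model, fallback_to_cheaper=True):
    """call_model(text, model) stands in for a real provider request."""
    candidates = [model]
    if fallback_to_cheaper:
        candidates += CHEAPER_LADDER.get(model, [])
    last_error = None
    for candidate in candidates:
        try:
            return candidate, call_model(text, candidate)
        except Exception as exc:
            last_error = exc  # remember the failure, try the next candidate
    raise last_error
```

Under this model, a rate-limit or quota error on `gpt-4` would transparently retry the request on `gpt-3.5-turbo` rather than surfacing the error to the caller.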
## 📞 Support
- Author: Abel Peter
- Email: peterabel791@gmail.com
- Issues: https://github.com/peter-abel/sentimetric/issues
Made with ❤️ by Abel Peter