Evolvishub Text Classification LLM
Enterprise-grade text classification library with 11+ LLM providers, streaming, monitoring, and advanced workflows
Overview
Evolvishub Text Classification LLM is a comprehensive, enterprise-ready Python library designed for production-scale text classification tasks. Built by Evolvis AI, this proprietary solution provides seamless integration with 11+ leading LLM providers, advanced monitoring capabilities, and professional-grade architecture suitable for mission-critical applications.
Key Features
Core Capabilities
- 11+ LLM Providers: OpenAI, Anthropic, Google, Cohere, Mistral, Replicate, HuggingFace, Azure OpenAI, AWS Bedrock, Ollama, and Custom providers
- Streaming Support: Real-time text generation with WebSocket support
- Async/Await: Full asynchronous support for high-performance applications
- Batch Processing: Efficient processing of large datasets with configurable concurrency
- Smart Caching: Semantic caching with Redis and in-memory options
- Comprehensive Monitoring: Built-in health checks, metrics collection, and observability
- Enterprise Security: Authentication, rate limiting, and audit logging
- Workflow Templates: Pre-built workflows for common classification scenarios
Advanced Features
- Provider Fallback: Automatic failover between providers for reliability
- Cost Optimization: Intelligent routing based on cost and performance metrics
- Fine-tuning Support: Custom model training and deployment capabilities
- Multimodal Support: Text, image, and document processing
- LangGraph Integration: Complex workflow orchestration
- Real-time Streaming: WebSocket-based real-time classification
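The semantic-caching idea listed above (reuse a cached result when a new input is close to one already classified) can be sketched in plain Python. This is an illustration only, not the library's implementation; the real cache is configured via `enable_caching` and `cache_backend`, and would typically use embedding distance rather than the token-overlap stand-in here.

```python
# Illustrative semantic cache: return a cached classification when a new
# input is sufficiently similar to a previously seen one. Token Jaccard
# overlap stands in for embedding similarity.

def _tokens(text: str) -> set:
    return set(text.lower().split())

class SemanticCache:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold  # minimum similarity for a cache hit
        self._entries = []          # list of (token_set, result) pairs

    def get(self, text: str):
        query = _tokens(text)
        for cached_tokens, result in self._entries:
            union = query | cached_tokens
            if union and len(query & cached_tokens) / len(union) >= self.threshold:
                return result  # similar text seen before: reuse its result
        return None

    def put(self, text: str, result):
        self._entries.append((_tokens(text), result))

cache = SemanticCache(threshold=0.8)
cache.put("This product is absolutely amazing", "positive")
print(cache.get("this product is absolutely amazing"))  # -> positive
print(cache.get("terrible experience, would not buy"))  # -> None
```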
Installation
Basic Installation
```bash
pip install evolvishub-text-classification-llm
```
Provider-Specific Installation
```bash
# Install with specific providers
pip install evolvishub-text-classification-llm[openai,anthropic]

# Install with cloud providers
pip install evolvishub-text-classification-llm[azure_openai,aws_bedrock]

# Install with local inference
pip install evolvishub-text-classification-llm[huggingface,ollama]

# Full installation (all providers)
pip install evolvishub-text-classification-llm[all]
```
Development Installation
```bash
pip install evolvishub-text-classification-llm[dev]
```
Quick Start
Basic Classification
```python
import asyncio

from evolvishub_text_classification_llm import create_engine
from evolvishub_text_classification_llm.core.schemas import ProviderConfig, ProviderType, WorkflowConfig

# Configure your workflow
config = WorkflowConfig(
    name="sentiment_analysis",
    description="Analyze sentiment of customer reviews",
    providers=[
        ProviderConfig(
            provider_type=ProviderType.OPENAI,
            api_key="your-openai-api-key",
            model="gpt-4",
            max_tokens=150,
            temperature=0.1
        )
    ]
)

async def main():
    engine = create_engine(config)
    result = await engine.classify(
        text="This product is absolutely amazing! I love it.",
        categories=["positive", "negative", "neutral"]
    )
    print(f"Classification: {result.category}")
    print(f"Confidence: {result.confidence}")

asyncio.run(main())
```
Provider Configuration
OpenAI GPT Models
```python
from evolvishub_text_classification_llm.core.schemas import ProviderConfig, ProviderType

openai_config = ProviderConfig(
    provider_type=ProviderType.OPENAI,
    api_key="your-openai-api-key",
    model="gpt-4",
    max_tokens=150,
    temperature=0.1,
    timeout_seconds=30
)
```
Anthropic Claude
```python
anthropic_config = ProviderConfig(
    provider_type=ProviderType.ANTHROPIC,
    api_key="your-anthropic-api-key",
    model="claude-3-sonnet-20240229",
    max_tokens=150,
    temperature=0.1
)
```
Google Gemini
```python
google_config = ProviderConfig(
    provider_type=ProviderType.GOOGLE,
    api_key="your-google-api-key",
    model="gemini-pro",
    max_tokens=150,
    temperature=0.1
)
```
Cohere
```python
cohere_config = ProviderConfig(
    provider_type=ProviderType.COHERE,
    api_key="your-cohere-api-key",
    model="command",
    max_tokens=150,
    temperature=0.1
)
```
Azure OpenAI
```python
azure_config = ProviderConfig(
    provider_type=ProviderType.AZURE_OPENAI,
    api_key="your-azure-api-key",
    azure_endpoint="https://your-resource.openai.azure.com/",
    api_version="2024-02-15-preview",
    deployment_name="gpt-4",
    max_tokens=150
)
```
AWS Bedrock
```python
bedrock_config = ProviderConfig(
    provider_type=ProviderType.AWS_BEDROCK,
    aws_access_key_id="your-access-key",
    aws_secret_access_key="your-secret-key",
    aws_region="us-east-1",
    model="anthropic.claude-3-sonnet-20240229-v1:0",
    max_tokens=150
)
```
HuggingFace Transformers
```python
huggingface_config = ProviderConfig(
    provider_type=ProviderType.HUGGINGFACE,
    model="microsoft/DialoGPT-medium",
    device="cuda",  # or "cpu"
    max_tokens=150,
    temperature=0.1,
    load_in_8bit=True  # For memory optimization
)
```
Ollama (Local Inference)
```python
ollama_config = ProviderConfig(
    provider_type=ProviderType.OLLAMA,
    base_url="http://localhost:11434",
    model="llama2",
    max_tokens=150,
    temperature=0.1
)
```
Mistral AI
```python
mistral_config = ProviderConfig(
    provider_type=ProviderType.MISTRAL,
    api_key="your-mistral-api-key",
    model="mistral-large-latest",
    max_tokens=150,
    temperature=0.1
)
```
Replicate
```python
replicate_config = ProviderConfig(
    provider_type=ProviderType.REPLICATE,
    api_key="your-replicate-api-key",
    model="meta/llama-2-70b-chat",
    max_tokens=150,
    temperature=0.1
)
```
Custom Provider
```python
custom_config = ProviderConfig(
    provider_type=ProviderType.CUSTOM,
    api_key="your-api-key",
    base_url="https://your-api-endpoint.com",
    model="your-model",
    request_format="openai",  # or "anthropic", "custom"
    response_format="openai",
    max_tokens=150
)
```
Batch Processing
```python
from evolvishub_text_classification_llm import create_batch_processor

async def batch_example():
    processor = create_batch_processor(config)
    texts = [
        "This is great!",
        "I hate this product.",
        "It's okay, nothing special."
    ]
    results = await processor.process_batch(
        texts=texts,
        categories=["positive", "negative", "neutral"],
        batch_size=10,
        max_workers=4
    )
    for text, result in zip(texts, results):
        print(f"'{text}' -> {result.category} ({result.confidence:.2f})")
```
Monitoring and Health Checks
```python
from evolvishub_text_classification_llm import HealthChecker, MetricsCollector

async def monitoring_example():
    # Health monitoring
    health_checker = HealthChecker()
    health_checker.register_provider("openai", openai_provider)
    health_status = await health_checker.perform_health_check()
    print(f"System health: {health_status.overall_status}")

    # Metrics collection
    metrics = MetricsCollector()
    metrics.record_counter("requests_total", 1)
    metrics.record_histogram("response_time_ms", 150.5)

    # Export metrics
    prometheus_metrics = metrics.export_metrics("prometheus")
    print(prometheus_metrics)
```
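For readers unfamiliar with what a Prometheus export looks like, here is a minimal, hand-rolled sketch of the Prometheus text exposition format that exporters emit. This is illustrative only; the library's `MetricsCollector` produces this format for you via `export_metrics("prometheus")`.

```python
# Minimal illustration of the Prometheus text exposition format:
# each metric is preceded by HELP and TYPE comment lines.

class Counter:
    def __init__(self, name: str, help_text: str):
        self.name, self.help_text, self.value = name, help_text, 0.0

    def inc(self, amount: float = 1.0):
        self.value += amount

    def expose(self) -> str:
        # Render in Prometheus exposition format
        return (
            f"# HELP {self.name} {self.help_text}\n"
            f"# TYPE {self.name} counter\n"
            f"{self.name} {self.value}\n"
        )

requests_total = Counter("requests_total", "Total classification requests")
requests_total.inc()
print(requests_total.expose())
```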
Streaming
```python
async def streaming_example():
    engine = create_engine(config)
    async for chunk in engine.classify_stream(
        text="Analyze this long document...",
        categories=["technical", "business", "personal"]
    ):
        print(f"Partial result: {chunk}")
```
Configuration
Environment Variables
```bash
# Provider API Keys
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
export COHERE_API_KEY="your-cohere-key"

# Optional: Redis for caching
export REDIS_URL="redis://localhost:6379"

# Optional: Monitoring
export PROMETHEUS_PORT="8000"
```
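The exported variables can then be read at startup instead of hard-coding keys. The helper below is hypothetical (not part of the library's API) and simply shows the pattern of failing fast with a clear message when a required variable is missing.

```python
import os

# Hypothetical helper showing how the exported variables above can feed a
# provider configuration without hard-coding secrets in source code.

def api_key_from_env(var_name: str) -> str:
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; export it before starting the app")
    return key

# In real use you would read e.g. "OPENAI_API_KEY" and pass the result as
# ProviderConfig(api_key=...). A stand-in variable is set here so the
# example is self-contained.
os.environ["DEMO_API_KEY"] = "demo-key"
print(api_key_from_env("DEMO_API_KEY"))  # -> demo-key
```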
Advanced Configuration
```python
from evolvishub_text_classification_llm.core.config import LibraryConfig

config = LibraryConfig(
    # Caching
    enable_caching=True,
    cache_backend="redis",
    cache_ttl_seconds=3600,
    # Monitoring
    enable_monitoring=True,
    metrics_port=8000,
    health_check_interval=60,
    # Performance
    max_concurrent_requests=100,
    request_timeout_seconds=30,
    # Security
    enable_audit_logging=True,
    rate_limit_requests_per_minute=1000
)
```
Advanced Features
Provider Fallback
```python
# Configure multiple providers with fallback
config = WorkflowConfig(
    name="robust_classification",
    providers=[
        ProviderConfig(provider_type=ProviderType.OPENAI, priority=1),
        ProviderConfig(provider_type=ProviderType.ANTHROPIC, priority=2),
        ProviderConfig(provider_type=ProviderType.COHERE, priority=3)
    ],
    fallback_enabled=True
)
```
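Conceptually, `fallback_enabled` means: try providers in priority order and move to the next when one fails. The stand-alone sketch below illustrates that control flow with plain dictionaries as stand-ins for provider objects; it is not the library's internal code.

```python
# Sketch of priority-ordered provider fallback: try each provider in turn
# and only raise if every one of them fails.

def classify_with_fallback(providers, text):
    last_error = None
    for provider in sorted(providers, key=lambda p: p["priority"]):
        try:
            return provider["classify"](text)
        except Exception as exc:
            last_error = exc  # remember the failure, try the next provider
    raise RuntimeError("all providers failed") from last_error

def failing_provider(text):
    raise ConnectionError("primary provider is down")

providers = [
    {"priority": 1, "classify": failing_provider},
    {"priority": 2, "classify": lambda text: "positive"},
]
print(classify_with_fallback(providers, "Great product!"))  # -> positive
```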
Cost Optimization
```python
# Optimize for cost vs performance
config = WorkflowConfig(
    name="cost_optimized",
    optimization_strategy="cost",  # or "performance", "balanced"
    max_cost_per_request=0.01
)
```
Custom Workflows
```python
from evolvishub_text_classification_llm import WorkflowBuilder

builder = WorkflowBuilder()
workflow = (builder
    .add_preprocessing("clean_text")
    .add_classification("sentiment")
    .add_postprocessing("confidence_threshold", min_confidence=0.8)
    .build())

# Run inside an async function (or a REPL with top-level await):
result = await workflow.execute("Your text here")
```
Troubleshooting
Common Issues
1. Provider Authentication Errors
```python
# Verify API keys are set correctly
from evolvishub_text_classification_llm import ProviderFactory

if not ProviderFactory.is_provider_available("openai"):
    print("OpenAI provider not available - check API key")
```
2. Rate Limiting
```python
# Configure rate limiting and retries
config = ProviderConfig(
    provider_type=ProviderType.OPENAI,
    rate_limit_requests_per_minute=60,
    max_retries=3,
    retry_delay_seconds=1
)
```
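As background on what `max_retries` and `retry_delay_seconds` control, here is a generic retry-with-exponential-backoff sketch in plain Python. It mirrors the pattern the configuration above enables inside the library, but it is an illustration, not the library's own retry code.

```python
import time

# Generic retry with exponential backoff: re-invoke a callable after a
# delay that doubles on each failed attempt, up to max_retries retries.

def with_retries(fn, max_retries: int = 3, retry_delay_seconds: float = 1.0):
    delay = retry_delay_seconds
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the last error
            time.sleep(delay)
            delay *= 2  # back off exponentially between attempts

calls = {"n": 0}

def flaky():
    # Fails twice (e.g. simulated rate-limit errors), then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated rate-limit error")
    return "ok"

print(with_retries(flaky, max_retries=3, retry_delay_seconds=0.01))  # -> ok
```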
3. Memory Issues with Large Batches
```python
# Process in smaller chunks
processor = create_batch_processor(config)
results = await processor.process_batch(
    texts=large_text_list,
    batch_size=10,  # Reduce batch size
    max_workers=2   # Reduce concurrency
)
```
Performance Optimization
1. Enable Caching
```python
# Redis caching for better performance
config.enable_caching = True
config.cache_backend = "redis"
config.cache_ttl_seconds = 3600
```
2. Use Appropriate Models
```python
# For simple tasks, use faster models
fast_config = ProviderConfig(
    provider_type=ProviderType.OPENAI,
    model="gpt-3.5-turbo",  # Faster than gpt-4
    max_tokens=50  # Reduce for simple classifications
)
```
3. Batch Processing
```python
# Process multiple texts together
results = await processor.process_batch(
    texts=texts,
    batch_size=20,  # Optimal batch size
    max_workers=4   # Parallel processing
)
```
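Under the hood, `batch_size` amounts to splitting the input list into fixed-size slices so that memory use stays bounded regardless of dataset size. A plain-Python sketch of that chunking (illustrative, not the library's code):

```python
# Split a large list of texts into fixed-size batches; the last batch may
# be smaller when the list length is not a multiple of batch_size.

def chunked(items, batch_size):
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

texts = [f"text {i}" for i in range(25)]
batches = list(chunked(texts, batch_size=10))
print([len(b) for b in batches])  # -> [10, 10, 5]
```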
API Reference
Core Classes
- ClassificationEngine: Main engine for text classification
- BatchProcessor: Batch processing capabilities
- WorkflowBuilder: Build custom classification workflows
- ProviderFactory: Manage and create LLM providers
- HealthChecker: Monitor system health
- MetricsCollector: Collect and export metrics
Provider Types
- ProviderType.OPENAI: OpenAI GPT models
- ProviderType.ANTHROPIC: Anthropic Claude models
- ProviderType.GOOGLE: Google Gemini/PaLM models
- ProviderType.COHERE: Cohere Command models
- ProviderType.MISTRAL: Mistral AI models
- ProviderType.REPLICATE: Replicate hosted models
- ProviderType.HUGGINGFACE: HuggingFace Transformers
- ProviderType.AZURE_OPENAI: Azure OpenAI Service
- ProviderType.AWS_BEDROCK: AWS Bedrock models
- ProviderType.OLLAMA: Local Ollama models
- ProviderType.CUSTOM: Custom HTTP-based providers
Convenience Functions
- create_engine(config): Create a classification engine
- create_batch_processor(config): Create a batch processor
- get_supported_providers(): List available providers
- get_features(): List enabled features
Enterprise Support
For enterprise customers, we offer:
- Priority Support: 24/7 technical support
- Custom Integrations: Tailored solutions for your infrastructure
- On-Premise Deployment: Deploy in your own environment
- Advanced Security: SOC2, HIPAA, and GDPR compliance
- Custom Models: Fine-tuning and custom model development
- Professional Services: Implementation and consulting
Contact us at enterprise@evolvis.ai for more information.
License
This software is proprietary and owned by Evolvis AI. See the LICENSE file for details.
IMPORTANT: This is NOT open source software. Usage is subject to the terms and conditions specified in the license agreement.
Company Information
Evolvis AI
Website: https://evolvis.ai
Email: info@evolvis.ai

Author: Alban Maxhuni, PhD
Email: a.maxhuni@evolvis.ai
Support
For technical support, licensing inquiries, or enterprise solutions:
- Documentation: https://docs.evolvis.ai/text-classification-llm
- Enterprise Sales: m.miralles@evolvis.ai
- Technical Support: support@evolvis.ai
- General Inquiries: info@evolvis.ai
Copyright (c) 2025 Evolvis AI. All rights reserved.
Download files
Source Distribution
Built Distribution
File details
Details for the file evolvishub_text_classification_llm-1.1.0.tar.gz.
File metadata
- Download URL: evolvishub_text_classification_llm-1.1.0.tar.gz
- Upload date:
- Size: 167.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3b834e2dae4cd70194cb4203feee1d02f677adf62fe7fccd8e467e0b84c7b5e8 |
| MD5 | 440f7a4a958d51988f9630b7c8be3f7f |
| BLAKE2b-256 | 95526977c44af7994632aaf852cb5d511572c1cce4275ff1c520083d9b41fc17 |
File details
Details for the file evolvishub_text_classification_llm-1.1.0-py3-none-any.whl.
File metadata
- Download URL: evolvishub_text_classification_llm-1.1.0-py3-none-any.whl
- Upload date:
- Size: 102.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 73e2d2b02b85fa0983242fa3aa926bc711f2083606bed12493cd89452dbd7af6 |
| MD5 | 0afa8b396a223c7f3f334fae7de94737 |
| BLAKE2b-256 | 2a47495ac0cbaa89ffb8f150ed0605a7094f0d40cb8d255a664846f694a6791d |