
Emotion Framework

A comprehensive multimodal emotion recognition framework for video analysis powered by deep learning.

🎯 Features

  • Multimodal Analysis: Combines audio, visual, and text features for robust emotion recognition
  • Multiple Fusion Strategies: Choose from various fusion approaches (early, late, hybrid)
  • Pre-trained Models: Includes state-of-the-art models (RFRBoost, Attention-Deep, MLP Baseline)
  • Real-time Support: Process video streams in real-time with configurable window sizes
  • AI-Powered Insights: Optional LLM-based analysis for meeting insights
  • Mental Health Scoring: Comprehensive emotion-based mental health assessment
  • Easy Integration: Simple API for quick integration into your applications

📦 Installation

pip install emotion-framework

System Dependencies

The framework requires the following system-level dependencies:

Ubuntu/Debian:

sudo apt-get update
sudo apt-get install -y ffmpeg libgl1-mesa-glx libglib2.0-0

macOS:

brew install ffmpeg

Windows:

  • Install ffmpeg and add it to your PATH, then verify the installation with ffmpeg -version

🚀 Quick Start

from emotion_framework import EmotionAnalysisPipeline
from emotion_framework.core.config_loader import load_framework_config

# Initialize the pipeline
config = load_framework_config()
pipeline = EmotionAnalysisPipeline(config)

# Analyze a video
result = pipeline.analyze_video("path/to/video.mp4")

# Access results
print(f"Predicted Emotion: {result.prediction.predicted_emotion}")
print(f"Confidence: {result.prediction.confidence:.2f}")
print(f"Processing Time: {result.processing_time:.2f}s")

# Get temporal predictions
for temporal_pred in result.temporal_predictions:
    print(f"Time: {temporal_pred.timestamp}s - Emotion: {temporal_pred.emotion}")

# Mental health analysis
if result.mental_health_analysis:
    mh = result.mental_health_analysis
    print(f"Mental Health Score: {mh.mental_health_score}/100")
    print(f"Status: {mh.status}")
    print(f"Recommendation: {mh.recommendation}")

📊 Advanced Usage

Custom Configuration

from emotion_framework import EmotionAnalysisPipeline

# Create custom config
config = {
    "fusion_strategy": "hybrid",  # early, late, or hybrid
    "extract_audio": True,
    "extract_visual": True,
    "extract_text": True,
    "fps_for_analysis": 1,  # Extract 1 frame per second
}

pipeline = EmotionAnalysisPipeline(config)

# Analyze with options
options = {
    "fusion_strategy": "late",
    "run_ai_analysis": True,
    "llm_provider": "openai"
}

result = pipeline.analyze_video("video.mp4", options)

Real-time Analysis

from emotion_framework.core.realtime_pipeline import RealtimeEmotionAnalyzer

# Initialize real-time analyzer
analyzer = RealtimeEmotionAnalyzer(
    window_size=4.0,  # 4-second windows
    stride=1.0,       # 1-second stride
)

# Process video stream
for chunk_result in analyzer.analyze_stream("rtsp://camera-url"):
    print(f"Real-time emotion: {chunk_result.emotion}")

AI-Powered Meeting Analysis

import os
os.environ["OPENAI_API_KEY"] = "your-api-key"

options = {
    "run_ai_analysis": True,
    "llm_provider": "openai",
    "llm_model": "gpt-4"
}

result = pipeline.analyze_video("meeting.mp4", options)

if result.ai_analysis:
    print(f"Summary: {result.ai_analysis.summary}")
    print(f"Key Insights: {result.ai_analysis.key_insights}")
    print(f"Recommendations: {result.ai_analysis.recommendations}")

📖 API Reference

EmotionAnalysisPipeline

Main class for emotion analysis.

Methods:

  • analyze_video(video_path: str, options: dict = None) -> EmotionAnalysisResult

EmotionAnalysisResult

Contains all analysis results.

Attributes:

  • prediction: Overall emotion prediction
  • temporal_predictions: Frame-by-frame predictions
  • mental_health_analysis: Mental health assessment
  • transcription: Speech-to-text results
  • ai_analysis: AI-generated insights
  • metadata: Video metadata
  • features: Extracted features
  • processing_time: Total processing time
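
For logging or downstream processing it can be handy to flatten a result into plain data. A minimal sketch over the documented attributes, reusing the result object from the Quick Start; str() is used as a fallback because the attribute types are not guaranteed to be JSON-serializable:

import json

# Flatten the documented top-level attributes for logging
fields = [
    "prediction", "temporal_predictions", "mental_health_analysis",
    "transcription", "ai_analysis", "metadata", "processing_time",
]
summary = {name: str(getattr(result, name, None)) for name in fields}
print(json.dumps(summary, indent=2))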

🎨 Supported Emotions

  • Happy: Joy, contentment, positive emotions
  • Sad: Sorrow, disappointment, low mood
  • Angry: Frustration, irritation, rage
  • Fear: Anxiety, worry, nervousness
  • Surprise: Shock, amazement, unexpected reactions
  • Disgust: Aversion, repulsion, distaste
  • Neutral: Calm, balanced, no strong emotion
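
Assuming the temporal predictions use these seven labels, a per-video emotion distribution takes only a few lines. A sketch reusing the result object from the Quick Start:

from collections import Counter

# Tally how often each emotion appears across the temporal predictions
distribution = Counter(p.emotion for p in result.temporal_predictions)
for emotion, count in distribution.most_common():
    print(f"{emotion}: {count} windows")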

🧠 Models & Architecture

The framework uses a hierarchical approach:

  1. Feature Extraction

    • Audio: librosa, openSMILE, pyAudioAnalysis
    • Visual: OpenCV, MediaPipe, py-feat
    • Text: Transformers, BERT, sentence-transformers
  2. Fusion Strategies

    • Early Fusion: Combine features before classification
    • Late Fusion: Combine per-modality predictions after classification (see the sketch after this list)
    • Hybrid Fusion: Adaptive combination weighted by per-modality confidence
  3. Classification Models

    • RFRBoost: Random Feature Representation with Boosting
    • Attention-Deep: Deep learning with attention mechanisms
    • MLP Baseline: Multi-layer perceptron baseline
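
To make the fusion step concrete, the sketch below shows a toy confidence-weighted late fusion: each modality yields a probability vector over the seven emotions, and the vectors are averaged with per-modality weights. This is an illustration of the idea, not the framework's internal implementation:

import numpy as np

EMOTIONS = ["happy", "sad", "angry", "fear", "surprise", "disgust", "neutral"]

def late_fusion(modality_probs, weights):
    """Confidence-weighted average of per-modality probability vectors."""
    total = sum(weights.values())
    fused = sum(weights[m] * p for m, p in modality_probs.items()) / total
    return EMOTIONS[int(np.argmax(fused))]

# Toy probabilities: visual and text both point strongly at "happy"
probs = {
    "audio":  np.array([0.10, 0.20, 0.05, 0.05, 0.10, 0.05, 0.45]),
    "visual": np.array([0.60, 0.05, 0.05, 0.05, 0.10, 0.05, 0.10]),
    "text":   np.array([0.70, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05]),
}
weights = {"audio": 0.5, "visual": 1.0, "text": 1.5}
print(late_fusion(probs, weights))  # -> happy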

🔧 Configuration

Create a config.yaml file:

# Feature Extraction
extract_audio: true
extract_visual: true
extract_text: true
fps_for_analysis: 1

# Fusion Strategy
fusion_strategy: "hybrid"  # early, late, hybrid

# AI Analysis
enable_ai_analysis: false
llm_provider: "openai"  # or "local"
llm_model: "gpt-4"

# Paths (optional)
pretrained_models_path: "./pretrained"
temp_directory: "./temp"

Load it:

from emotion_framework.core.config_loader import load_framework_config

config = load_framework_config("path/to/config.yaml")
pipeline = EmotionAnalysisPipeline(config)
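
Values can also be overridden after loading, e.g. per-run toggles on top of a shared YAML file. A minimal sketch, assuming the loaded config is a dict-like mapping (as the dict-based example above suggests):

from emotion_framework import EmotionAnalysisPipeline
from emotion_framework.core.config_loader import load_framework_config

# Start from the shared YAML file, then override individual keys
config = load_framework_config("path/to/config.yaml")
config["fusion_strategy"] = "late"  # experiment with late fusion
config["fps_for_analysis"] = 2      # sample two frames per second

pipeline = EmotionAnalysisPipeline(config)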

🛠️ Development

Installation for Development

git clone https://github.com/yourusername/emotion-framework.git
cd emotion-framework
pip install -e ".[dev]"

Running Tests

pytest tests/

📋 Requirements

  • Python 3.8+
  • PyTorch 2.2+
  • OpenCV 4.8+
  • librosa 0.10+
  • transformers 4.30+
  • ffmpeg (system dependency)

See setup.py for the complete list.
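
A quick way to check installed versions against these minimums, using only the standard library (importlib.metadata is available from Python 3.8 onward):

from importlib.metadata import PackageNotFoundError, version

# Report installed versions of the core dependencies listed above
for pkg in ["torch", "opencv-python", "librosa", "transformers"]:
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")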

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Built with PyTorch, transformers, and OpenCV
  • Inspired by state-of-the-art multimodal emotion recognition research
  • Thanks to the open-source ML community

📧 Contact

For questions, issues, or contributions, please open an issue or pull request on the GitHub repository.

🗺️ Roadmap

  • GPU acceleration optimization
  • Additional fusion strategies
  • More pre-trained models
  • Web UI for demo
  • Cloud deployment support
  • Mobile SDK

Made with ❤️ by the Emotion Analysis Team
