# cltl-emotion-detection

Emotion detection component for Leolani.

## Project description

Detects emotions in text and in faces, and annotates the signal with the emotion and sentiment labels and scores.
## Getting started

### Prerequisites

This repository requires Python >= 3.9.

Be sure to run in a virtual Python environment (e.g. conda, venv, mkvirtualenv).
### Installation

1. Create a virtual environment:

   ```shell
   python -m venv venv
   source venv/bin/activate
   python -m pip install --upgrade pip
   ```

2. In the root directory of this repo, run:

   ```shell
   pip install -r requirements.txt
   ```
## Usage

### Examples

Please take a look at the example scripts provided to get an idea of how to run and use this package. Each example has a comment at the top of the script describing its behaviour.

To run the example scripts, you need to:

1. Create a repository on GraphDB Free called `sandbox`, and launch GraphDB before running any example script.
2. Change your current directory to `./examples/`.
3. Run an example (e.g. `python carl.py`).
The cltl-emotion-detection package has several APIs:
1. Calling the emotion detection modules directly
2. Reading an interaction saved in the EMISSOR format and annotating it with emotions
3. Integration in the Leolani event-bus platform

For calling the separate classifiers directly (1), each submodule needs to be called as defined separately. For (2) and (3), a uniform wrapper is provided.
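The idea of a uniform wrapper over several classifiers can be sketched as follows. All class and method names here are hypothetical illustrations of the pattern, not the actual cltl-emotion-detection API; the lexicon-based detector is a toy stand-in.

```python
# Minimal sketch of a uniform detector interface wrapping different
# classifiers. Names are hypothetical, not the package's real API.
from abc import ABC, abstractmethod


class EmotionDetector(ABC):
    """Uniform interface a wrapper could expose for every classifier."""

    @abstractmethod
    def detect(self, text: str) -> list[dict]:
        """Return emotion annotations as {label, type, score} dicts."""


class RuleBasedSentimentDetector(EmotionDetector):
    """Toy stand-in for a lexicon-based detector such as VADER."""

    POSITIVE = {"great", "happy", "love"}
    NEGATIVE = {"sad", "angry", "terrible"}

    def detect(self, text: str) -> list[dict]:
        words = set(text.lower().split())
        score = len(words & self.POSITIVE) - len(words & self.NEGATIVE)
        label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
        return [{"label": label, "type": "sentiment", "score": abs(score)}]


detector: EmotionDetector = RuleBasedSentimentDetector()
annotations = detector.detect("I love this great movie")
```

Code that consumes annotations (EMISSOR readers or event-bus services) can then stay agnostic of which classifier produced them.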
The annotations are pushed to the event bus and can be taken up for further processing:

- Respond directly to the emotion using the `emotion_responder`
- Create a capsule to add the emotion as a perspective to the episodic Knowledge Graph
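A capsule carrying an emotion as a perspective could look roughly like the sketch below. The field names and values are assumptions for illustration only, not the actual capsule schema expected by the episodic Knowledge Graph.

```python
# Hypothetical capsule packaging an emotion annotation as a perspective
# on an utterance. Field names are illustrative assumptions, not the
# real capsule schema.
emotion_capsule = {
    "chat": "chat_1",
    "turn": "utterance_3",
    "author": "Carl",
    "perspective": {
        "emotion": "joy",         # Ekman label
        "sentiment": "positive",  # derived sentiment
        "certainty": 0.87,        # classifier confidence
    },
    "context_id": "context_1",
}
```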
### Integration in the Leolani event-bus

Both the emotion extractors and the responder can be integrated in the event-bus to generate annotations in EMISSOR through the included `service.py`. In the configuration file of the event-bus, the input and output topics need to be specified, as well as the emotion detectors to use.
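Such a configuration fragment might look like the following. The section and key names are hypothetical; the sketch only illustrates that the input/output topics and the detector implementation are configurable.

```ini
; Hypothetical event-bus configuration fragment (names are illustrative only)
[cltl.emotion_recognition]
; which detector implementation to load
impl: GO

[cltl.emotion_recognition.events]
topic_input: cltl.topic.text_in
topic_output: cltl.topic.emotion
```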
### Emotion responder

A simple emotion responder is included that generates a direct text response to an emotion of a certain type (GO, emotic, Ekman, sentiment), given other properties such as score and source.
We implemented two separate emotion detection modules: 1) text-based emotion extraction and 2) face-based emotion extraction.

### Text-based emotion extraction
We implemented three different extractors:

- A BERT model fine-tuned with the 27 GO emotions provided by Google
- A RoBERTa model fine-tuned with labeled utterances from the Friends sitcom (the MELD data with Ekman emotions)
- VADER-NLTK, a rule-based system using a lexicon derived from tweets annotated with sentiment values
We provide a mapping from GO to Ekman and from GO to sentiment, as well as a mapping from Ekman to sentiment. Each extractor thus minimally generates sentiment values, and possibly also Ekman labels and GO labels above a threshold.
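The layered mapping can be sketched as follows. The mapping tables below contain only a few illustrative entries, not the full GO/Ekman mappings shipped with the package, and the function name is hypothetical.

```python
# Sketch of the GO -> Ekman -> sentiment mapping with a score threshold.
# The mapping entries are a small illustrative subset, not the full tables.
GO_TO_EKMAN = {
    "admiration": "joy",
    "annoyance": "anger",
    "fear": "fear",
    "grief": "sadness",
}
EKMAN_TO_SENTIMENT = {
    "joy": "positive",
    "anger": "negative",
    "fear": "negative",
    "sadness": "negative",
    "surprise": "neutral",
}


def map_scores(go_scores: dict, threshold: float = 0.5):
    """Keep GO labels above the threshold and derive Ekman and sentiment labels."""
    go = {label: s for label, s in go_scores.items() if s >= threshold}
    ekman = {GO_TO_EKMAN[label]: s for label, s in go.items() if label in GO_TO_EKMAN}
    sentiment = {EKMAN_TO_SENTIMENT[e]: s for e, s in ekman.items()}
    return go, ekman, sentiment


go, ekman, sentiment = map_scores({"admiration": 0.8, "grief": 0.3})
```

With the default threshold of 0.5, only the `admiration` score survives and is propagated down to `joy` and `positive`.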
### Face-based emotion extraction

We implemented the "emotic" module for face-based emotion detection in context. This module is trained on the emotic database with 26 emotions. The module determines the emotion on the basis of facial features, posture and the situation.
References:
- Kosti R., J.M. Alvarez, A. Recasens, and A. Lapedriza (2019), "Context Based Emotion Recognition using EMOTIC Dataset", IEEE Transactions on Pattern Analysis and Machine Intelligence.
- http://sunai.uoc.edu/emotic/index.html
- https://github.com/rkosti/emotic
- https://github.com/Tandon-A/emotic
The classifier returns emotion labels with a confidence score, as well as so-called Continuous Dimension scores for Arousal (strength), Valence (positive or negative) and Dominance:
- Mehrabian A. Framework for a comprehensive description and measurement of emotional states. Genet Soc Gen Psychol Monogr. 1995 Aug;121(3):339-61. PMID: 7557355.
Just as with the text-based emotions, we map the emotic labels to Ekman labels and sentiment.
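Reducing an emotic prediction to Ekman and sentiment labels, while keeping the continuous dimensions, could be sketched like this. The mapping entries, function name and result fields are illustrative assumptions, not the package's actual mapping.

```python
# Sketch of reducing an emotic prediction to Ekman and sentiment labels
# while preserving the continuous Valence dimension. Mapping entries and
# field names are illustrative assumptions.
EMOTIC_TO_EKMAN = {
    "affection": "joy",
    "annoyance": "anger",
    "fear": "fear",
    "suffering": "sadness",
}
EKMAN_TO_SENTIMENT = {
    "joy": "positive",
    "anger": "negative",
    "fear": "negative",
    "sadness": "negative",
}


def reduce_prediction(emotic_label: str, confidence: float, valence: float) -> dict:
    """Map an emotic label to Ekman and sentiment, keeping the VAD valence."""
    ekman = EMOTIC_TO_EKMAN.get(emotic_label)
    sentiment = EKMAN_TO_SENTIMENT.get(ekman, "neutral")
    return {
        "emotic": emotic_label,
        "ekman": ekman,
        "sentiment": sentiment,
        "confidence": confidence,
        "valence": valence,  # continuous dimension: positive vs. negative
    }


result = reduce_prediction("affection", confidence=0.9, valence=7.5)
```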