
A tool for extracting multimodal features from videos.


MMSA-Feature Extraction Toolkit

MMSA-Feature Extraction Toolkit extracts multimodal features for Multimodal Sentiment Analysis datasets. It integrates several commonly used tools for the visual, acoustic, and text modalities. The extracted features are compatible with the MMSA framework and thus can be used directly. The tool can also extract features from single videos.

This work is included in the ACL-2022 DEMO paper: M-SENA: An Integrated Platform for Multimodal Sentiment Analysis. If you find our work useful, don't hesitate to cite our paper. Thank you!

@article{mao2022m,
  title={M-SENA: An Integrated Platform for Multimodal Sentiment Analysis},
  author={Mao, Huisheng and Yuan, Ziqi and Xu, Hua and Yu, Wenmeng and Liu, Yihe and Gao, Kai},
  journal={arXiv preprint arXiv:2203.12441},
  year={2022}
}

Features

  • Extract fully customized features for single videos or datasets.
  • Integrate the most commonly used tools, including Librosa, OpenFace, Transformers, etc.
  • Support Active Speaker Detection in case multiple faces exist in a video.
  • Easy to use, with both Python APIs and command-line tools.
  • Extracted features are compatible with MMSA, a unified training & testing framework for Multimodal Sentiment Analysis.

1. Installation

MMSA-Feature Extraction Toolkit is available from PyPI. Due to the package size limit on PyPI, large model files cannot be shipped with the package, so users need to run a post-install command to download them. If you can't access Google Drive, please refer to this page for manual download.

# Install package from PyPI
$ pip install MMSA-FET
# Download models & libraries from Google Drive. Use --proxy if needed.
$ python -m MSA_FET install
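
If Google Drive is reachable only through a proxy, the --proxy option mentioned above can be used. The address below is a hypothetical placeholder, and the accepted argument format may differ from this sketch:

# Example only: route the download through a local proxy (address is illustrative)
$ python -m MSA_FET install --proxy http://127.0.0.1:7890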

Note: A few system-wide dependencies need to be installed manually. See Dependency Installation for more information.

2. Quick Start

MMSA-FET is fairly easy to use. You can either call the APIs from Python or use the command-line interface. Below is a basic example using the Python APIs.

Note: To extract features for datasets, the datasets need to be organized in a specific file structure, and a label.csv file is needed. See Dataset and Structure for details. Raw video files and label files for MOSI, MOSEI and CH-SIMS can be downloaded from BaiduYunDisk or Google Drive.
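
For orientation, the sketch below shows roughly how such a dataset directory might be laid out. The folder names and label.csv columns here are illustrative assumptions, not the authoritative format; see the linked Dataset and Structure page for the exact requirements.

~/MOSI
├── label.csv          # clip metadata, e.g. video_id, clip_id, text, label, ...
└── Raw
    ├── video_0001
    │   ├── clip_0001.mp4
    │   └── clip_0002.mp4
    └── video_0002
        └── clip_0001.mp4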

from MSA_FET import FeatureExtractionTool

# initialize with default librosa config which only extracts audio features
fet = FeatureExtractionTool("librosa")

# alternatively initialize with a custom config file
fet = FeatureExtractionTool("custom_config.json")

# extract features for single video
feature = fet.run_single("input.mp4")
print(feature)

# extract for dataset & save features to file
feature = fet.run_dataset(dataset_dir="~/MOSI", out_file="output/feature.pkl")
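
The exact contents of the returned feature depend on the config used. Assuming it is a dictionary keyed by modality with array-like values (an assumption for illustration, not a guarantee), it can be inspected like this:

# inspect the extracted features; the keys depend on the chosen config
for name, value in feature.items():
    print(name, getattr(value, "shape", type(value)))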

The custom_config.json is the path to a custom config file, the format of which is introduced below.

For detailed usage, please read APIs and Command Line Arguments.

3. Config File

MMSA-FET comes with a few example configs, which can be used as shown below.

# Each supported tool has an example config
fet = FeatureExtractionTool(config="librosa")
fet = FeatureExtractionTool(config="opensmile")
fet = FeatureExtractionTool(config="wav2vec")
fet = FeatureExtractionTool(config="openface")
fet = FeatureExtractionTool(config="mediapipe")
fet = FeatureExtractionTool(config="bert")
fet = FeatureExtractionTool(config="roberta")

For customized features, you can:

  1. Edit the default configs and pass the resulting dictionary to the config parameter, as in the example below:
from MSA_FET import FeatureExtractionTool, get_default_config

# here we only extract audio and video features
config_a = get_default_config('opensmile')
config_v = get_default_config('openface')

# modify default config
config_a['audio']['args']['feature_level'] = 'LowLevelDescriptors'

# combine audio and video configs
config = {**config_a, **config_v}

# initialize
fet = FeatureExtractionTool(config=config)
  2. Provide a custom config JSON file. The example below extracts features for all three modalities. To extract unimodal features, simply remove the unnecessary sections from the file.
{
  "audio": {
    "tool": "librosa",
    "sample_rate": null,
    "args": {
      "mfcc": {
        "n_mfcc": 20,
        "htk": true
      },
      "rms": {},
      "zero_crossing_rate": {},
      "spectral_rolloff": {},
      "spectral_centroid": {}
    }
  },
  "video": {
    "tool": "openface",
    "fps": 25,
    "average_over": 3,
    "args": {
      "hogalign": false,
      "simalign": false,
      "nobadaligned": false,
      "landmark_2D": true,
      "landmark_3D": false,
      "pdmparams": false,
      "head_pose": true,
      "action_units": true,
      "gaze": true,
      "tracked": false
    }
  },
  "text": {
    "model": "bert",
    "device": "cpu",
    "pretrained": "models/bert_base_uncased",
    "args": {}
  }
}
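
Saved as custom_config.json, a file like the one above can then be passed by path, exactly as in the Quick Start example:

from MSA_FET import FeatureExtractionTool

# load the custom config and extract audio, video and text features
fet = FeatureExtractionTool("custom_config.json")
feature = fet.run_single("input.mp4")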

4. Supported Tools & Features

4.1 Audio Tools

  • Librosa

    Extracts handcrafted acoustic features such as MFCCs, RMS energy, zero-crossing rate, and spectral statistics, as in the example config in Section 3.

  • openSMILE

    Extracts openSMILE feature sets at the functional or low-level-descriptor level (see the feature_level option in Section 3).

  • Wav2Vec

    Extracts embeddings from a pretrained wav2vec model.

4.2 Video Tools

  • OpenFace (link)

    Supports all features of OpenFace's FeatureExtraction binary, including facial landmarks in 2D and 3D, head pose, gaze-related features, facial action units, and HOG binary files. Details of these features can be found in the OpenFace Wiki here and here. Detailed configurations can be found here.

  • MediaPipe (link)

    Supports the face mesh and holistic (face, hand, pose) solutions. Detailed configurations can be found here.

  • TalkNet (link)

    TalkNet is used to support Active Speaker Detection in case there are multiple human faces in the video.

4.3 Text Tools

  • BERT (link)

    Integrated from huggingface transformers. Detailed configurations can be found here.

  • XLNet (link)

    Integrated from huggingface transformers. Detailed configurations can be found here.


Download files


Source Distribution

MMSA-FET-0.4.1.tar.gz (59.2 kB, Source)

Built Distribution

MMSA_FET-0.4.1-py3-none-any.whl (72.3 kB, Python 3)

File details

Details for the file MMSA-FET-0.4.1.tar.gz.

File metadata

  • Download URL: MMSA-FET-0.4.1.tar.gz
  • Upload date:
  • Size: 59.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.15

File hashes

Hashes for MMSA-FET-0.4.1.tar.gz
  • SHA256: 95f0c7cf2a396541a86313e251caca45065fd75b1d6390bc6c4ae4de484d5c5a
  • MD5: 97417412196e6add029dcb8d80e4c166
  • BLAKE2b-256: 3a0f83816f5865bd85e2c4c5bf50d37ea24d7b166137582971acb508f7e0c5d2


File details

Details for the file MMSA_FET-0.4.1-py3-none-any.whl.

File metadata

  • Download URL: MMSA_FET-0.4.1-py3-none-any.whl
  • Upload date:
  • Size: 72.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.15

File hashes

Hashes for MMSA_FET-0.4.1-py3-none-any.whl
  • SHA256: 37c8ecfeea1d8dbdc59d666e021c45b12aeb1b4cd4cff07df986075685b26d5b
  • MD5: 950317041106a57e4621290e1446170f
  • BLAKE2b-256: bc32d0c8cf5d88c50a645845671cb0d875fb320405a981d284919853103c9881

