
Vital sign estimation from facial video


vitallens-python


Estimate vital signs such as heart rate and respiratory rate from video.

vitallens-python is a Python client for the VitalLens API, using the same neural net for inference as our free iOS app VitalLens. Furthermore, it includes fast implementations of several other heart rate estimation methods from video such as G, CHROM, and POS.

  • Accepts as input either a video filepath or an in-memory video as np.ndarray
  • Performs fast face detection if required - you can also pass existing detections
  • vitallens.Method.VITALLENS supports heart rate, respiratory rate, pulse waveform, and respiratory waveform estimation. In addition, it returns an estimation confidence for each vital. We are working to support more vital signs in the future.
  • vitallens.Method.{G/CHROM/POS} support faster, but less accurate, heart rate and pulse waveform estimation.
  • While VITALLENS requires an API Key, G, CHROM, and POS do not. Register on our website to get a free API Key.

Estimate vitals in a few lines of code:

from vitallens import VitalLens, Method

vl = VitalLens(method=Method.VITALLENS, api_key="YOUR_API_KEY")
result = vl("video.mp4")
print(result)

Disclaimer

vitallens-python provides vital sign estimates for general wellness purposes only. It is not intended for medical use. Always consult with your doctor for any health concerns or for medically precise measurement.

See also our Terms of Service for the VitalLens API and our Privacy Policy.

Installation

General prerequisites are python>=3.8 and ffmpeg installed and accessible via the $PATH environment variable.
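
You can verify both prerequisites from a terminal, for example:

python --version
ffmpeg -version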

The easiest way to install the latest version of vitallens-python and its Python dependencies:

pip install vitallens

Alternatively, install from source by cloning the repository:

git clone https://github.com/Rouast-Labs/vitallens-python.git
pip install ./vitallens-python

How to use

To start using vitallens-python, first create an instance of vitallens.VitalLens. It can be configured using the following parameters:

Parameter | Description | Default
method | Inference method: Method.VITALLENS, Method.POS, Method.CHROM, or Method.G. | Method.VITALLENS
api_key | Usage key for the VitalLens API (required for Method.VITALLENS). | None
detect_faces | True if faces need to be detected, otherwise False. | True
fdet_max_faces | The maximum number of faces to detect (if necessary). | 2
fdet_fs | Frequency [Hz] at which faces should be scanned; detections between scans are linearly interpolated. | 1.0
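
For example, a constructor call that sets each of these parameters explicitly might look like the following sketch (values are illustrative, not recommendations):

from vitallens import VitalLens, Method

# Illustrative configuration; api_key is only required for Method.VITALLENS.
vl = VitalLens(
  method=Method.VITALLENS,  # inference method to run
  api_key="YOUR_API_KEY",   # your VitalLens API key
  detect_faces=True,        # let vitallens run its own face detection
  fdet_max_faces=1,         # detect at most one face
  fdet_fs=1.0)              # scan for faces once per second; interpolate in between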

Once instantiated, vitallens.VitalLens can be called to estimate vitals. This can also be configured using the following parameters:

Parameter | Description | Default
video | The video to analyze: either a path to a video file or an np.ndarray. More info here. |
faces | Face detections. Ignored unless detect_faces=False. More info here. | None
fps | Sampling frequency of the input video. Required if video is an np.ndarray. | None
override_fps_target | Target frequency for inference (optional; the method's default is used otherwise). | None
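
As a sketch of how these call parameters combine when face detection is disabled: the bounding-box format below is an assumption for illustration only; refer to the linked documentation for the exact expected shapes of faces and video.

import numpy as np
from vitallens import VitalLens, Method

n_frames = 300
# Assumed format: one bounding box [x0, y0, x1, y1] in pixels per frame for a single face.
my_faces = np.tile(np.array([200, 100, 500, 400]), (n_frames, 1))
# Placeholder frames standing in for a real (n_frames, height, width, 3) RGB video.
my_video_arr = np.random.randint(0, 255, size=(n_frames, 480, 640, 3), dtype=np.uint8)

vl = VitalLens(method=Method.POS, detect_faces=False)
result = vl(my_video_arr, faces=my_faces, fps=30.0, override_fps_target=30.0)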

The estimation results are returned as a list. It contains a dict for each distinct face, with the following structure:

[
  {
    'face': <face coords for each frame as np.ndarray of shape (n_frames, 4)>,
    'pulse': {
      'val': <estimated pulse waveform val for each frame as np.ndarray of shape (n_frames,)>,
      'conf': <estimation confidence for each frame as np.ndarray of shape (n_frames,)>,
    },
    'resp': {
      'val': <estimated respiration waveform val for each frame as np.ndarray of shape (n_frames,)>,
      'conf': <estimation confidence for each frame as np.ndarray of shape (n_frames,)>,
    },
    'hr': {
      'val': <estimated heart rate as float scalar>,
      'conf': <estimation confidence as float scalar>,
    },
    'rr': {
      'val': <estimated respiratory rate as float scalar>,
      'conf': <estimation confidence as float scalar>,
    },
    'live': <liveness estimation for each frame as np.ndarray of shape (n_frames,)>,
  },
  { 
    <same structure for face 2 if present>
  },
  ...
]
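
For instance, continuing from the quick-start example above with Method.VITALLENS, the aggregate estimates for the first detected face could be read like this (a minimal sketch; rates are assumed to be reported per minute):

face_0 = result[0]
print("Heart rate: {:.1f} (confidence {:.2f})".format(face_0['hr']['val'], face_0['hr']['conf']))
print("Respiratory rate: {:.1f} (confidence {:.2f})".format(face_0['rr']['val'], face_0['rr']['conf']))
# Per-frame waveforms, each an np.ndarray of shape (n_frames,):
pulse_waveform = face_0['pulse']['val']
resp_waveform = face_0['resp']['val']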

Example: Use VitalLens API to estimate vitals from a video file

from vitallens import VitalLens, Method

vl = VitalLens(method=Method.VITALLENS, api_key="YOUR_API_KEY")
result = vl("video.mp4")

Example: Use POS method on an np.ndarray of video frames

from vitallens import VitalLens, Method

my_video_arr = ...
my_video_fps = 30
vl = VitalLens(method=Method.POS)
result = vl(my_video_arr, fps=my_video_fps)

Linting and tests

Before running tests, please make sure that you have an environment variable VITALLENS_DEV_API_KEY set to a valid API Key. To lint and run tests:

flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
pytest
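
On Linux or macOS, the key can be set for the current shell session like so:

export VITALLENS_DEV_API_KEY="YOUR_DEV_API_KEY"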

Build

To build:

python -m build
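
This assumes the build frontend is available; if not, it can typically be installed with:

pip install build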
