
vitallens-python


Estimate vital signs such as heart rate and respiratory rate from video.

vitallens-python is a Python client for the VitalLens API, using the same neural net for inference as our free iOS app VitalLens. Furthermore, it includes fast implementations of several other heart rate estimation methods from video such as G, CHROM, and POS.

  • Accepts as input either a video filepath or an in-memory video as np.ndarray
  • Performs fast face detection if required - you can also pass existing detections
  • vitallens.Method.VITALLENS supports heart rate, respiratory rate, pulse waveform, and respiratory waveform estimation. In addition, it returns an estimation confidence for each vital. We are working to support more vital signs in the future.
  • vitallens.Method.{G/CHROM/POS} support faster, but less accurate heart rate and pulse waveform estimation.
  • While VITALLENS requires an API Key, G, CHROM, and POS do not. Register on our website to get a free API Key.

Estimate vitals in a few lines of code:

from vitallens import VitalLens, Method

vl = VitalLens(method=Method.VITALLENS, api_key="YOUR_API_KEY")
result = vl("video.mp4")
print(result)

Disclaimer

vitallens-python provides vital sign estimates for general wellness purposes only. It is not intended for medical use. Always consult your doctor for any health concerns or for medically precise measurements.

See also our Terms of Service for the VitalLens API and our Privacy Policy.

Installation

General prerequisites are python>=3.8 and ffmpeg installed and accessible via the $PATH environment variable.

The easiest way to install the latest version of vitallens-python and its Python dependencies:

pip install vitallens

Alternatively, install from source by cloning the repository:

git clone https://github.com/Rouast-Labs/vitallens-python.git
pip install ./vitallens-python

How to use

To start using vitallens-python, first create an instance of vitallens.VitalLens. It can be configured using the following parameters:

  • method: Inference method. One of Method.VITALLENS, Method.POS, Method.CHROM, or Method.G. Default: Method.VITALLENS
  • mode: Operation mode. Mode.BATCH for independent videos or Mode.BURST for a video stream. Default: Mode.BATCH
  • api_key: Usage key for the VitalLens API (required for Method.VITALLENS). Default: None
  • detect_faces: True if faces need to be detected, otherwise False. Default: True
  • estimate_running_vitals: Set True to compute running vitals (e.g., running_heart_rate). Default: True
  • fdet_max_faces: The maximum number of faces to detect (if necessary). Default: 1
  • fdet_fs: Frequency [Hz] at which faces should be scanned; detections in between are linearly interpolated. Default: 1.0
  • export_to_json: If True, write results to a json file. Default: True
  • export_dir: The directory to which json files are written. Default: . (current directory)
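
For example, a batch-mode instance that tracks up to two faces and skips JSON export could be configured as follows (a sketch based on the parameters above; it assumes Mode is importable from the package alongside VitalLens and Method, and "YOUR_API_KEY" is a placeholder):

```python
from vitallens import VitalLens, Method, Mode

vl = VitalLens(
    method=Method.VITALLENS,   # neural-net inference via the VitalLens API
    mode=Mode.BATCH,           # each call analyzes an independent video
    api_key="YOUR_API_KEY",    # required for Method.VITALLENS
    detect_faces=True,         # run the built-in face detector
    fdet_max_faces=2,          # detect up to two faces per video
    export_to_json=False,      # keep results in memory only
)
```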

Once instantiated, vitallens.VitalLens can be called to estimate vitals. In Mode.BATCH, each call is assumed to operate on an independent video, whereas in Mode.BURST, subsequent calls are expected to pass the next frames of the same video stream as np.ndarray. Calls are configured using the following parameters:

  • video: The video to analyze. Either a path to a video file or an np.ndarray. More info here. Required
  • faces: Face detections. Ignored unless detect_faces=False. More info here. Default: None
  • fps: Sampling frequency of the input video. Required if video is an np.ndarray. Default: None
  • override_fps_target: Target frequency for inference (optional; the method's default is used otherwise). Default: None
  • export_filename: Filename for json export if applicable. Default: None
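
For instance, a call that supplies precomputed face detections might look like this (a sketch: the bounding-box format shown, one (x0, y0, x1, y1) box per frame, is an assumption to be checked against the linked docs, and the video array is elided as in the examples below):

```python
import numpy as np
from vitallens import VitalLens, Method

# Face detection is skipped, so detections must be supplied per call.
vl = VitalLens(method=Method.POS, detect_faces=False)

my_video_arr = ...                                # in-memory video frames
faces = np.array([[100, 50, 220, 190]] * 300)     # assumed (x0, y0, x1, y1) per frame
result = vl(my_video_arr, faces=faces, fps=30.0)  # fps required for np.ndarray input
```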

The estimation results are returned as a list. It contains a dict for each distinct face, with the following structure:

[
  {
    'face': {
      'coordinates': <Face coordinates for each frame as np.ndarray of shape (n_frames, 4)>,
      'confidence': <Face live confidence for each frame as np.ndarray of shape (n_frames,)>,
      'note': <Explanatory note>
    },
    'vital_signs': {
      'heart_rate': {
        'value': <Estimated global value as float scalar>,
        'unit': <Value unit>,
        'confidence': <Estimation confidence as float scalar>,
        'note': <Explanatory note>
      },
      'respiratory_rate': {
        'value': <Estimated global value as float scalar>,
        'unit': <Value unit>,
        'confidence': <Estimation confidence as float scalar>,
        'note': <Explanatory note>
      },
      'ppg_waveform': {
        'data': <Estimated waveform value for each frame as np.ndarray of shape (n_frames,)>,
        'unit': <Data unit>,
        'confidence': <Estimation confidence for each frame as np.ndarray of shape (n_frames,)>,
        'note': <Explanatory note>
      },
      'respiratory_waveform': {
        'data': <Estimated waveform value for each frame as np.ndarray of shape (n_frames,)>,
        'unit': <Data unit>,
        'confidence': <Estimation confidence for each frame as np.ndarray of shape (n_frames,)>,
        'note': <Explanatory note>
      },
    },
    'message': <Message about estimates>
  },
  { 
    <same structure for face 2 if present>
  },
  ...
]
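
A minimal sketch of reading these fields, using a hand-built stand-in for a real result (all values below are illustrative, not produced by the library):

```python
import numpy as np

n_frames = 300

# Stand-in for the returned list with one detected face (illustrative values only).
result = [{
    'face': {
        'coordinates': np.zeros((n_frames, 4)),
        'confidence': np.ones(n_frames),
        'note': 'Face detected.',
    },
    'vital_signs': {
        'heart_rate': {'value': 62.0, 'unit': 'bpm',
                       'confidence': 0.98, 'note': 'Estimate OK.'},
        'ppg_waveform': {'data': np.zeros(n_frames), 'unit': 'unitless',
                         'confidence': np.ones(n_frames), 'note': 'Estimate OK.'},
    },
    'message': 'OK',
}]

# Iterate over faces and report the scalar vitals with their confidence.
for face in result:
    hr = face['vital_signs']['heart_rate']
    print(f"Heart rate: {hr['value']:.1f} {hr['unit']} "
          f"(confidence {hr['confidence']:.2f})")
```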

If the video is long enough and estimate_running_vitals=True, the results additionally contain running vitals:

[
  {
    ...
    'vital_signs': {
      ...
      'running_heart_rate': {
        'data': <Estimated value for each frame as np.ndarray of shape (n_frames,)>,
        'unit': <Value unit>,
        'confidence': <Estimation confidence for each frame as np.ndarray of shape (n_frames,)>,
        'note': <Explanatory note>
      },
      'running_respiratory_rate': {
        'data': <Estimated value for each frame as np.ndarray of shape (n_frames,)>,
        'unit': <Value unit>,
        'confidence': <Estimation confidence for each frame as np.ndarray of shape (n_frames,)>,
        'note': <Explanatory note>
      }
    }
  ...
  },
  ...
]
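
Running vitals are per-frame arrays, so they can be filtered and summarized directly. A minimal sketch with a hand-built stand-in entry (illustrative values, not library output):

```python
import numpy as np

n_frames = 300

# Stand-in for a running vital entry (illustrative values only).
running_hr = {
    'data': 60.0 + 2.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, n_frames)),
    'unit': 'bpm',
    'confidence': np.ones(n_frames),
    'note': 'Running estimate.',
}

# Keep only samples with reasonable confidence before summarizing.
mask = running_hr['confidence'] > 0.5
mean_hr = running_hr['data'][mask].mean()
print(f"Mean running heart rate: {mean_hr:.1f} {running_hr['unit']}")
```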

Examples to get started

Live test with webcam in real-time

Test vitallens in real-time with your webcam using the script examples/live.py. This uses Mode.BURST to update results continuously (approx. every 2 seconds for Method.VITALLENS). Some options are available:

  • method: Choose from [VITALLENS, POS, G, CHROM] (Default: VITALLENS)
  • api_key: Pass your API Key. Required if using method=VITALLENS.

You may need to install requirements first: pip install opencv-python

python examples/live.py --method=VITALLENS --api_key=YOUR_API_KEY
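
The core of such a live loop can be sketched as follows (an assumption-laden simplification of examples/live.py: it assumes Mode.BURST accepts successive frame chunks from the same stream, that frames should be RGB, and that a webcam is available; the real script handles capture and display more carefully):

```python
import cv2
import numpy as np
from vitallens import VitalLens, Method, Mode

# One VitalLens instance is reused across calls so BURST mode can
# treat successive chunks as a continuation of the same stream.
vl = VitalLens(method=Method.VITALLENS, mode=Mode.BURST,
               api_key="YOUR_API_KEY")

cap = cv2.VideoCapture(0)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if the camera reports 0

try:
    while True:
        # Collect roughly two seconds of frames, then request an update.
        frames = []
        for _ in range(int(fps * 2)):
            ok, frame = cap.read()
            if not ok:
                raise StopIteration
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        result = vl(np.stack(frames), fps=fps)
        print(result[0]['vital_signs'])
except (StopIteration, KeyboardInterrupt):
    pass
finally:
    cap.release()
```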

Compare results with gold-standard labels using our example script

There is an example Python script in examples/test.py which uses Mode.BATCH to run vitals estimation and plot the predictions against ground truth labels recorded with gold-standard medical equipment. Some options are available:

  • method: Choose from [VITALLENS, POS, G, CHROM] (Default: VITALLENS)
  • video_path: Path to video (Default: examples/sample_video_1.mp4)
  • vitals_path: Path to gold-standard vitals (Default: examples/sample_vitals_1.csv)
  • api_key: Pass your API Key. Required if using method=VITALLENS.

You may need to install requirements first: pip install matplotlib pandas

For example, to reproduce the results from the banner image on the VitalLens API Webpage:

python examples/test.py --method=VITALLENS --video_path=examples/sample_video_2.mp4 --vitals_path=examples/sample_vitals_2.csv --api_key=YOUR_API_KEY

This sample is kindly provided by the VitalVideos dataset.

Use VitalLens API to estimate vitals from a video file

from vitallens import VitalLens, Method

vl = VitalLens(method=Method.VITALLENS, api_key="YOUR_API_KEY")
result = vl("video.mp4")

Use POS method on an np.ndarray of video frames

from vitallens import VitalLens, Method

my_video_arr = ...
my_video_fps = 30
vl = VitalLens(method=Method.POS)
result = vl(my_video_arr, fps=my_video_fps)

Run example script with Docker

If you encounter issues installing vitallens-python dependencies directly, you can use our Docker image, which contains all necessary tools and libraries. This Docker image is set up to execute the example Python script in examples/test.py for you.

Prerequisites

  • Docker installed on your system.

Usage

  1. Clone the repository:

git clone https://github.com/Rouast-Labs/vitallens-python.git && cd vitallens-python

  2. Build the Docker image:

docker build -t vitallens .

  3. Run the Docker container. To run the example script on the sample video:

docker run vitallens \
  --api_key "your_api_key_here" \
  --vitals_path "examples/sample_vitals_2.csv" \
  --video_path "examples/sample_video_2.mp4" \
  --method "VITALLENS"

You can also run it on your own video:

docker run vitallens \
  --api_key "your_api_key_here" \
  --video_path "path/to/your/video.mp4" \
  --method "VITALLENS"

  4. View the results. The results will print to the console in text form.

Please note that the example script's plots won't display when running through Docker. To get the plot as an image file instead, copy it out of the container:

docker cp <container_id>:/app/results.png .

Linting and tests

Before running tests, please make sure that you have an environment variable VITALLENS_DEV_API_KEY set to a valid API Key. To lint and run tests:

flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
pytest

Build

To build:

python -m build
