
Video reading into numpy

Project description

video2numpy

[PyPI] [Open In Colab] [Try it on gitpod]

Optimized library for large-scale extraction of frames and audio from video.

Install

pip install video2numpy

Or build from source:

python setup.py install

Usage

NAME
    video2numpy - Read frames from videos and save as numpy arrays

SYNOPSIS
    video2numpy SRC <flags>

DESCRIPTION
    Input:
    src:
        str: path to an mp4 file
        str: YouTube link
        str: path to a txt file with multiple mp4 paths or YouTube links
        list: list of multiple mp4 paths or YouTube links
    dest:
        str: directory to save frames to
        None: dest = src + .npy
    take_every_nth:
        int: only take every nth frame
    resize_size:
        int: new pixel height and width of resized frame
    workers:
        int: number of workers used to read videos
    memory_size:
        int: number of GB of shared memory used for reading; use more shared memory when reading more videos

POSITIONAL ARGUMENTS
    SRC

FLAGS
    --dest=DEST
        Default: ''
    --take_every_nth=TAKE_EVERY_NTH
        Default: 1
    --resize_size=RESIZE_SIZE
        Default: 224
    --workers=WORKERS
        Default: 1
    --memory_size=MEMORY_SIZE
        Default: 4

NOTES
    You can also use flags syntax for POSITIONAL ARGUMENTS
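
For example, to read every 2nd frame of the videos listed in a txt file and save the resulting arrays to a directory, using only the flags documented above (the paths are placeholders):

video2numpy some/path/links.txt --dest=some/path/frames --take_every_nth=2 --resize_size=224 --workers=4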

API

This module exposes a single function, video2numpy, which takes the same arguments as the command-line tool:

import glob
from video2numpy import video2numpy

VIDS = glob.glob("some/path/my_videos/*.mp4")
FRAME_DIR = "some/path/my_frames"
take_every_5 = 5

video2numpy(VIDS, FRAME_DIR, take_every_5)
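
Each video should then end up as a .npy file in FRAME_DIR. A quick way to inspect the results, as a sketch (the exact output filenames depend on the source video names, and the frame shape depends on resize_size, 224 by default):

import glob
import numpy as np

# Load every saved frame array and print its shape, e.g. (n_frames, 224, 224, 3).
for npy_path in glob.glob("some/path/my_frames/*.npy"):
    frames = np.load(npy_path)
    print(npy_path, frames.shape)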

You can also directly use the reader and iterate over videos yourself:

import glob
from video2numpy.frame_reader import FrameReader

VIDS = glob.glob("some/path/my_videos/*.mp4")
take_every_5 = 5
resize_size = 300
batch_size = 64 # output shape will be (n, batch_size, height, width, 3)

reader = FrameReader(VIDS, take_every_5, resize_size, batch_size)
reader.start_reading()

for vid_frames, info_dict in reader:
    # info_dict["dst_name"] - name for saving numpy array
    # info_dict["pad_by"] - how many pad frames were added to final block so n_frames % batch_size == 0
    # do something with vid_frames of shape (n_blocks, 64, 300, 300, 3)
    ...
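
Since the final block is padded so that n_frames % batch_size == 0, you may want to drop those pad frames before saving. A minimal sketch of a loop body, assuming vid_frames has the (n_blocks, batch_size, height, width, 3) shape described above:

import numpy as np

# Collapse the block dimension into a single frame axis.
frames = vid_frames.reshape(-1, *vid_frames.shape[2:])

# Drop the pad frames appended to the final block, if any.
if info_dict["pad_by"] > 0:
    frames = frames[: -info_dict["pad_by"]]

# Save under the suggested name (np.save appends .npy if it is missing).
np.save(info_dict["dst_name"], frames)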

For development

Either work locally or in gitpod (run export PIP_USER=false there).

Set up a virtualenv:

python3 -m venv .env
source .env/bin/activate
pip install -e .

To run the tests:

pip install -r requirements-test.txt

then

make lint
make test

You can use make black to reformat the code.

You can run a specific test with python -m pytest -x -s -v tests -k "dummy"



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

video2numpy-2.3.3.tar.gz (13.2 kB)

Built Distribution

video2numpy-2.3.3-py3-none-any.whl (16.1 kB)

File details

Details for the file video2numpy-2.3.3.tar.gz.

File metadata

  • Download URL: video2numpy-2.3.3.tar.gz
  • Upload date:
  • Size: 13.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.18

File hashes

Hashes for video2numpy-2.3.3.tar.gz
  • SHA256: 291fb20f3f75627f4f073fab7c4dbec7d8ef43b41dcd4e3ec67e4e7b388fa0e9
  • MD5: 1d56758aff6799905fd353d4b3d4f6e1
  • BLAKE2b-256: aa4c803efa4f670410f49fa482be4be8c3eeaa3e93b155072670cbc351308342


File details

Details for the file video2numpy-2.3.3-py3-none-any.whl.

File metadata

  • Download URL: video2numpy-2.3.3-py3-none-any.whl
  • Upload date:
  • Size: 16.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.18

File hashes

Hashes for video2numpy-2.3.3-py3-none-any.whl
  • SHA256: f5716a5e902fac19f851bbea59cde91a7c61b2113ee1cf7fce4f8f81eb7864ae
  • MD5: 2f699388bd6bdbb1df8b6170ff90de37
  • BLAKE2b-256: 2412ba8ea11b24ced9ab1dd1c5ebde5ef7799da0df37148237b5da55949a3fd5
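
If you want to check a downloaded file against the hashes listed above, a minimal sketch using Python's hashlib (adjust the path to wherever you saved the archive):

import hashlib

EXPECTED_SHA256 = "f5716a5e902fac19f851bbea59cde91a7c61b2113ee1cf7fce4f8f81eb7864ae"

# Hash the downloaded wheel and compare it against the published digest.
with open("video2numpy-2.3.3-py3-none-any.whl", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED_SHA256 else "hash mismatch")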

