
A multi-stage object tracking framework

Project description

Unified Tracking in PyTorch

This package provides a robust object tracking framework for PyTorch. It supports multi-stage and cascaded tracking algorithms under various modular configurations and assignment algorithms. This open-source implementation is designed to facilitate research in computer vision and machine learning.

Installation

Ensure your environment meets the following requirements:

  • python >= 3.9
  • torch >= 2.0

Install via PyPI using the following command:

pip install unitrack

Usage

The following example demonstrates object tracking across a sequence with detections that have category and position fields. This script tracks objects, updates internal state buffers for each frame, and prints the assigned IDs.

import torch

import unitrack

# Detections from 10 video frames having fields `category` and `position`.
dtype = torch.float32
frames = [
    {
        "category": torch.ones(1 + frame * 2, dtype=torch.long),
        "position": torch.arange(1 + frame * 2, dtype=dtype).unsqueeze(1),
    }
    for frame in range(10)
]

# Multi-stage tracker with two value fields that map the detections' data
# to keys `key_pos` and `key_cat`, where the association stage calculates
# the Euclidean distance of the positions between frames and subsequently
# performs a Jonker-Volgenant assignment using the resulting cost matrix.
tracker = unitrack.MultiStageTracker(
    fields={
        "key_pos": unitrack.fields.Value(key="position"),
        "key_cat": unitrack.fields.Value(key="category"),
    },
    stages=[
        unitrack.stages.Association(
            cost=unitrack.costs.Distance("key_pos"),
            assignment=unitrack.assignment.Jonker(10),
        )
    ],
)

# Tracking memory that stores the relevant information to compute the
# cost matrix in the module buffers. States are observed at each frame,
# where in this case no state prediction is performed.
memory = unitrack.TrackletMemory(
    states={
        "key_pos": unitrack.states.Value(dtype=dtype),
        "key_cat": unitrack.states.Value(dtype=torch.long),
    }
)

# Iterate over frames, performing state observation, tracking and state
# propagation at every step.
for frame, detections in enumerate(frames):
    # Create a context object storing (meta)data about the current
    # frame, e.g. feature maps, instance detections and the frame number.
    ctx = unitrack.Context(None, detections, frame=frame)
    
    # Observe the states in memory. This can be extended to 
    # run a prediction step (e.g. Kalman filter) 
    obs = memory.observe()
    
    # Assign detections in the current frame to observations of
    # the state memory, giving an updated observations object
    # and the remaining unassigned new detections.
    obs, new = tracker(ctx, obs)
    
    # Update the tracking memory. Buffers are updated to match
    # the data in `obs`, and new IDs are generated for detection
    # data that could not be assigned in `new`. The returned tensor
    # contains ordered tracklet IDs for the detections assigned
    # to the frame context `ctx`.
    ids = memory.update(ctx, obs, new)

    print(f"Assigned tracklet IDs {ids.tolist()} @ frame {frame}")
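To make the association stage concrete, the following sketch reproduces its core computation outside of unitrack: a pairwise Euclidean distance cost matrix between previous and current positions, solved with SciPy's `linear_sum_assignment` (a modified Jonker-Volgenant algorithm). The arrays and threshold here are illustrative stand-ins, not unitrack internals.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Positions of tracked objects from the previous frame (N x 1) and
# detections in the current frame (M x 1), mirroring the `position`
# field of the example above.
prev = np.array([[0.0], [1.0], [2.0]])
curr = np.array([[1.1], [0.2], [2.05], [5.0]])

# Cost matrix of pairwise Euclidean distances (N x M).
cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=-1)

# Solve the linear assignment problem; SciPy implements a modified
# Jonker-Volgenant algorithm, comparable to the tracker's assignment stage.
rows, cols = linear_sum_assignment(cost)

# Gate out assignments whose cost exceeds a threshold (cf. `Jonker(10)`).
matches = [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= 10.0]
print(matches)  # (0, 1), (1, 0), (2, 2); detection 3 is left unmatched
```

Detections that survive the gate update existing tracklets; the remainder (here detection 3, far from every tracklet) would spawn new IDs, analogous to the `new` object returned by the tracker.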

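The example observes states directly, but as noted in the inline comments, `memory.observe()` can be extended with a prediction step. A minimal sketch of a constant-velocity prediction is shown below; the `positions` and `velocities` buffers and the `dt` parameter are illustrative assumptions, not part of the unitrack API.

```python
import numpy as np

# Hypothetical state buffers for N tracklets (names are illustrative).
positions = np.array([[0.0], [2.0]])    # last observed positions (N x 1)
velocities = np.array([[1.0], [-0.5]])  # estimated velocities (N x 1)
dt = 1.0                                # time elapsed since the last frame

# Predict where each tracklet should be in the current frame. The
# association cost would then be computed against these predictions
# rather than the raw last observations.
predicted = positions + dt * velocities
```

A full Kalman filter would additionally track per-state uncertainty and correct the prediction with each new observation; this sketch covers only the propagation step.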
Documentation

Technical documentation is provided inline with the source code.

Contribution

Contributions that maintain backwards compatibility are welcome.

Citation

If you utilize this package in your research, please cite the following paper:

@article{unifiedperception2023,
    title={Unified Perception: Efficient Depth-Aware Video Panoptic Segmentation with Minimal Annotation Costs},
    author={Kurt Stolle and Gijs Dubbelman},
    journal={arXiv preprint arXiv:2303.01991},
    year={2023}
}

The full paper is available on arXiv (arXiv:2303.01991).

License

This project is licensed under the MIT License.

Recommendations

The contents of this repository are designed for research purposes and are not recommended for use in production environments. They have not undergone testing for scalability or stability in a commercial context. Please use this tool within its intended scope.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

unitrack-4.7.2.tar.gz (27.2 kB view details)

Uploaded Source

Built Distribution

unitrack-4.7.2-py3-none-any.whl (28.9 kB view details)

Uploaded Python 3

File details

Details for the file unitrack-4.7.2.tar.gz.

File metadata

  • Download URL: unitrack-4.7.2.tar.gz
  • Upload date:
  • Size: 27.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for unitrack-4.7.2.tar.gz
Algorithm Hash digest
SHA256 41993dc4d9ca863e253c5d198311404224c0f4f1fde17dbae9e06b0bb60a0e40
MD5 14dd23c95cd1b327333903bb27baf0cb
BLAKE2b-256 9b202ef05aba0d9473c9ce2bf43c14d834a4dd4599cf613670113e0d666099cd

See more details on using hashes here.

File details

Details for the file unitrack-4.7.2-py3-none-any.whl.

File metadata

  • Download URL: unitrack-4.7.2-py3-none-any.whl
  • Upload date:
  • Size: 28.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for unitrack-4.7.2-py3-none-any.whl
Algorithm Hash digest
SHA256 588854d4e0331d83e77da973f363d777fda998e71a8c02049446383ccfd06da4
MD5 ed2b5dc3d38668ca9c40d299985c6288
BLAKE2b-256 9378837de94fc854da334adbdfed96147cf980cce7c7ed4174ce3c4c9f69894c

