Introduction

XRMoCap is an open-source, PyTorch-based codebase for multi-view motion capture. It is a part of the OpenXRLab project.

If you are interested in single-view motion capture, please refer to mmhuman3d for more details.

https://user-images.githubusercontent.com/26729379/187710195-ba4660ce-c736-4820-8450-104f82e5cc99.mp4

A detailed introduction can be found in introduction.md.

Major Features

  • Support popular multi-view motion capture methods for single person and multiple people

    XRMoCap reimplements SOTA multi-view motion capture methods, ranging from single person to multiple people. It supports an arbitrary number of calibrated cameras (more than two) and provides effective strategies for automatic camera selection.

  • Support keypoint-based and parametric human model-based multi-view motion capture algorithms

    XRMoCap supports two mainstream motion representations, keypoints3d and SMPL(-X) model, and provides tools for conversion and optimization between them.

  • Integrate optimization-based and learning-based methods into one modular framework

    XRMoCap decomposes the framework into several components, on top of which optimization-based and learning-based methods are integrated into a single framework. Users can easily prototype a customized multi-view mocap pipeline by choosing different components in configs.
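The config-driven assembly described above can be sketched with a small registry pattern: components register themselves under string names, and a config dict selects and parameterizes them. Everything below — the registry, the component names, and the config keys — is an illustrative sketch, not XRMoCap's actual API.

```python
# Minimal sketch of config-driven component assembly, in the spirit of
# XRMoCap's modular design. Names and keys here are illustrative only.

REGISTRY = {}

def register(name):
    """Decorator that records a component class under a string name."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap

@register("reprojection_error_selector")
class ReprojectionErrorSelector:
    def __init__(self, threshold):
        self.threshold = threshold

@register("triangulator")
class Triangulator:
    def __init__(self, camera_number):
        self.camera_number = camera_number

def build(cfg):
    """Instantiate a component from a config dict with a 'type' key."""
    cfg = dict(cfg)                       # copy so the original config survives
    cls = REGISTRY[cfg.pop("type")]
    return cls(**cfg)

# Swapping pipeline components is then just a matter of editing the config:
pipeline_cfg = {
    "selector": {"type": "reprojection_error_selector", "threshold": 0.1},
    "triangulator": {"type": "triangulator", "camera_number": 5},
}
pipeline = {name: build(cfg) for name, cfg in pipeline_cfg.items()}
```

Replacing `"reprojection_error_selector"` with another registered selector changes the pipeline without touching any code, which is the essence of the modular design.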

News

  • 2022-12-21: XRMoCap v0.7.0 is released. Major updates include:
    • Add mview_mperson_end2end_estimator for learning-based method
    • Add SMPLX support and allow smpl_data initialization in mview_sperson_smpl_estimator
    • Add multiple optimizers, detailed joint weights and priors, grad clipping for better SMPLify results
    • Add mediapipe_estimator for human keypoints2d perception
  • 2022-10-14: XRMoCap v0.6.0 is released. Major updates include:
    • Add 4D Association Graph, the first Python implementation to reproduce this algorithm
    • Add Multi-view multi-person top-down smpl estimation
    • Add reprojection error point selector
  • 2022-09-01: XRMoCap v0.5.0 is released. Major updates include:
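The SMPLify improvements listed for v0.7.0 (detailed joint weights, gradient clipping) boil down to a standard optimization recipe. The toy NumPy sketch below illustrates that recipe on a plain parameter vector; real SMPLify optimizes SMPL pose and shape parameters against keypoint losses, so all names and numbers here are illustrative assumptions, not XRMoCap code.

```python
# Toy illustration of weighted fitting with gradient-norm clipping,
# echoing the SMPLify improvements above. Purely illustrative.
import numpy as np

def fit(target, joint_weights, lr=0.1, max_norm=1.0, steps=500):
    """Minimize a weighted squared error with gradient-norm clipping."""
    params = np.zeros_like(target)
    for _ in range(steps):
        grad = 2.0 * joint_weights * (params - target)  # gradient of the loss
        norm = np.linalg.norm(grad)
        if norm > max_norm:              # clip large gradients for stable steps
            grad *= max_norm / norm
        params -= lr * grad
    return params

target = np.array([1.0, -0.5, 2.0])      # toy "keypoints" to fit
weights = np.array([1.0, 0.5, 2.0])      # per-joint confidence weights
fitted = fit(target, weights)            # converges close to target
```

Clipping bounds the size of each update, which is what keeps a fitting loop stable when a few badly detected keypoints produce huge gradients.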

Benchmark

More details can be found in benchmark.md.

Supported methods:

Supported datasets:

Getting Started

Please see getting_started.md for the basic usage of XRMoCap.

License

The license of our codebase is Apache-2.0. Note that this license applies only to code in our library; its dependencies are separate and individually licensed. We would like to pay tribute to the open-source implementations on which we rely. Please be aware that using the content of dependencies may affect the license of our codebase. Refer to LICENSE to view the full license.

Citation

If you find this project useful in your research, please consider citing:

@misc{xrmocap,
    title={OpenXRLab Multi-view Motion Capture Toolbox and Benchmark},
    author={XRMoCap Contributors},
    howpublished = {\url{https://github.com/openxrlab/xrmocap}},
    year={2022}
}

Contributing

We appreciate all contributions to improve XRMoCap. Please refer to CONTRIBUTING.md for the contributing guideline.

Acknowledgement

XRMoCap is an open-source project contributed to by researchers and engineers from both academia and industry. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new models.

Projects in OpenXRLab

  • XRPrimer: OpenXRLab foundational library for XR-related algorithms.
  • XRSLAM: OpenXRLab Visual-inertial SLAM Toolbox and Benchmark.
  • XRSfM: OpenXRLab Structure-from-Motion Toolbox and Benchmark.
  • XRLocalization: OpenXRLab Visual Localization Toolbox and Server.
  • XRMoCap: OpenXRLab Multi-view Motion Capture Toolbox and Benchmark.
  • XRMoGen: OpenXRLab Human Motion Generation Toolbox and Benchmark.
  • XRNeRF: OpenXRLab Neural Radiance Field (NeRF) Toolbox and Benchmark.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

  • xrmocap-0.8.0.tar.gz (131.1 kB)

Built Distribution

  • xrmocap-0.8.0-py2.py3-none-any.whl (209.1 kB)

File details

Details for the file xrmocap-0.8.0.tar.gz.

File metadata

  • Download URL: xrmocap-0.8.0.tar.gz
  • Upload date:
  • Size: 131.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.7.16

File hashes

Hashes for xrmocap-0.8.0.tar.gz

  • SHA256: 9704486341b8d5f2d3fb6444c6b49ac0bab9ae3ba6e1438a02c1fafefe41e667
  • MD5: 3e0556c942dea49ab62ccd96f1fa84f6
  • BLAKE2b-256: 59dbb27f003a9fed7d2db23f175650ba0e466bfa55c8eb8047d4f47bfd9a1a11

See more details on using hashes here.
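To check a downloaded archive against the digests above, hash the file locally and compare. A minimal sketch using Python's standard `hashlib` (the file path is a placeholder for wherever you saved the download):

```python
# Compute the SHA256 digest of a file, streaming it in chunks so large
# archives never need to fit in memory.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the hex SHA256 digest of the file at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the published digest, e.g.:
# sha256_of("xrmocap-0.8.0.tar.gz") == \
#     "9704486341b8d5f2d3fb6444c6b49ac0bab9ae3ba6e1438a02c1fafefe41e667"
```

A mismatch means the download is corrupted or has been tampered with and should be discarded.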

File details

Details for the file xrmocap-0.8.0-py2.py3-none-any.whl.

File metadata

  • Download URL: xrmocap-0.8.0-py2.py3-none-any.whl
  • Upload date:
  • Size: 209.1 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.7.16

File hashes

Hashes for xrmocap-0.8.0-py2.py3-none-any.whl

  • SHA256: 97b60f859b21293d5b9a9042f9a381b93c6e2f263268f1ef968a6abf737adc00
  • MD5: 1d9a3323a5fec9e92dee1a0a392b8359
  • BLAKE2b-256: 07fe38b1fafdce9aba679a38cf5db60a58eca03f79eabbf5cd11619aa943f575

See more details on using hashes here.
