Georgia Tech Structure from Motion (GTSfM) Library

Platform: Ubuntu 20.04.3
Build Status: Linux CI

What is GTSfM?

GTSfM is an end-to-end SfM pipeline based on GTSAM. GTSfM was designed from the ground up to natively support parallel computation using Dask.
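For intuition, here is a minimal illustration of the Dask model GTSfM builds on (this is not GTSfM code): independent front-end tasks, such as matching different image pairs, can be declared lazily and then executed in parallel by a Dask scheduler.

import dask

@dask.delayed
def match_pair(i1, i2):
    # Placeholder for expensive front-end work (e.g. matching image i1 to image i2).
    return (i1, i2)

# Declare all pairwise tasks lazily, then let the Dask scheduler run them in parallel.
tasks = [match_pair(i1, i2) for i1 in range(4) for i2 in range(i1 + 1, 4)]
results = dask.compute(*tasks)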

License

The majority of our code is governed by an MIT license and is suitable for commercial use. However, certain implementations featured in our repo (SuperPoint, SuperGlue) are governed by a non-commercial license and may not be used commercially.

Installation

GTSfM requires no compilation, as Python wheels are provided for GTSAM.

To install GTSfM, first create a conda environment.

Linux

On Linux, with CUDA support:

conda env create -f environment_linux.yml
conda activate gtsfm-v1 # you may need "source activate gtsfm-v1" depending upon your bash and conda set-up

Mac

On Mac OSX, there is no CUDA support, so run:

conda env create -f environment_mac.yml
conda activate gtsfm-v1

Completing Installation

Now, install gtsfm as a module:

pip install -e .

Make sure that you can run python -c "import gtsfm; import gtsam; print('hello world')", and you are good to go!

Usage Guide (Running 3d Reconstruction)

Before running reconstruction, if you intend to use modules with pre-trained weights, such as SuperPoint, SuperGlue, or PatchmatchNet, please first run:

./download_model_weights.sh

To run SfM on a dataset with only an image directory and EXIF data, with image file names ending in "jpg", create the following file structure:

└── {DATASET_NAME}
       ├── images
               ├── image1.jpg
               ├── image2.jpg
               ├── image3.jpg

and run

python gtsfm/runner/run_scene_optimizer_olssonloader.py --config_name {CONFIG_NAME} --dataset_root {DATASET_ROOT} --image_extension jpg --num_workers {NUM_WORKERS}

For example, if you had 4 cores available and wanted to use the Deep Front-End (recommended) on the "door" dataset, you should run:

python gtsfm/runner/run_scene_optimizer_olssonloader.py --dataset_root tests/data/set1_lund_door --image_extension JPG --config_name deep_front_end.yaml --num_workers 4

(or however many workers you desire).

You can view/monitor the distributed computation using the Dask dashboard.
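If you want to locate the dashboard manually, the sketch below uses the standard dask.distributed API (not the GTSfM runner itself) to start a local cluster and print its dashboard URL, which is typically http://localhost:8787/status:

from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=4)  # workers analogous to --num_workers
client = Client(cluster)
print(client.dashboard_link)  # open this URL in a browser to monitor tasks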

Currently we require EXIF data embedded in your images (alternatively, you can provide ground-truth intrinsics in the expected format for an Olsson dataset, or COLMAP-exported text data, etc.).
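As a quick sanity check (not part of GTSfM), you can use Pillow to verify that an image actually carries an EXIF focal length, which is the field typically needed to derive approximate intrinsics; the helper name below is hypothetical:

from PIL import Image

def has_exif_focal_length(path):
    # FocalLength (tag 0x920A) lives in the Exif sub-IFD (tag 0x8769).
    exif = Image.open(path).getexif()
    return 0x920A in exif.get_ifd(0x8769)

print(has_exif_focal_length("{DATASET_NAME}/images/image1.jpg"))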

If you would like to compare GTSfM output with COLMAP output, please run:

python gtsfm/runner/run_scene_optimizer_colmaploader.py --config_name {CONFIG_NAME} --images_dir {IMAGES_DIR} --colmap_files_dirpath {COLMAP_FILES_DIRPATH} --image_extension jpg --num_workers {NUM_WORKERS} --max_frame_lookahead {MAX_FRAME_LOOKAHEAD}

where COLMAP_FILES_DIRPATH is the directory where .txt files such as cameras.txt and images.txt have been saved.
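For reference, COLMAP's text model export produces a directory like the following:

└── {COLMAP_FILES_DIRPATH}
       ├── cameras.txt
       ├── images.txt
       ├── points3D.txt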

To visualize the result using Open3D, run:

python gtsfm/visualization/view_scene.py --rendering_library open3d --rendering_style point

For users who are working with the same dataset repeatedly, we provide functionality to cache front-end results so that subsequent runs are much faster. For more information, please refer to gtsfm/frontend/cacher/README.md.

Repository Structure

GTSfM is designed in an extremely modular way. Each module can be swapped out with a new one, as long as it implements the API of the module's abstract base class (see the sketch after the list below). The code is organized as follows:

  • gtsfm: source code, organized as:
    • averaging
      • rotation: rotation averaging implementations (Shonan, Chordal, etc)
      • translation: translation averaging implementations (1d-SFM, etc)
    • bundle: bundle adjustment implementations
    • common: basic classes used throughout GTSfM, such as Keypoints, Image, SfmTrack2d, etc
    • data_association: 3d point triangulation (DLT) w/ or w/o RANSAC, from 2d point-tracks
    • densify
    • frontend: SfM front-end code, including:
      • detector: keypoint detector implementations (DoG, etc)
      • descriptor: feature descriptor implementations (SIFT, SuperPoint etc)
      • matcher: descriptor matching implementations (Superglue, etc)
      • verifier: 2d-correspondence verifier implementations (Degensac, OA-Net, etc)
      • cacher: Cache implementations for different stages of the front-end.
    • loader: image data loaders
    • utils: utility functions such as serialization routines and pose comparisons, etc
  • tests: unit tests on every function and module
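Below is a hedged sketch of the plug-in pattern described above: a custom keypoint detector that subclasses the front-end detector base class. The class and method names (DetectorBase, detect, Keypoints) reflect our reading of the code layout and may differ from the current API, so treat them as assumptions.

import numpy as np

from gtsfm.common.image import Image
from gtsfm.common.keypoints import Keypoints
from gtsfm.frontend.detector.detector_base import DetectorBase  # assumed module path

class MyDetector(DetectorBase):
    """A custom keypoint detector that can be swapped into the front-end."""

    def detect(self, image: Image) -> Keypoints:
        # Run your own detector here; return an Nx2 array of (x, y) coordinates.
        coordinates = np.zeros((0, 2))
        return Keypoints(coordinates=coordinates)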

Contributing

Contributions are always welcome! Please be aware of our contribution guidelines for this project.

Citing this work

Open-source Python implementation:

@misc{GTSFM,
    author = {Ayush Baid and Travis Driver and Fan Jiang and Akshay Krishnan and John Lambert
       and Ren Liu and Aditya Singh and Neha Upadhyay and Aishwarya Venkataramanan
       and Sushmita Warrier and Jon Womack and Jing Wu and Xiaolong Wu and Frank Dellaert},
    title = { {GTSFM}: Georgia Tech Structure from Motion},
    howpublished={\url{https://github.com/borglab/gtsfm}},
    year = {2021}
}

Note: authors are listed in alphabetical order (by last name).

Compiling Additional Verifiers

On Linux, we have made pycolmap's LORANSAC available on PyPI. However, on Mac, pycolmap must be built from source. See the instructions here.
