A performant Structure from Motion library for Python
pyTheia - A Python Structure-from-Motion and Geometric Vision Swiss Knife
pyTheia is based on TheiaSfM. It contains Python bindings for most of the functionality of TheiaSfM and more.
The library is still in active development, and the interfaces are not yet all stable.
With pyTheia you have access to a variety of camera models, structure-from-motion pipelines, and geometric vision algorithms.
Differences to the original library TheiaSfM
pyTheia does not aim to be an end-to-end SfM library. For example, building robust feature detection and matching pipelines is usually application- and data-specific (e.g. image resolution, runtime, pose priors, invariances, ...). This includes image pre- and postprocessing.
pyTheia is rather a "Swiss knife" for quickly prototyping SfM-related reconstruction applications without sacrificing performance. For example, SOTA feature detection & matching and place recognition algorithms are based on deep learning and are easily usable from Python. However, using these algorithms from a C++ library is not always straightforward, and quick testing and prototyping in particular is cumbersome.
What was removed
Hence, we removed some libraries from the original TheiaSfM:
- SuiteSparse: Optional for Ceres; however, all GPL-related code was removed from src/math/matrix/sparse_cholesky_llt.cc (cholmod -> Eigen::SimplicialLDLT). This will probably be slower on large problems and potentially a bit less numerically stable.
- OpenImageIO: Was used for image input and output and for rectification.
- RapidJSON: Was used for camera intrinsics input and output; it is part of the cereal headers anyway.
- RocksDB: Was used for saving and loading extracted features efficiently.
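For intuition on the SuiteSparse swap: Eigen::SimplicialLDLT factorizes the (damped) normal equations solved in each bundle adjustment step. A dense numpy analogue of that step (an illustrative sketch, not pyTheia code) looks like this:

```python
import numpy as np

# Dense analogue of the LDLT/LLT step used in bundle adjustment:
# solve the damped normal equations (J^T J + lambda * I) x = J^T b.
rng = np.random.default_rng(0)
J = rng.standard_normal((50, 10))   # stand-in for a Jacobian
b = rng.standard_normal(50)
A = J.T @ J + 1e-6 * np.eye(10)     # damping keeps A positive definite

L = np.linalg.cholesky(A)           # A = L L^T (LDLT additionally avoids square roots)
y = np.linalg.solve(L, J.T @ b)     # forward substitution
x = np.linalg.solve(L.T, y)         # back substitution
print(np.allclose(A @ x, J.T @ b))  # True
```

In TheiaSfM the same role was played by cholmod's sparse factorization, which exploits sparsity and is typically faster on large problems.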
Changes to the original TheiaSfM library
- Global SfM algorithms:
- LiGT position solver
- Lagrange Dual rotation estimator
- Hybrid rotation estimator
- Possibility to fix multiple views in Robust_L1L2 solver
- Nonlinear translation solver can fix multiple views or estimate all remaining views in the reconstruction
- Camera models
- Double Sphere
- Extended Unified
- Bundle adjustment
- Using a homogeneous representation for scene points
- Extracting covariance information
- Possibility to add a depth prior to 3D points
- Position prior for camera poses (e.g. for GPS or known positions)
- General
- Added timestamp, position_prior_, position_prior_sqrt_information_ variables to the View class
- Added inverse_depth_, reference_descriptor, reference_bearing_ variables to the Track class
- Added covariance_, depth_prior_, depth_prior_variance_ to the Feature class
- Absolute Pose solvers
- SQPnP
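As a pointer for the newly added camera models, the Double Sphere projection maps a 3D point through two unit-sphere projections before the pinhole step. A standalone numpy sketch (the parameter names fx, fy, cx, cy, xi, alpha are the model's usual symbols, not pyTheia API names):

```python
import numpy as np

def double_sphere_project(p, fx, fy, cx, cy, xi, alpha):
    """Project a 3D point with the Double Sphere camera model."""
    x, y, z = p
    d1 = np.sqrt(x**2 + y**2 + z**2)
    d2 = np.sqrt(x**2 + y**2 + (xi * d1 + z)**2)
    denom = alpha * d2 + (1.0 - alpha) * (xi * d1 + z)
    return np.array([fx * x / denom + cx, fy * y / denom + cy])

# A point on the optical axis projects to the principal point.
uv = double_sphere_project([0.0, 0.0, 1.0], 600.0, 600.0, 500.0, 500.0, -0.2, 0.6)
print(uv)  # [500. 500.]
```

The Extended Unified model follows the same spirit with a different second-stage distortion; both handle very wide fields of view that a pinhole model cannot.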
Usage Examples
Creating a camera
The following example shows how to create a camera in pyTheia. You can construct it from a pt.sfm.CameraIntrinsicsPrior() or set all parameters directly on a pt.sfm.Camera() object.
import pytheia as pt
prior = pt.sfm.CameraIntrinsicsPrior()
prior.focal_length.value = [1000.]
prior.aspect_ratio.value = [1.]
prior.principal_point.value = [500., 500.]
prior.radial_distortion.value = [0., 0., 0., 0.]
prior.tangential_distortion.value = [0., 0.]
prior.skew.value = [0.]
prior.camera_intrinsics_model_type = 'PINHOLE'
#'PINHOLE', 'DOUBLE_SPHERE', 'EXTENDED_UNIFIED', 'FISHEYE', 'FOV', 'DIVISION_UNDISTORTION'
camera = pt.sfm.Camera()
camera.SetFromCameraIntrinsicsPriors(prior)
# the camera object also carries extrinsics information
camera.Position = [0,0,-2]
camera.SetOrientationFromAngleAxis([0,0,0.1])
# project with intrinsics image to camera coordinates
camera_intrinsics = camera.CameraIntrinsics()
pt2 = [100.,100.]
pt3 = camera_intrinsics.ImageToCameraCoordinates(pt2)
pt2 = camera_intrinsics.CameraToImageCoordinates(pt3)
# project with camera extrinsics
pt3_h = [1,1,2,1] # homogeneous 3d point
depth, pt2 = camera.ProjectPoint(pt3_h)
# get a ray from camera to 3d point in the world frame
ray = camera.PixelToUnitDepthRay(pt2)
pt3_h_ = ray*depth + camera.Position # == pt3_h[:3]
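For intuition, the projection and ray lookup above can be reproduced for a plain pinhole camera in numpy. This sketch mirrors the roles of ProjectPoint and PixelToUnitDepthRay but is not the pyTheia implementation; the helper names are illustrative:

```python
import numpy as np

K = np.array([[1000., 0., 500.],
              [0., 1000., 500.],
              [0., 0., 1.]])
R = np.eye(3)                       # world-to-camera rotation
position = np.array([0., 0., -2.])  # camera position in the world frame

def project_point(pt3):
    """Pinhole analogue of camera.ProjectPoint: returns depth and pixel."""
    p_cam = R @ (np.asarray(pt3) - position)
    uv_h = K @ p_cam
    return p_cam[2], uv_h[:2] / uv_h[2]

def pixel_to_unit_depth_ray(pt2):
    """Pinhole analogue of camera.PixelToUnitDepthRay."""
    ray_cam = np.linalg.inv(K) @ np.array([pt2[0], pt2[1], 1.0])
    return R.T @ (ray_cam / ray_cam[2])  # scaled to unit depth

pt3 = np.array([1., 1., 2.])
depth, pt2 = project_point(pt3)
pt3_back = pixel_to_unit_depth_ray(pt2) * depth + position
print(np.allclose(pt3_back, pt3))  # True
```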
Solve for absolute or relative camera pose
pyTheia integrates many performant geometric vision algorithms; have a look at the tests.
import pytheia as pt
# absolute pose
pose = pt.sfm.PoseFromThreePoints(pts2D, pts3D) # Kneip
pose = pt.sfm.FourPointsPoseFocalLengthRadialDistortion(pts2D, pts3D)
pose = pt.sfm.FourPointPoseAndFocalLength(pts2D, pts3D)
pose = pt.sfm.DlsPnp(pts2D, pts3D)
# ... and more
# relative pose
pose = pt.sfm.NormalizedEightPointFundamentalMatrix(pts2D, pts2D)
pose = pt.sfm.FourPointHomography(pts2D, pts2D)
pose = pt.sfm.FivePointRelativePose(pts2D, pts2D)
pose = pt.sfm.SevenPointFundamentalMatrix(pts2D, pts2D)
# ... and more
# ransac estimation
params = pt.solvers.RansacParameters()
params.error_thresh = 0.1
params.max_iterations = 100
params.failure_probability = 0.01
# absolute pose ransac
correspondences2D3D = pt.matching.FeatureCorrespondence2D3D(
pt.sfm.Feature(point1), pt.sfm.Feature(point2))
pnp_type = pt.sfm.PnPType.DLS # pt.sfm.PnPType.SQPnP, pt.sfm.PnPType.KNEIP
success, abs_ori, summary = pt.sfm.EstimateCalibratedAbsolutePose(
params, pt.sfm.RansacType(0), pnp_type, correspondences2D3D)
success, abs_ori, summary = pt.sfm.EstimateAbsolutePoseWithKnownOrientation(
params, pt.sfm.RansacType(0), correspondences2D3D)
# ... and more
# relative pose ransac
correspondences2D2D = pt.matching.FeatureCorrespondence(
pt.sfm.Feature(point1), pt.sfm.Feature(point2))
success, rel_ori, summary = pt.sfm.EstimateRelativePose(
params, pt.sfm.RansacType(0), correspondences2D2D)
success, rad_homog, summary = pt.sfm.EstimateRadialHomographyMatrix(
params, pt.sfm.RansacType(0), correspondences2D2D)
success, rad_homog, summary = pt.sfm.EstimateFundamentalMatrix(
params, pt.sfm.RansacType(0), correspondences2D2D)
# ... and more
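The RansacParameters above have their usual meaning (inlier threshold, iteration cap, allowed failure probability). A minimal, self-contained RANSAC loop on a toy 2D line-fitting problem illustrates the mechanism; this is an illustration, not pyTheia code:

```python
import numpy as np

def ransac_line(points, error_thresh=0.1, max_iterations=100, seed=0):
    """Fit y = m*x + c while ignoring outliers (minimal RANSAC sketch)."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(max_iterations):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):
            continue  # degenerate minimal sample
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        residuals = np.abs(points[:, 1] - (m * points[:, 0] + c))
        inliers = np.count_nonzero(residuals < error_thresh)
        if inliers > best_inliers:
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

xs = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([xs, 2.0 * xs + 1.0])  # inliers on y = 2x + 1
pts[::10, 1] += 5.0                          # inject a few gross outliers
(m, c), n_inliers = ransac_line(pts)
```

The pyTheia estimators run the same sample-score loop internally, just with minimal solvers such as P3P or the five-point algorithm in place of the two-point line fit.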
Bundle Adjustment of views or points
import numpy as np
import pytheia as pt
recon = pt.sfm.Reconstruction()
# add some views and points
view_id = recon.AddView()
# ...
track_id = recon.AddTrack()
# ...
covariance = np.eye(2) * 0.5**2
point = [200,200]
recon.AddObservation(track_id, view_id, pt.sfm.Feature(point, covariance))
# robust BA
opts = pt.sfm.BundleAdjustmentOptions()
opts.robust_loss_width = 1.345
opts.loss_function_type = pt.sfm.LossFunctionType.HUBER
res = pt.sfm.BundleAdjustReconstruction(opts, recon)
res = pt.sfm.BundleAdjustPartialReconstruction(opts, {view_ids}, {track_ids}, recon)
res = pt.sfm.BundleAdjustPartialViewConstant(opts, {var_view_ids}, {const_view_ids}, recon)
# optimize absolute pose on normalized 2D 3D correspondences
res = pt.sfm.OptimizeAbsolutePoseOnNormFeatures(
[pt.sfm.FeatureCorrespondence2D3D], R_init, p_init, opts)
# bundle adjust camera pose only
res = pt.sfm.BundleAdjustView(recon, opts, view_id)
res = pt.sfm.BundleAdjustViewWithCov(recon, view_id)
res = pt.sfm.BundleAdjustViewsWithCov(recon, opts, [view_id1, view_id2])
# optimize structure only
res = pt.sfm.BundleAdjustTrack(recon, opts, track_id)
res = pt.sfm.BundleAdjustTrackWithCov(recon, opts, track_id)
res = pt.sfm.BundleAdjustTracksWithCov(recon, opts, [track_id1, track_id2])
# two view optimization
res = pt.sfm.BundleAdjustTwoViewsAngular(recon, [pt.sfm.FeatureCorrespondence], pt.sfm.TwoViewInfo())
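The robust_loss_width above is the scale parameter of the robust loss; 1.345 is the classic Huber constant that gives roughly 95% efficiency on Gaussian residuals. A generic numpy sketch of the Huber loss (illustrative, not the Ceres implementation):

```python
import numpy as np

def huber(r, width=1.345):
    """Classic Huber loss: quadratic near zero, linear for |r| > width."""
    r = np.abs(r)
    return np.where(r <= width, 0.5 * r**2, width * (r - 0.5 * width))

# Small residuals are penalized quadratically, large ones only linearly,
# so a single gross outlier cannot dominate the bundle adjustment cost.
losses = huber(np.array([0.5, 1.345, 10.0]))
```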
Reconstruction example: Global, Hybrid or Incremental SfM using OpenCV feature detection and matching
Have a look at the short example: sfm_pipeline.py
import pytheia as pt
# use your favourite feature extractor and matcher
# (can also be any deep-learning-based method)
view_graph = pt.sfm.ViewGraph()
recon = pt.sfm.Reconstruction()
track_builder = pt.sfm.TrackBuilder(3, 30)
# ... match some features to find putative correspondences
success, twoview_info, inlier_indices = pt.sfm.EstimateTwoViewInfo(options, prior, prior, correspondences)
# ... get filtered feature correspondences and add them to the track builder
correspondences = [pt.matching.FeatureCorrespondence(
    pt.sfm.Feature(point1), pt.sfm.Feature(point2))
    for point1, point2 in verified_matches]
for match in correspondences:
    track_builder.AddFeatureCorrespondence(view_id1, match.feature1,
                                           view_id2, match.feature2)
# ... Build Tracks
track_builder.BuildTracks(recon)
options = pt.sfm.ReconstructionEstimatorOptions()
options.num_threads = 4
options.rotation_filtering_max_difference_degrees = 10.0
options.bundle_adjustment_robust_loss_width = 3.0
options.bundle_adjustment_loss_function_type = pt.sfm.LossFunctionType(1)
options.subsample_tracks_for_bundle_adjustment = True
if reconstructiontype == 'global':
options.filter_relative_translations_with_1dsfm = True
reconstruction_estimator = pt.sfm.GlobalReconstructionEstimator(options)
elif reconstructiontype == 'incremental':
reconstruction_estimator = pt.sfm.IncrementalReconstructionEstimator(options)
elif reconstructiontype == 'hybrid':
reconstruction_estimator = pt.sfm.HybridReconstructionEstimator(options)
recon_sum = reconstruction_estimator.Estimate(view_graph, recon)
pt.io.WritePlyFile("test.ply", recon, [255,0,0],2)
pt.io.WriteReconstruction(recon, "reconstruction_file")
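Conceptually, the TrackBuilder used above chains pairwise matches into multi-view tracks, i.e. connected components of the match graph. A minimal dict-based union-find sketch of that idea (not the pyTheia implementation):

```python
# Each observation is (view_id, feature_id); matches connect observations.
# A track is a connected component of the match graph.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Three hypothetical matches: two chain into one 3-view track,
# the third forms a separate 2-view track.
matches = [((0, 5), (1, 7)), ((1, 7), (2, 3)), ((0, 9), (2, 4))]
for obs_a, obs_b in matches:
    union(obs_a, obs_b)

tracks = {}
for obs in list(parent):
    tracks.setdefault(find(obs), []).append(obs)
print(sorted(len(t) for t in tracks.values()))  # [2, 3]
```

The real TrackBuilder additionally enforces the minimum and maximum track lengths passed to its constructor and rejects inconsistent tracks (two features from the same view).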
Building
This section describes how to build locally on Ubuntu or on WSL2, both with sudo rights. The basic dependency is the ceres-solver; installing it will also install the necessary dependencies for pyTheia:
- gflags
- glog
- Eigen
sudo apt install cmake build-essential
# cd to your favourite library folder
mkdir LIBS
cd LIBS
# eigen
git clone https://gitlab.com/libeigen/eigen
cd eigen && git checkout 3.3.9
mkdir -p build && cd build && cmake .. && sudo make install
# libgflags libglog libatlas-base-dev
sudo apt install libgflags-dev libgoogle-glog-dev libatlas-base-dev
# ceres solver
cd ../..  # back to the LIBS folder
git clone https://ceres-solver.googlesource.com/ceres-solver
cd ceres-solver && git checkout 2.0.0 && mkdir build && cd build
cmake .. -DBUILD_TESTING=OFF -DBUILD_EXAMPLES=OFF -DBUILD_BENCHMARKS=OFF
make -j && sudo make install
How to build Python wheels
Local build
Tested on Ubuntu. In your Python >= 3.5 environment of choice run:
sh build_and_install.sh
With Docker
The Docker build will produce manylinux wheels for Linux (Python 3.5-3.9).
docker build -t pytheia:0.1 .
docker run -it pytheia:0.1
All the wheels will then be inside the container in the folder /home/wheelhouse. Open a second terminal and run:
docker ps # this will give you a list of running containers to find the correct CONTAINER_ID
docker cp CONTAINER_ID:/home/wheelhouse /path/to/result/folder/pytheia_wheels