# core_toolbox_python

A toolbox of camera, Plücker, and transformation utilities.

A lightweight Python toolbox providing utilities for camera intrinsics, Plücker-line representations, and 3D transformation matrices. The package is organized into the following submodules:
- Camera.Intrinsics: Classes for intrinsic camera matrices (Matlab/OpenCV conventions), radial distortion, ray generation, and JSON serialization.
- Plucker.Line: A `Line` class to represent 3D lines (start/end points or Plücker coordinates), intersection computations, line fitting, and basic plotting utilities.
- Transformation.TransformationMatrix: A 4×4 rigid-body transformation class with support for Euler angles (radians/degrees), quaternions, Bundler-format I/O, JSON serialization, inversion, chaining, and plotting (matplotlib/Open3D).
- ICP.FastICP: Aligns two point clouds by randomly subsampling both clouds and running ICP on the samples.
- ICP.ICP: Aligns two point clouds by running ICP on the full clouds.
- ICP.ICP_wx: A minimalistic UI for visualising two point clouds and selecting points to support the initial alignment; either FastICP or ICP is then run to refine the alignment. An Open3D plot is used to highlight the alignment quality.
⚠️ On Linux, wxPython may require a distribution-specific wheel. If installation fails, run:

```shell
pip install -U -f https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-20.04 wxPython
```

Or let the package auto-repair on first import.
## Features
### Intrinsics & Distortion
- Create and manipulate camera intrinsic matrices in both Matlab and OpenCV formats.
- Store and serialize radial‐distortion coefficients.
- Compute focal length in millimeters (if pixel size is known).
- Compute perspective (field‐of‐view) angles.
- Generate per‐pixel rays as Plücker‐line objects.
- Save/load intrinsic parameters to/from JSON.
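For reference, the perspective-angle and focal-length conversions follow directly from the pinhole model. A minimal sketch of that arithmetic (the function names below are illustrative, not the package API):

```python
import math

def horizontal_fov_degrees(fx: float, width: int) -> float:
    """Horizontal field of view of a pinhole camera, in degrees."""
    return math.degrees(2.0 * math.atan(width / (2.0 * fx)))

def focal_length_mm(fx: float, pixel_size_mm: float) -> float:
    """Focal length in millimeters, given fx in pixels and the pixel pitch in mm."""
    return fx * pixel_size_mm

# Example: fx = 1770 px, 1440 px wide image, 3.45 µm pixels (assumed values)
fov = horizontal_fov_degrees(1770, 1440)   # roughly 44 degrees
f_mm = focal_length_mm(1770, 0.00345)      # roughly 6.1 mm
```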
### Plücker-Line Representation
- Represent a set of 3D rays or line segments via Plücker coordinates.
- Compute shortest‐distance intersections between two sets of lines.
- Fit a line to a cloud of 3D points (including placeholder methods for RANSAC, to be implemented).
- Compute angles between two lines.
- Basic 3D plotting of lines (matplotlib).
### Transformation Matrices
- Encapsulate a 4×4 rigid transformation (rotation + translation).
- Get/set translation (`.T`) and rotation (`.R`, a 3×3 matrix).
- Get/set Euler angles in radians (`.angles`) or degrees (`.angles_degree`) via SciPy.
- Get/set the quaternion (`.quaternion`) for the rotation.
- Apply transformations to point clouds.
- Invert transformations; chain multiple transformations with `@`.
- Save/load transformations in JSON.
- Save/load Bundler v0.3 camera entries for MeshLab (single‐camera mode).
- Plot coordinate frames in 3D (matplotlib, or Open3D if available).
## Requirements
- Python ≥ 3.7
- NumPy
- Matplotlib
- SciPy (especially `scipy.spatial.transform.Rotation`)
- scikit-learn (for any future line-fitting routines)
(All dependencies are declared in `pyproject.toml` or `setup.py` under `dependencies`.)
## Installation
1. Clone the repository:

   ```shell
   git clone https://github.com/yourusername/CTPv.git
   cd CTPv
   ```

2. Build a wheel (PEP 517):

   ```shell
   python -m pip install --upgrade pip
   pip install build
   python -m build --wheel
   ```

   A `.whl` file will appear under `dist/`.

3. Install from the local wheel:

   ```shell
   pip install dist/CTPv-0.1.0-py3-none-any.whl
   ```

4. Or install in editable/development mode:

   ```shell
   pip install -e .
   ```

   This lets you modify the source code and have changes reflected immediately.
## Module Overview
### Camera.Intrinsics

File: `CTPv/Camera/Intrinsics.py`
- **Class `RadialDistortion`**
  - Holds distortion coefficients `k1`, `k2`, `k3`.
  - `set_from_list([k1, k2, k3])`: assign a three-element coefficient list.
- **Class `IntrinsicMatrix`**
  - Attributes:
    - `fx, fy, cx, cy, s` (standard pinhole-camera parameters).
    - `width, height` (image resolution).
    - `pixel_size` (in millimeters, e.g. sensor pixel pitch).
    - `RadialDistortion`: an instance of `RadialDistortion`.
    - `info`: optional metadata (e.g. camera/lens ID).
  - Properties:
    - `.MatlabIntrinsics` (getter/setter): 3×3 matrix in Matlab convention (`[fx s 0; 0 fy 0; cx cy 1]`).
    - `.OpenCVIntrinsics` (getter/setter): 3×3 matrix in OpenCV convention (`[fx 0 cx; 0 fy cy; 0 0 1]`).
    - `.focal_length_mm`: returns `(fx · pixel_size, fy · pixel_size)`.
    - `.PerspectiveAngle` (getter/setter): horizontal or vertical field of view (degrees) based on `width`/`height` vs `fx`/`fy`.
  - Methods:
    - `.CameraParams2Intrinsics(CameraParams)`: load intrinsics from an external camera-parameters object (e.g. one exposing `CameraParams.IntrinsicMatrix` and `CameraParams.ImageSize`).
    - `.Intrinsics2CameraParams()`: return a dictionary `{IntrinsicMatrix: …, ImageSize: …, RadialDistortion: …}`.
    - `.ScaleIntrinsics(s)`: multiply `fx, fy, cx, cy, width, height` by scale `s`.
    - `.generate_rays() → Line`: produce a `Line` object where each row corresponds to a 3D ray originating from a pixel center; applies radial undistortion (if defined).
    - `.save_intrinsics_to_json(filename)`: write a JSON file containing OpenCV intrinsics, distortion, resolution, pixel size, and `info`.
    - `.load_intrinsics_from_json(filename)`: read a JSON file and populate intrinsics, distortion, `width`, `height`, `pixel_size`, `info`.
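With zero skew, the Matlab and OpenCV conventions are exact transposes of each other, corresponding to row-vector versus column-vector multiplication. A quick NumPy sketch of the relationship (illustrative values, not the package implementation):

```python
import numpy as np

fx, fy, cx, cy = 1200.0, 1200.0, 640.0, 360.0

# OpenCV convention (column vectors): [fx 0 cx; 0 fy cy; 0 0 1]
K_opencv = np.array([[fx, 0., cx],
                     [0., fy, cy],
                     [0., 0., 1.]])

# Matlab convention (row vectors) is the transpose (assuming zero skew)
K_matlab = K_opencv.T

# Projecting a 3D point in the camera frame gives the same pixel either way
X = np.array([0.1, -0.2, 2.0])
uvw = K_opencv @ X              # column-vector form
uv = uvw[:2] / uvw[2]
uvw_row = X @ K_matlab          # row-vector form
uv_row = uvw_row[:2] / uvw_row[2]
assert np.allclose(uv, uv_row)
```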
- Example (at bottom of file):

```python
if __name__ == "__main__":
    I = IntrinsicMatrix()
    I.info = "testCamera"
    I.fx = I.fy = 1770
    I.width, I.height = 1440, 1080
    I.cx, I.cy = 685, 492
    I.RadialDistortion.set_from_list([-0.5, 0.18, 0])
    I.save_intrinsics_to_json("test.json")
    rays = I.generate_rays()  # Plücker-line set
    I2 = IntrinsicMatrix().load_intrinsics_from_json("test.json")
    # ... compute intersections, etc.
```
### Plucker.Line

File: `CTPv/Plucker/Line.py`
- **Function `intersection_between_2_lines(L1, L2)`**
  - Computes closest-point midpoints and shortest distances between each corresponding pair of rays in two `Line` objects.
  - Inputs: `L1`, `L2`: each a `Line` instance with `Ps` (start points) and `V` (direction vectors).
  - Returns:
    - `Points`: an `(N, 3)` array of midpoints between ray *i* from `L1` and ray *i* from `L2`.
    - `distances`: an `(N,)` array of shortest distances.
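For intuition, the midpoint and distance returned for each ray pair come from the standard closest-points construction on two (possibly skew) lines. A standalone NumPy sketch of that computation for a single pair (a hypothetical helper, not the package function):

```python
import numpy as np

def closest_point_midpoint(p1, v1, p2, v2):
    """Midpoint and shortest distance between two 3D lines p_i + t * v_i."""
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    w0 = p1 - p2
    a, b, c = v1 @ v1, v1 @ v2, v2 @ v2
    d, e = v1 @ w0, v2 @ w0
    denom = a * c - b * b            # zero when the lines are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    c1 = p1 + t1 * v1                # closest point on line 1
    c2 = p2 + t2 * v2                # closest point on line 2
    return (c1 + c2) / 2.0, np.linalg.norm(c1 - c2)

# x-axis vs. the line x = 0, z = 1 running along y
mid, dist = closest_point_midpoint(
    np.array([0., 0., 0.]), np.array([1., 0., 0.]),
    np.array([0., 1., 1.]), np.array([0., 1., 0.]))
# mid = [0, 0, 0.5], dist = 1.0
```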
- **Class `Line`**
  - Attributes:
    - `Ps`: `(N, 3)` array of start (origin) points of each line/ray.
    - `Pe`: `(N, 3)` array of end points (so direction = `Pe − Ps`).
  - Properties:
    - `.V` (getter): normalized direction vectors for each ray (`Pe − Ps`, normalized row-wise).
    - `.V` (setter): sets `Pe = Ps + new_direction`.
    - `.Plucker` (getter): concatenates direction `V` and moment `U = Ps × (Ps + V)` into an `(N, 6)` array.
    - `.Plucker` (setter): given an `(N, 6)` array, recovers `Ps` and `V` via cross-product inversion.
    - `.Plucker2` (alternative Plücker ordering): stores `(moment = Ps × Pe, direction = Pe − Ps)`.
  - Methods:
    - `.GetAngle()`: returns the angle (in degrees) between each ray and the world Z unit vector.
    - `.TransformLines(H)`: applies a `TransformationMatrix` `H` to both `Ps` and `Pe`.
    - `.plot(limits=None, colors=None, …)`: flexible helper that draws any number of lines in 3D (within bounds).
    - `.PlotLine(colori='g', linewidth=2)`: simpler per-line plotting (downsamples if > 500 rays).
    - `.FindXYZNearestLine(XYZ)`: given a 3D point cloud `XYZ`, returns the index of the closest ray.
    - `.FitLine(XYZ)`: placeholder for a least-squares fit to 3D points (calls `_fitline3d`).
    - `.FitLineRansac(XYZ, t=10)`: placeholder for a RANSAC line fit (calls `_ransac_fit_line`).
    - `.NormaliseLine()`: project all line origins so that `z = 0`.
    - `.DistanceLinePoint(XYZ)`: shortest distance from each line to each query point in `XYZ`.
    - `.Lenght()`: length of each line segment (`‖Pe − Ps‖`).
    - `@staticmethod FromStartEnd(start, end)`: build a `Line` from start/end points.
    - `@staticmethod FromPlucker(VU)`: build a `Line` from an `(N, 6)` Plücker array.
    - `.AngleBetweenLines(L1, L2)`: returns the angle (radians, degrees) between two `Line` objects (single-ray version).
    - `.GenerateRay(I, uv)`: generate rays through pixel coordinates `uv` using intrinsics `I`.
    - Internal helpers: `_normalize_vectors`, `_is_within_bounds`, `_downsample`, `_fitline3d`, `_ransac_fit_line`, `_homogeneous_transform`, etc. (some are stubs for future extension).
- Example (at bottom of file):

```python
if __name__ == "__main__":
    L = Line()
    L.Ps = np.array([[1, 1, 0]])
    L.Pe = np.array([[2, 1, 0]])
    print(L.V)  # direction vector
    L.PlotLine()
    L2 = Line()
    L2.Ps, L2.Pe = np.array([[0, 0, 0]]), np.array([[20, 20, 0]])
    _, hoek = L.AngleBetweenLines(L, L2)
    print("Angle between lines:", hoek)
```
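One useful property behind the `.Plucker` encoding: the moment `U = Ps × (Ps + V)` equals `Ps × V` and is identical for every point chosen on the line, which is what makes the six numbers `(V, U)` a well-defined line representation. A short NumPy sanity check (not package code):

```python
import numpy as np

Ps = np.array([1., 1., 0.])
V = np.array([1., 0., 0.])            # unit direction
U = np.cross(Ps, Ps + V)              # moment; algebraically equals Ps x V

# Any other point on the same line yields the same moment
Ps2 = Ps + 3.7 * V
U2 = np.cross(Ps2, Ps2 + V)
assert np.allclose(U, U2)

# (V, U) are the six Plücker coordinates; V . U = 0 always holds
assert abs(V @ U) < 1e-12
```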
### Transformation.TransformationMatrix

File: `CTPv/Transformation/TransformationMatrix.py`
- **Class `TransformationMatrix`**
  - Internally stores a 4×4 homogeneous transform `self.H` (initialized to identity).
  - Attributes:
    - `.H`: 4×4 NumPy array.
    - `.info`: a two-element list of arbitrary metadata (e.g. camera ID, timestamp).
    - `.units`: string indicating units (default `"mm"`).
  - Properties:
    - `.T` (getter/setter): the translation vector (3×1).
    - `.R` (getter/setter): the 3×3 rotation submatrix.
    - `.angles` (getter/setter): Euler angles in radians (XYZ convention) via `scipy.spatial.transform.Rotation`.
    - `.angles_degree` (getter/setter): Euler angles in degrees.
    - `.quaternion` (getter/setter): quaternion `[x, y, z, w]` representation of the rotation.
  - Methods:
    - `.transform(points)`: apply the 4×4 transform to an `(N, 3)` or `(3,)` array of 3D points, returning a transformed `(N, 3)` array.
    - `.invert()`: invert the transformation in place (inverts `H` and reverses the `info` list).
    - `.save_bundler_file(output_file, intrinsics=None)`: write a Bundler v0.3-style camera entry (single camera, zero points) to a text file, storing focal length, distortion (set to zero), rotation rows, and translation vector. If `intrinsics` is `None`, a default intrinsic matrix with example values is used.
    - `.load_bundler_file(filename)`: read a Bundler file (ignoring the first three lines) and load the rotation (3×3) and translation (3×1) back into `H`.
    - `.plot(scale=1.0)`: visualize this transformation as a 3D coordinate frame (matplotlib).
    - `.plot_open3d(scale=1.0)`: visualize using Open3D's `TriangleMesh.create_coordinate_frame`; requires `open3d` to be installed.
    - `.copy()`: return a deep copy of this `TransformationMatrix`.
    - `.load_from_json(filename)`: read `H`, `info`, `units` from a JSON file.
    - `.save_to_json(filename)`: write `H`, `info`, `units` to JSON.
    - `__matmul__(self, other)`: allow chaining two transformations, `T_combined = T1 @ T2` (matrix multiply). The combined `info` defaults to `[self.info[0], other.info[-1]]`.
    - `__repr__`: printable representation of the 4×4 matrix.
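For readers who want the math spelled out: `transform`, `invert`, and `@` all reduce to block algebra on `H = [[R, t], [0, 1]]`. A self-contained NumPy sketch under that convention (an illustration, not the class internals):

```python
import numpy as np

def make_H(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

def apply_H(H, points):
    """Apply H to an (N, 3) array of points."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    return (H @ pts_h.T).T[:, :3]

def invert_H(H):
    """Closed-form inverse of a rigid transform: R^T, -R^T t."""
    R, t = H[:3, :3], H[:3, 3]
    return make_H(R.T, -R.T @ t)

theta = np.deg2rad(45.0)                       # 45 deg about Z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])
H = make_H(Rz, np.array([1., 2., 3.]))
pts = np.array([[1., 0., 0.]])
restored = apply_H(invert_H(H), apply_H(H, pts))
assert np.allclose(restored, pts)              # round trip recovers the input
```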
- Example (at bottom of file):

```python
if __name__ == "__main__":
    T1 = TransformationMatrix()
    T1.T = [0, 10, 0]
    T1.angles_degree = [0, 30, 0]
    T1.save_bundler_file("test.out")
    print("T1:\n", T1)
    print("Quaternion:", T1.quaternion)
    T1.plot()
    T2 = T1.copy()
    T2.invert()
    T2.plot()
    T_combined = T1 @ T2
    print("Combined:\n", T_combined)
```
### ICP.FastICP

File: `CTPv/ICP/FastICP.py`
- **Class `FastICPAligner`**
  - Performs multi-scale, fast point cloud registration using point-to-plane ICP in Open3D.
  - Designed for large point clouds and coarse-to-fine alignment pipelines.
  - Attributes:
    - `.source_points`: `(N, 3)` NumPy array of source points (to be transformed).
    - `.target_points`: `(M, 3)` NumPy array of target points (fixed).
    - `.normal_estimation_radius` (float, optional): radius for normal estimation; if `None`, it is auto-calculated.
  - Methods:
    - `.align(threshold=5.0, scales=None, manual_pre_alignment=False)`: performs fast, multi-scale ICP and updates `.H`, `.rmse`, `.T_mag`.
      - `threshold`: distance threshold (in the same units as the point clouds).
      - `scales`: list of `(voxel_size, max_iter)` tuples for coarse-to-fine ICP (default: a 3-level pyramid).
      - `manual_pre_alignment`: if `True`, lets the user pick 3 manual correspondences in a GUI before starting.
      - Returns: a 4×4 `numpy.ndarray`, the final transformation matrix.
    - `.visualize_before_alignment()`: shows the source and target point clouds before ICP (colored red and blue).
    - `.visualize_after_alignment()`: shows the aligned source and target after `.align()` (colored green and blue).
    - `.print_results()`: logs `.H`, RMSE, translation magnitude, and convergence status to the console.
- Example:

```python
from CTPv.ICP.FastICP import FastICPAligner

# Load or define source/target point clouds first
aligner = FastICPAligner(source_points, target_points)

# Run fast multi-scale alignment
H = aligner.align(
    threshold=5.0,
    scales=[(0.1, 40), (0.25, 25), (1.0, 15)],
)

# Visualize results
aligner.visualize_after_alignment()

# Print summary
aligner.print_results()
```
### ICP.ICP

File: `CTPv/ICP/ICP.py`
- **Class `ICPAligner`**

  Performs point-to-point ICP alignment between two 3D point clouds using Open3D, with optional manual initialization via a point-picking GUI.

  - Constructor: `ICPAligner(source_points: np.ndarray, target_points: np.ndarray)`
    - `source_points`: `(N, 3)` array of source 3D points.
    - `target_points`: `(M, 3)` array of target 3D points.
  - Attributes:
    - `.source_points`: original `(N, 3)` source point array.
    - `.target_points`: original `(M, 3)` target point array.
    - `.source_pcd`: Open3D `PointCloud` for the source (colored red).
    - `.target_pcd`: Open3D `PointCloud` for the target (colored blue).
    - `.transformation`: `TransformationMatrix` representing the final transform.
    - `.reg_p2p`: Open3D `RegistrationResult` from the last ICP call.
    - `.inlier_rmse`: RMSE of inlier correspondences.
  - Methods:
    - `.align(threshold=10, max_iteration=2000, manual_pre_alignment=False)`: runs ICP registration. If `manual_pre_alignment=True`, opens a GUI for selecting corresponding points before refinement. Returns a `TransformationMatrix` representing the alignment transform.
    - `.run_manual_pre_alignment()`: launches a two-stage GUI for manually selecting at least 4 corresponding points on the source and target point clouds. Returns a 4×4 initial alignment matrix.
    - `.visualize_before_alignment()`: opens a viewer showing the source (red) and target (blue) clouds before alignment.
    - `.visualize_after_alignment()`: displays the target (blue) and the transformed source (green) after ICP alignment.
    - `.print_results()`: logs the final transformation's translation vector, Euler angles (degrees), RMSE, and the Euclidean norm of the translation.
    - `.load_ply(filepath)` (static method): loads a PLY file into an `(N, 3)` NumPy array using `plyfile`.
    - `._create_pcd_from_points(points, color)` (static method): creates and colors an Open3D point cloud from an `(N, 3)` array.
  - Dependencies: requires `open3d`, `plyfile`, `numpy`, and `TransformationMatrix` (`pip install open3d plyfile numpy`).
  - Notes:
    - When using the GUI for manual alignment, hold Shift + Left Click to pick points and press Q to finish.
    - Works on Linux, macOS, and Windows with GUI support.
    - Designed to be robust across Open3D versions by falling back to `VisualizerWithEditing`.
- Example (usage outline):

```python
from CTPv.ICP.ICP import ICPAligner

source = ICPAligner.load_ply("source.ply")
target = ICPAligner.load_ply("target.ply")

icp = ICPAligner(source, target)
icp.visualize_before_alignment()
T = icp.align(threshold=5.0, max_iteration=1000, manual_pre_alignment=True)
icp.print_results()
icp.visualize_after_alignment()
```
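Under the hood, each point-to-point ICP iteration pairs every source point with its nearest target point and solves for the optimal rigid transform over those pairs (the Kabsch/SVD step). A compact NumPy sketch of one such step, for intuition only; the package delegates the real work to Open3D:

```python
import numpy as np

def kabsch(A, B):
    """Best-fit rotation R and translation t mapping points A onto B (least squares)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

def icp_step(source, target):
    """One point-to-point ICP iteration: nearest-neighbour pairing + Kabsch."""
    # Brute-force nearest neighbours (fine for small clouds)
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
    matched = target[d2.argmin(axis=1)]
    return kabsch(source, matched)

# Toy example: target is the source rotated 10 deg about Z and shifted
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
ang = np.deg2rad(10.0)
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.],
               [np.sin(ang),  np.cos(ang), 0.],
               [0., 0., 1.]])
tgt = src @ Rz.T + np.array([0.1, -0.2, 0.05])

R, t = kabsch(src, tgt)        # exact correspondences: recovers Rz and the shift
R1, t1 = icp_step(src, tgt)    # one refinement step using nearest-neighbour pairs
```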
### ICP.ICP_wx

File: `CTPv/ICP/ICP_wx.py`
Class `MainFrame(wx.Frame)`: a wxPython GUI for manually selecting correspondences between a source and target point cloud using 2D projection views.
📋 Overview
This module provides a basic interactive GUI built using wxPython that allows users to select corresponding 2D points from source and target point clouds rendered in orthographic projection. These selected points are then used for computing a rigid transformation using least-squares alignment.
📦 Dependencies
- `wxPython` for GUI rendering.
- `Open3D` for 3D point cloud visualization.
- `matplotlib` for 2D projection views.
- `NumPy` and `logging`.
🧱 Class: MainFrame(wx.Frame)
A GUI frame with two image panels: one for the source point cloud and one for the target point cloud.
- Constructor Arguments:
  - `source_points` (`np.ndarray`): N×3 array of source points.
  - `target_points` (`np.ndarray`): N×3 array of target points.
  - `num_points_to_select` (`int`): number of corresponding points to select (default 4).
- UI Components:
  - Two canvas panels rendered using `matplotlib`, one for the source and one for the target.
  - Reset and Confirm buttons.
  - Mouse click handlers to collect 2D point selections.
- Workflow:
  1. Both 3D point clouds are projected onto a 2D orthographic plane (top-down).
  2. The user clicks to select corresponding points in both panels.
  3. After confirming, the selected 2D points are projected back to 3D using a nearest-neighbour search.
  4. The extracted corresponding 3D points are stored in `self.result_source_points` and `self.result_target_points`.
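The back-projection step above amounts to a nearest-neighbour lookup in the projected 2D coordinates. A sketch of the idea (a hypothetical helper mirroring the approach, not the module's `_project_back_to_3d`):

```python
import numpy as np

def project_back_to_3d(p2d, cloud3d):
    """Return the 3D point whose top-down (x, y) projection is nearest to p2d."""
    xy = cloud3d[:, :2]                          # orthographic top-down projection
    idx = np.argmin(((xy - p2d) ** 2).sum(axis=1))
    return cloud3d[idx]

cloud = np.array([[0., 0., 1.],
                  [1., 0., 2.],
                  [0., 1., 3.]])
picked = project_back_to_3d(np.array([0.9, 0.1]), cloud)  # nearest in (x, y)
```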
- Notable Methods:
  - `_draw_projection(points, ax, title)`: projects and renders a 2D view of a 3D point cloud.
  - `_on_click_source(event)` / `_on_click_target(event)`: mouse click handlers.
  - `_reset_selection(event)`: clears all selected points.
  - `_confirm_selection(event)`: finalizes the point selection and closes the GUI.
  - `_project_back_to_3d(p2D, cloud3D)`: finds the nearest 3D point to a selected 2D location.
- Behavior upon `app.MainLoop()` exit, the selected 3D point arrays are stored as:
  - `frame.result_source_points` → shape `(N, 3)`
  - `frame.result_target_points` → shape `(N, 3)`
🧪 Example Usage (see Runner.py)

```python
app = wx.App(False)
frame = MainFrame(target_points=target_pts, source_points=source_pts, num_points_to_select=4)
app.MainLoop()

# After closing the window
src_pts = frame.result_source_points
tgt_pts = frame.result_target_points
```
## Usage Examples
Below are some minimal snippets illustrating how to import and use the package once installed.
### 1. Reading/Writing Intrinsics

```python
from CTPv.Camera.Intrinsics import IntrinsicMatrix, RadialDistortion

# Create an intrinsic matrix
I = IntrinsicMatrix()
I.fx = 1200
I.fy = 1200
I.cx = 640
I.cy = 360
I.width = 1280
I.height = 720
I.pixel_size = 0.0034  # e.g. 3.4 µm
I.RadialDistortion.set_from_list([0.01, -0.001, 0.0])

# Compute OpenCV format
K_opencv = I.OpenCVIntrinsics
print("OpenCV Intrinsics:\n", K_opencv)

# Save to JSON
I.save_intrinsics_to_json("camera_intrinsics.json")

# Load back
I2 = IntrinsicMatrix().load_intrinsics_from_json("camera_intrinsics.json")
print("Loaded fx, fy:", I2.fx, I2.fy)
```
### 2. Generating Rays & Line Intersections

```python
from CTPv.Camera.Intrinsics import IntrinsicMatrix
from CTPv.Plucker.Line import intersection_between_2_lines

# Suppose we have two camera poses; generate rays and compute their
# closest-point intersections.

# Camera 1 intrinsics
I1 = IntrinsicMatrix()
I1.fx = I1.fy = 1000
I1.cx, I1.cy = 320, 240
I1.width, I1.height = 640, 480
I1.pixel_size = 0.0025
# ... set radial distortion if needed ...

# Camera 2 intrinsics (shifted horizontally by 1 unit)
I2 = IntrinsicMatrix()
I2.fx = I2.fy = 1000
I2.cx, I2.cy = 320, 240
I2.width, I2.height = 640, 480
I2.pixel_size = 0.0025

# Generate full-image rays from each camera (Plücker-line sets)
rays1 = I1.generate_rays()
rays2 = I2.generate_rays()

# Compute midpoints & distances between corresponding rays
points_mid, distances = intersection_between_2_lines(rays1, rays2)
print("Mean distance between ray pairs:", distances.mean())
```
### 3. Creating & Transforming 3D Geometry

```python
import numpy as np

from CTPv.Transformation.TransformationMatrix import TransformationMatrix

# Define a transformation: translate by [1, 2, 3], rotate 45° about Z
T = TransformationMatrix()
T.T = [1, 2, 3]
T.angles_degree = [0, 0, 45]

# Transform a set of points
points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
points_transformed = T.transform(points)
print("Transformed points:\n", points_transformed)

# Inverse transform
T_inv = T.copy()
T_inv.invert()
restored = T_inv.transform(points_transformed)
print("Restored (should match original):\n", restored)

# Save to JSON
T.save_to_json("transform.json")
T2 = TransformationMatrix().load_from_json("transform.json")

# Chain transformations (matrix product; with column vectors, T2 is applied first)
T_comb = T @ T2
```
### 4. Visualization

```python
import matplotlib.pyplot as plt
import numpy as np

from CTPv.Plucker.Line import Line
from CTPv.Transformation.TransformationMatrix import TransformationMatrix

# Plot a single ray
L = Line()
L.Ps = np.array([[0, 0, 0]])
L.Pe = np.array([[1, 1, 1]])
L.PlotLine(colori='r')

# Plot a coordinate frame
T = TransformationMatrix()
T.T = [0, 0, 0]
T.angles_degree = [30, 45, 60]
T.plot(scale=1.0)

plt.show()
```
## Development & Contributing
1. Clone & install in "editable" mode:

   ```shell
   git clone https://github.com/yourusername/CTPv.git
   cd CTPv
   pip install -e .
   ```

2. Make changes on a feature branch:

   - Create a new branch off `main` (or `develop`):

     ```shell
     git checkout -b feature/my_update
     ```

   - Implement or update functionality as needed (e.g., fill in placeholder methods, add examples, fix bugs).

3. Run tests & verify locally:

   - If you add new functionality, include or update any unit tests.
   - Make sure existing examples and import statements continue to work.

4. Tag-based release workflow:

   - CI is configured to build wheels only when a Git tag is pushed.
   - Once your branch is reviewed and merged into `main`, create a new lightweight or annotated tag following semantic versioning:

     ```shell
     git checkout main
     git pull origin main
     git tag -a vX.Y.Z -m "Release vX.Y.Z"
     git push origin vX.Y.Z
     ```

   - Pushing that tag will trigger the GitHub Actions workflow to build wheels for all platforms and upload them as artifacts.

5. Submit a Pull Request:

   - Push your feature branch to the remote repository:

     ```shell
     git push origin feature/my_update
     ```

   - Open a Pull Request against `main`, describing your changes. Once approved and merged, follow the tag-based release step above.

6. After a successful tag build:

   - Download platform-specific wheel artifacts from the "Artifacts" section in the GitHub Actions run.
   - Optionally, publish wheels to PyPI (e.g. `twine upload dist/*` after downloading and verifying).
Thank you for contributing! If you have questions or need assistance, please open an issue or reach out directly.
## License
This project is distributed under the MIT License. See LICENSE for details.