transform-graph
High-performance spatial transformations and frame graph management for robotics and computer vision.
transform-graph (namespace tgraph) is the foundational mathematical layer for spatial AI and robotics in Python. It provides strictly typed handling of SE(3) rigid-body transformations and projections.
Target Environment: Python 3.12+, NumPy 2.0+.
Installation
pip install transform-graph
For visualization support (Plotly):
pip install "transform-graph[viz]"
Usage
Basic Transforms
import tgraph.transform as tf
import numpy as np
# 1. Create simple transforms
# Translation: Move 1m in X
translation_x = tf.Translation(x=1.0)
# Rotation: 45 degrees yaw (heading) using aerospace convention
rotation_z = tf.Rotation.from_roll_pitch_yaw(yaw=np.pi/4)
# 2. Compose transforms (Order matters!)
# Move then Rotate
combined_transform = translation_x * rotation_z
# 3. Transform points
points = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
transformed_points = tf.transform_points(combined_transform, points)
print(f"Transformed Points:\n{transformed_points}")
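The composition above boils down to 4×4 homogeneous-matrix multiplication. A minimal NumPy sketch of the underlying math (not the tgraph internals; function names here are illustrative), using the column-vector convention where the right-hand factor applies first:

```python
import numpy as np

def translation(x=0.0, y=0.0, z=0.0):
    """4x4 homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def rotation_z(yaw):
    """4x4 homogeneous rotation about Z (yaw)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

# Compose: under the column-vector convention, (T @ R) rotates first, then translates
M = translation(x=1.0) @ rotation_z(np.pi / 4)

# Apply to Nx3 points via homogeneous coordinates
points = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
homog = np.hstack([points, np.ones((len(points), 1))])
transformed = (M @ homog.T).T[:, :3]
```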
Transform Graph
import tgraph.transform as tf
# Create a transform graph for a robot with a camera
graph = tf.TransformGraph()
# Define frame relationships (Source -> Target)
# Robot Base is 1m in X, 2m in Y relative to World
graph.add_transform('world', 'robot_base', tf.Translation(x=1.0, y=2.0))
# Camera is offset from Robot Base
graph.add_transform('robot_base', 'camera', tf.Transform(
    translation=[0.1, 0, 0.5],
    rotation=tf.Rotation.from_roll_pitch_yaw(pitch=-0.1).rotation,
))
# Query transforms between any frames (auto-composes path)
world_to_camera = graph.get_transform('world', 'camera')
# Inverse traversal works automatically
camera_to_world = graph.get_transform('camera', 'world')
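Conceptually, a graph query composes the edge matrices along the path between the two frames, and reverse traversal is a matrix inverse. A NumPy sketch for the two-edge chain above (toy matrices, not the tgraph implementation):

```python
import numpy as np

# Edges stored as parent -> child 4x4 homogeneous matrices
world_T_base = np.eye(4)
world_T_base[:3, 3] = [1.0, 2.0, 0.0]   # robot_base is 1m in X, 2m in Y

base_T_cam = np.eye(4)
base_T_cam[:3, 3] = [0.1, 0.0, 0.5]     # camera offset from robot_base

# Forward query: compose along the path world -> robot_base -> camera
world_T_cam = world_T_base @ base_T_cam

# Reverse query: invert the composed transform
cam_T_world = np.linalg.inv(world_T_cam)
```

A real graph additionally needs a path search (e.g. BFS over frame names) before composing; the chain here is hard-coded for clarity.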
Camera Projections
import tgraph.transform as tf
import numpy as np
# Create a camera with intrinsic parameters (Strictly Intrinsic)
# Extrinsics (Position/Orientation) must be handled by a separate Transform.
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]])
camera = tf.CameraProjection(intrinsic_matrix=K, image_size=(640, 480))
# Project 3D points to 2D pixels (points must be in Camera Frame)
points_camera_frame = np.array([[0, 0, 5], [1, 0, 5], [0, 1, 5]])
pixels = tf.project_points(camera, points_camera_frame)
print(f"Projected pixels:\n{pixels}")
# Unproject with known depth (returns points in Camera Frame)
inv_camera = camera.inverse()
depths = np.array([5.0, 5.0, 5.0])
points_recovered = inv_camera.unproject(pixels, depths)
# To handle Extrinsics, compose with a Transform
# Or use TransformGraph for automatic composition.
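The pinhole model behind this API is standard: projection applies the intrinsic matrix K and divides by depth, and unprojection inverts K and rescales by a known depth. A self-contained NumPy sketch (independent of the tgraph API; function names are illustrative):

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(K, points_cam):
    """Pinhole projection: apply intrinsics, then divide by depth (Z)."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def unproject(K, pixels, depths):
    """Lift pixels back to 3D camera-frame points at the given depths."""
    ones = np.ones((len(pixels), 1))
    rays = (np.linalg.inv(K) @ np.hstack([pixels, ones]).T).T
    return rays * depths[:, None]

points = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0], [0.0, 1.0, 5.0]])
pixels = project(K, points)      # [[320, 240], [420, 240], [320, 340]]
recovered = unproject(K, pixels, np.array([5.0, 5.0, 5.0]))
```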
Orthographic Projections
import tgraph.transform as tf
import numpy as np
# Create a top-down (BEV) orthographic projection
# Maps 3D → 2D pixel coordinates without perspective division
ortho = tf.OrthographicProjection(
    axis="top",           # "top" | "front" | "side"
    u_range=(-50, 50),    # column-axis extent (m)
    v_range=(-50, 50),    # row-axis extent (m)
    resolution=0.1,       # metres per pixel
)
# Register as a graph edge for unified transform_points API
graph = tf.TransformGraph()
graph.add_transform('ego', 'lidar', tf.Translation(x=2.0, z=1.0))
graph.add_transform('ego', 'bev', ortho)
# Project LiDAR points to BEV pixels — same API as camera projections
points_lidar = np.array([[5.0, 3.0, 0.5], [-2.0, -1.0, 0.0]])
pixels = tf.transform_points(points_lidar, graph, 'lidar', 'bev')[:, :2]
# Direct projection (without graph)
px = tf.transform_points(ortho, points_lidar)
# Inverse: lift pixel coordinates back to 3D (collapsed axis = 0)
pts_3d = tf.transform_points(ortho.inverse(), px)
# Grid metadata
print(f"Grid: {ortho.grid_shape}") # (H, W) in pixels
print(f"Origin pixel: {ortho.origin_pixel}") # (col, row) of world (0,0,0)
| Axis Preset | Drops | Col (u) | Row (v) | Use Case |
|---|---|---|---|---|
| "top" | Z | Y (left→right) | X (forward→back) | Bird's-eye view |
| "front" | X | Y (left→right) | Z (up→down) | Front elevation |
| "side" | Y | X (forward→back) | Z (up→down) | Side elevation |
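The grid mapping itself is simple arithmetic: subtract the range minimum and divide by the resolution. A NumPy sketch for the "top" preset (column ← Y, row ← X, Z dropped); the exact axis directions and origin placement here are assumptions for illustration, not the library's specification:

```python
import numpy as np

u_range, v_range, resolution = (-50.0, 50.0), (-50.0, 50.0), 0.1

def bev_pixels(points):
    """Map Nx3 metric points to fractional (col, row) BEV pixel coordinates."""
    col = (points[:, 1] - u_range[0]) / resolution   # column <- Y
    row = (points[:, 0] - v_range[0]) / resolution   # row <- X
    return np.stack([col, row], axis=1)

# Grid extent in pixels and the pixel under the world origin
grid_shape = (int(round((v_range[1] - v_range[0]) / resolution)),
              int(round((u_range[1] - u_range[0]) / resolution)))  # (1000, 1000)
origin_pixel = bev_pixels(np.zeros((1, 3)))[0]                     # world (0, 0, 0)
```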
Transform Composition Rules
The library supports composing transforms with the * operator. The dimensional flow determines which compositions are valid:
| Composition | Flow | Result | Use Case |
|---|---|---|---|
| Transform * Transform | 3D→3D→3D | Transform | Chain rigid body transforms |
| Projection * Transform | 3D→3D→2D | Projection | Project from any frame to image |
| Transform * InverseProjection | 2D→3D→3D | MatrixTransform | Unproject and transform rays |
| Projection * InverseProjection | 2D→3D→2D | MatrixTransform | Inter-image mapping |
Key Principles:
- Projections are one-way: 3D→2D loses depth. Use InverseProjection.unproject(pixels, depths) when depth is known.
- Type degradation: Composing SE(3) transforms with projections produces MatrixTransform or Projection, not Transform.
- No Homography type needed: The Fundamental Matrix (P₂ * T * P₁⁻¹) maps points to epipolar lines, not points. Our MatrixTransform fallback correctly handles these cases.
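The type-degradation rule can be seen from matrix shapes alone: a 3×4 projection composed with a 4×4 rigid transform stays 3×4 (still a projection), while two 4×4 rigid transforms stay 4×4. A NumPy sketch of that dimensional bookkeeping, independent of the tgraph types:

```python
import numpy as np

# Transform: 3D -> 3D, a 4x4 homogeneous rigid-body matrix
T = np.eye(4)
T[:3, 3] = [1.0, 0.0, 0.0]

# Projection: 3D -> 2D, a 3x4 matrix (intrinsics times the canonical projection)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
P = K @ np.eye(3, 4)

PT = P @ T    # 3x4: "Projection * Transform -> Projection"
TT = T @ T    # 4x4: "Transform * Transform -> Transform"
```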
Epipolar Geometry
The library derives epipolar geometry directly from the graph structure:
# Essential Matrix (E)
E = graph.get_essential_matrix("image1", "image2")
# Fundamental Matrix (F)
F = graph.get_fundamental_matrix("image1", "image2")
# Plane-Induced Homography (H)
H = graph.get_homography("image1", "image2", plane_normal=[0,0,1], plane_distance=1.0)
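These matrices follow from the relative pose between the two cameras: E = [t]ₓR and F = K₂⁻ᵀ E K₁⁻¹. A NumPy sketch verifying the epipolar constraint x₂ᵀ F x₁ = 0 for a hypothetical 1 m baseline with identical intrinsics in both cameras (values chosen for illustration only):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Relative pose convention: X_cam2 = R @ X_cam1 + t (pure X baseline)
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

E = skew(t) @ R                                   # Essential matrix
F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)     # Fundamental matrix (same K)

# Corresponding homogeneous pixels of one 3D point seen from both cameras
X1 = np.array([0.0, 0.0, 5.0])
X2 = R @ X1 + t
x1 = K @ X1 / X1[2]
x2 = K @ X2 / X2[2]
residual = x2 @ F @ x1                            # ~0: epipolar constraint holds
```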
Documentation & Tutorial
For a comprehensive guide on how to use tgraph, check out the Tutorial. It covers:
- Creating and composing transforms
- Managing complex frame graphs
- 3D spatial and 2D topology visualization
- Camera models and projections
- Serialization
API Documentation
API documentation is auto-generated with pdoc and published to vistralis.org/transform-graph/api.
To build locally:
pdoc --math -t docs/templates -o docs/build tgraph
Development & Quality Control
We use ruff for linting/formatting and pytest for testing.
1. Installation
Install the project in editable mode with all development dependencies:
pip install -e ".[dev,viz]"
2. Linting & Formatting
We adhere to strict Python standards using Ruff.
Check for issues:
ruff check .
Auto-fix linting issues and reformat code:
ruff check . --fix
ruff format .
3. Testing & Coverage
We aim for high code coverage to ensure mathematical rigor.
Run all tests:
pytest
Run tests with coverage report:
pytest --cov=tgraph --cov-report=term-missing
CI/CD Workflow
This project uses GitHub Actions for Continuous Integration.
- Workflow File: .github/workflows/ci.yml
- Triggers:
  - Push to the main branch.
  - Pull Request to the main branch.
- Jobs:
  - Build & Test:
    - Sets up Python 3.12.
    - Installs dependencies (including dev and viz extras).
    - Runs the full test suite with coverage reporting.
License
Apache 2.0 - Vistralis Labs