CLI-first QA toolkit for point cloud, trajectory, and 3D perception outputs
CloudAnalyzer
AI-friendly CLI tool for point cloud analysis and evaluation.
For the full product overview (Japanese), demos, and tutorials, see the repository root README.
Install
# from PyPI
pip install cloudanalyzer
# latest from source
git clone https://github.com/rsasaki0109/CloudAnalyzer.git
cd CloudAnalyzer/cloudanalyzer
pip install -e .
# or with Docker
docker build -t ca .
docker run ca info cloud.pcd
Release Sanity Check
python3 -m pip install -e .[dev]
python3 -m build
python3 -m twine check dist/*
Commands
There are 32 CLI subcommands (see ca --help). Summary:
Analysis & Evaluation
| Command | Description |
|---|---|
| ca compare | Compare two point clouds with ICP/GICP registration |
| ca diff | Quick distance stats (no registration) |
| ca evaluate | F1, Chamfer, Hausdorff, AUC evaluation |
| ca check | Config-driven unified QA (cloudanalyzer.yaml) |
| ca init-check | Emit a starter cloudanalyzer.yaml profile |
| ca ground-evaluate | Ground segmentation QA (precision/recall/F1/IoU, optional gates) |
| ca traj-evaluate | ATE, translational RPE, drift evaluation for trajectories |
| ca traj-batch | Batch trajectory benchmark with coverage, gate, and reports |
| ca run-evaluate | Combined map + trajectory QA for one run |
| ca run-batch | Combined map + trajectory benchmark across multiple runs |
| ca info | Point cloud metadata (points, BBox, centroid) |
| ca stats | Detailed statistics (density, spacing distribution) |
| ca batch | Run info on all files in a directory |
Processing
| Command | Description |
|---|---|
| ca downsample | Voxel grid downsampling |
| ca sample | Random point sampling |
| ca filter | Statistical outlier removal |
| ca merge | Merge multiple point clouds |
| ca align | Sequential registration + merge |
| ca split | Split into grid tiles |
| ca convert | Format conversion (pcd/ply/las) |
| ca normals | Normal estimation |
| ca crop | Bounding box crop |
| ca pipeline | filter → downsample → evaluate in one step |
Visualization
| Command | Description |
|---|---|
| ca web | Browser 3D viewer, with optional heatmap, reference overlay, and trajectory run overlay |
| ca web-export | Write a static browser viewer bundle (for demos and sharing) |
| ca view | Interactive 3D viewer |
| ca density-map | 2D density heatmap image |
| ca heatmap3d | 3D distance heatmap snapshot |
Baseline history
| Command | Description |
|---|---|
| ca baseline-save | Save a QA summary JSON into a rotating history directory |
| ca baseline-list | List baselines saved in a history directory |
| ca baseline-decision | Promote / keep / reject a candidate baseline vs history |
Utility
| Command | Description |
|---|---|
| ca version | Print CLI version |
Usage Examples
# === Evaluation ===
# F1/Chamfer/Hausdorff evaluation with curve plot
ca evaluate source.pcd reference.pcd \
-t 0.05,0.1,0.2,0.5,1.0 --plot f1_curve.png
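For intuition, the point-to-point metrics reported by ca evaluate can be sketched in plain Python. This is a brute-force illustrative sketch, not the package's implementation; the function names are hypothetical.

```python
import math

def _nn_dists(a, b):
    # For each point in a: distance to its nearest neighbour in b (brute force).
    return [min(math.dist(p, q) for q in b) for p in a]

def chamfer_distance(a, b):
    # Symmetric mean nearest-neighbour distance.
    d_ab, d_ba = _nn_dists(a, b), _nn_dists(b, a)
    return (sum(d_ab) / len(d_ab) + sum(d_ba) / len(d_ba)) / 2

def hausdorff_distance(a, b):
    # Worst-case nearest-neighbour distance, taken in both directions.
    return max(max(_nn_dists(a, b)), max(_nn_dists(b, a)))

def f1_at_threshold(a, b, t):
    # Precision: fraction of a within t of b; recall: fraction of b within t of a.
    precision = sum(d <= t for d in _nn_dists(a, b)) / len(a)
    recall = sum(d <= t for d in _nn_dists(b, a)) / len(b)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
```

Sweeping t as in the -t 0.05,0.1,0.2,0.5,1.0 example yields an F1-vs-threshold curve; a real implementation would use a spatial index (e.g. a KD-tree) instead of this O(n·m) loop.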
# Trajectory evaluation with quality gate
ca traj-evaluate estimated.csv reference.csv \
--max-time-delta 0.05 --max-ate 0.5 --max-rpe 0.2 --max-drift 1.0 --min-coverage 0.9 \
--report trajectory_report.html
# report also writes sibling trajectory overlay and error timeline PNGs
# Ignore constant initial translation offset
ca traj-evaluate estimated.csv reference.csv --align-origin
# Fit a rigid transform before scoring
ca traj-evaluate estimated.csv reference.csv --align-rigid
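The ATE gate above, and the effect of --align-origin, can be sketched as follows. This is illustrative only: timestamp association and the rotational error terms are omitted.

```python
import math

def ate_rmse(estimated, reference, align_origin=False):
    # Root-mean-square absolute trajectory error over paired (x, y, z) positions.
    # Poses are assumed to be already associated (same length, same timestamps).
    if align_origin:
        # Remove the constant initial translation offset (cf. --align-origin).
        off = [e - r for e, r in zip(estimated[0], reference[0])]
        estimated = [tuple(c - o for c, o in zip(p, off)) for p in estimated]
    sq = [math.dist(e, r) ** 2 for e, r in zip(estimated, reference)]
    return math.sqrt(sum(sq) / len(sq))
```

With a constant translation offset between the two files, align_origin=True drives the error to zero, which is exactly the case the --align-origin flag is meant to ignore.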
# Batch trajectory benchmark
ca traj-batch runs/ --reference-dir gt/ \
--max-time-delta 0.05 --max-ate 0.5 --max-rpe 0.2 --max-drift 1.0 --min-coverage 0.9 \
--report traj_batch.html
# HTML report adds copyable inspection commands plus pass/failed/low-coverage filters and ATE/RPE/coverage sorting
# low-coverage threshold follows --min-coverage when provided
# Combined run QA: map + trajectory in one report
ca run-evaluate map.pcd map_ref.pcd traj.csv traj_ref.csv \
--min-auc 0.95 --max-chamfer 0.02 \
--max-ate 0.5 --max-rpe 0.2 --max-drift 1.0 --min-coverage 0.9 \
--report run_report.html
# inspection commands include a `ca web ... --trajectory ... --trajectory-reference ...` run viewer
# Combined run batch QA
ca run-batch maps/ \
--map-reference-dir map_refs/ \
--trajectory-dir trajs/ \
--trajectory-reference-dir traj_refs/ \
--min-auc 0.95 --max-chamfer 0.02 \
--max-ate 0.5 --max-rpe 0.2 --max-drift 1.0 --min-coverage 0.9 \
--report run_batch.html
# HTML report adds pass/failed/map-issue/trajectory-issue filters and map/trajectory sorting
# summary and CLI output also split map failures vs trajectory failures
# inspection commands include both a per-run `ca web ...` run viewer and `ca run-evaluate ...` drill-down command
# Full pipeline: filter → downsample → evaluate
ca pipeline noisy.pcd reference.pcd -o clean.pcd -v 0.2
# 3D distance heatmap
ca heatmap3d estimated.pcd reference.pcd -o heatmap.png
# Browser heatmap viewer with reference overlay and threshold filter
ca web estimated.pcd reference.pcd --heatmap
# Browser run viewer: map heatmap + trajectory overlay
ca web map.pcd map_ref.pcd --heatmap \
--trajectory traj.csv --trajectory-reference traj_ref.csv
# with a paired trajectory, the viewer highlights the worst ATE pose and the worst RPE segment
# clicking a marker / segment shows its timestamp and error summary in the inspection panel
# on click the camera also moves to the selection; Reset View returns to the full scene
# the trajectory error timeline also appears in the viewer, and point clicks stay in sync with the 3D selection
# === Compare ===
ca compare source.pcd target.pcd \
--register gicp --json result.json --report report.md \
--snapshot diff.png --threshold 0.1
# Quick diff
ca diff a.pcd b.pcd --threshold 0.05
# === Processing ===
# Split large map into 100m tiles
ca split large_map.pcd -o tiles/ -g 100
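Grid tiling as performed by ca split amounts to bucketing points by their cell index; a minimal sketch, assuming 2D (XY) tiling, not the package's actual code:

```python
import math
from collections import defaultdict

def split_tiles(points, grid_size):
    # Bucket (x, y, z) points into square XY tiles of side grid_size.
    tiles = defaultdict(list)
    for x, y, z in points:
        key = (math.floor(x / grid_size), math.floor(y / grid_size))
        tiles[key].append((x, y, z))
    return dict(tiles)
```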
# Downsample
ca downsample cloud.pcd -o down.pcd -v 0.05
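Voxel grid downsampling replaces all points falling in each occupied voxel with their centroid; a minimal illustrative sketch:

```python
import math
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    # Group points by voxel index, then emit one centroid per occupied voxel.
    buckets = defaultdict(list)
    for p in points:
        key = tuple(math.floor(c / voxel_size) for c in p)
        buckets[key].append(p)
    return [
        tuple(sum(c) / len(pts) for c in zip(*pts))
        for pts in buckets.values()
    ]
```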
# Filter outliers
ca filter raw.pcd -o clean.pcd -n 20 -s 2.0
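Statistical outlier removal keeps points whose mean distance to their k nearest neighbours stays within a standard-deviation multiple of the global mean. A brute-force sketch, assuming -n maps to the neighbour count and -s to the std ratio (as the flags suggest):

```python
import math
import statistics

def remove_outliers(points, k=2, std_ratio=2.0):
    # Mean distance from each point to its k nearest neighbours (brute force).
    mean_knn = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        mean_knn.append(sum(d[:k]) / k)
    mu = statistics.mean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    # Keep points within std_ratio standard deviations of the mean.
    cutoff = mu + std_ratio * sigma
    return [p for p, m in zip(points, mean_knn) if m <= cutoff]
```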
# Align multiple scans
ca align scan1.pcd scan2.pcd scan3.pcd -o aligned.pcd -m gicp
# Batch info
ca batch /path/to/pcds/ -r
# Batch evaluation
ca batch /path/to/results/ --evaluate reference.pcd --format-json | jq '.[].auc'
ca batch /path/to/results/ --evaluate reference.pcd --report batch_report.html
# report includes inspection commands; HTML adds Copy buttons plus count-badged summary rows, quick actions, failed-first / recommended-first sort presets, and pass/failed/pareto/recommended controls
ca batch decoded/ --evaluate reference.pcd --compressed-dir compressed/ --baseline-dir original/
# report also emits a quality-vs-size scatter plot, Pareto candidates, a recommended point, failed-first / recommended-first sort presets, and HTML filters
ca batch /path/to/results/ --evaluate reference.pcd --min-auc 0.95 --max-chamfer 0.02
# Density heatmap
ca density-map cloud.pcd -o density.png -r 1.0 -a z
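A 2D density map projects points along one axis (-a) and counts them per cell of side -r; a minimal sketch of the counting step (illustrative, image rendering omitted):

```python
import math
from collections import Counter

def density_grid(points, resolution=1.0, axis="z"):
    # Count points per square cell on the plane perpendicular to `axis`.
    drop = "xyz".index(axis)
    counts = Counter()
    for p in points:
        u, v = [c for i, c in enumerate(p) if i != drop]
        counts[(math.floor(u / resolution), math.floor(v / resolution))] += 1
    return counts
```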
Global Options
ca --verbose ... # Debug output (stderr)
ca --quiet ... # Suppress non-error output
Output Options
--output-json <path>   Dump result as a JSON file
--format-json          Print JSON to stdout for piping
--plot <path>          F1 curve plot (evaluate only)
--report <path>        Markdown/HTML report (batch, traj-evaluate, traj-batch, run-evaluate, run-batch)
# Pipe JSON to jq
ca info cloud.pcd --format-json | jq '.num_points'
ca evaluate a.pcd b.pcd --format-json | jq '.auc'
CI quality gate
Point cloud / trajectory / perception QA is usually driven by ca check and a cloudanalyzer.yaml config (see docs/ci.md and the map quality gate tutorial).
In this GitHub repo, reusable workflows run the same gates in CI. Pin to a tag or SHA when calling them from another repository (not floating @main).
jobs:
  qa:
    uses: rsasaki0109/CloudAnalyzer/.github/workflows/config-quality-gate.yml@main
    with:
      config_path: cloudanalyzer.yaml
  baseline:
    uses: rsasaki0109/CloudAnalyzer/.github/workflows/baseline-gate.yml@main
    with:
      config_path: cloudanalyzer.yaml
      history_dir: qa/history
The repo also ships a manual quality-gate workflow that accepts source/reference paths and thresholds for ad-hoc runs.
Python API
from ca.evaluate import evaluate, plot_f1_curve
from ca.plot import plot_multi_f1, heatmap3d
from ca.pipeline import run_pipeline
from ca.split import split
from ca.info import get_info
from ca.diff import run_diff
from ca.downsample import downsample
from ca.filter import filter_outliers
# Evaluate
result = evaluate("estimated.pcd", "reference.pcd")
print(f"AUC: {result['auc']:.4f}, Chamfer: {result['chamfer_distance']:.4f}")
plot_f1_curve(result, "f1_curve.png")
# Compare multiple results
results = [evaluate(f"v{v}.pcd", "ref.pcd") for v in [0.1, 0.2, 0.5]]
plot_multi_f1(results, ["v0.1", "v0.2", "v0.5"], "comparison.png")
# Pipeline
result = run_pipeline("noisy.pcd", "reference.pcd", "clean.pcd", voxel_size=0.2)
# Split
result = split("large.pcd", "tiles/", grid_size=100.0)
Supported Formats
- .pcd (Point Cloud Data)
- .ply (Polygon File Format)
- .las (LiDAR)
File details
Details for the file cloudanalyzer-0.1.0.tar.gz.
File metadata
- Download URL: cloudanalyzer-0.1.0.tar.gz
- Upload date:
- Size: 197.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 44b8e5643d0f28a081a8b54f438550b02998085bcf4b682210e080ed717f5fc8 |
| MD5 | fecb231cdf2f3d67b3f6c268c52852ce |
| BLAKE2b-256 | 2a47e38067590b084f2519aef199480a90e78bb3c8d54a82149b24be575f6db5 |
Provenance
The following attestation bundles were made for cloudanalyzer-0.1.0.tar.gz:
Publisher: publish-pypi.yml on rsasaki0109/CloudAnalyzer
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: cloudanalyzer-0.1.0.tar.gz
- Subject digest: 44b8e5643d0f28a081a8b54f438550b02998085bcf4b682210e080ed717f5fc8
- Sigstore transparency entry: 1258951441
- Permalink: rsasaki0109/CloudAnalyzer@dc2e9d29616e2406f753dac3a7fa005845258878
- Branch / Tag: refs/heads/main
- Owner: https://github.com/rsasaki0109
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@dc2e9d29616e2406f753dac3a7fa005845258878
- Trigger Event: workflow_dispatch
File details
Details for the file cloudanalyzer-0.1.0-py3-none-any.whl.
File metadata
- Download URL: cloudanalyzer-0.1.0-py3-none-any.whl
- Upload date:
- Size: 188.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e44397b8550073f007ef0a3f9a43dea01eaaf71ea6d14fcd15eba4deaf19db08 |
| MD5 | 45be006b45bb5d2c53ba841515c1dd50 |
| BLAKE2b-256 | cd885853f1bd69c24f5f214520378ec5512edb819448405a16b04cd9aad0b0d2 |
Provenance
The following attestation bundles were made for cloudanalyzer-0.1.0-py3-none-any.whl:
Publisher: publish-pypi.yml on rsasaki0109/CloudAnalyzer
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: cloudanalyzer-0.1.0-py3-none-any.whl
- Subject digest: e44397b8550073f007ef0a3f9a43dea01eaaf71ea6d14fcd15eba4deaf19db08
- Sigstore transparency entry: 1258951442
- Permalink: rsasaki0109/CloudAnalyzer@dc2e9d29616e2406f753dac3a7fa005845258878
- Branch / Tag: refs/heads/main
- Owner: https://github.com/rsasaki0109
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@dc2e9d29616e2406f753dac3a7fa005845258878
- Trigger Event: workflow_dispatch