neuromeka_stereo
TensorRT stereo inference utilities packaged for PyPI.
Folder layout
./
  src/
    neuromeka_stereo/
      fs_infer.py
      assets/
        (empty by default; place plan/onnx here)
      FoundationStereo_TRT/
        core/
        depth_anything_pretrained_models/
        Utils.py
  examples/
    run_realsense_demo.py
  pyproject.toml
  environment.yml
  README.md
1) Install
Local install (recommended for this repo):
python -m pip install -e .
Or from PyPI (if published):
pip install neuromeka_stereo
Large plan files are not tracked in git. See "Assets (manual placement required)" below.
2) TensorRT runtime (plan execution)
python -m pip install --extra-index-url https://pypi.nvidia.com tensorrt-cu12==10.14.1.48.post1
python -m pip install "cuda-python<13" # CUDA 12.x drivers
If your driver is CUDA 13.x, install the latest cuda-python instead.
3) RealSense dependency
run_realsense_demo.py needs pyrealsense2.
Install it for your OS/WSL before running the demo.
python -m pip install pyrealsense2
4) Assets (manual placement required)
Until official download URLs are published, plan/onnx files must be placed
manually. If a required plan file is missing, StereoInference raises an error
with the expected path.
Place plan files here (preferred for this repo):
src/neuromeka_stereo/assets/
Or place them in the cache directory:
~/.cache/neuromeka_stereo
Expected filenames:
foundation_stereo_RTX4060.plan
foundation_stereo_RTX5060.plan
foundation_stereo_RTX5090.plan
5) Optional: internal auto-download (for approved users only)
If you have an internal HTTP/S3 host, set a base URL to enable auto-download:
export NEUROMEKA_STEREO_ASSET_BASE_URL=https://your-storage.example.com/neuromeka_stereo/
Or set per-asset URLs:
export NEUROMEKA_STEREO_ASSET_URL_FOUNDATION_STEREO_RTX4060_PLAN=https://your-storage.example.com/neuromeka_stereo/foundation_stereo_RTX4060.plan
export NEUROMEKA_STEREO_ASSET_URL_FOUNDATION_STEREO_RTX5060_PLAN=https://your-storage.example.com/neuromeka_stereo/foundation_stereo_RTX5060.plan
export NEUROMEKA_STEREO_ASSET_URL_FOUNDATION_STEREO_RTX5090_PLAN=https://your-storage.example.com/neuromeka_stereo/foundation_stereo_RTX5090.plan
Optional overrides:
export NEUROMEKA_STEREO_DEFAULT_PLAN=foundation_stereo_RTX5090.plan
export NEUROMEKA_STEREO_CACHE_DIR=/path/to/cache
Optional integrity check:
export NEUROMEKA_STEREO_ASSET_SHA256_FOUNDATION_STEREO_RTX4060_PLAN=<sha256>
6) Optional: Conda dev environment
Use this only for development/experiments (not required for runtime inference):
cd /home/user/neuromeka-repo/nrmk_foundation_stereo
conda env create -f environment.yml
conda activate nrmk_fs
7) Usage
from neuromeka_stereo import StereoInference

fs = StereoInference(
    trt_path,
    fx=fx,
    baseline=baseline,
    z_far=10.0,
)
depth_m = fs.infer(left_ir, right_ir, return_depth=True)
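For reference, the fx/baseline parameters relate disparity to metric depth via the standard pinhole-stereo relation depth = fx * baseline / disparity. A minimal NumPy sketch of that relation (illustrative only, not the package's internal implementation):

```python
import numpy as np

def disparity_to_depth(disparity, fx, baseline, z_far=10.0):
    """Convert a disparity map (pixels) to metric depth using
    depth = fx * baseline / disparity, zeroing invalid/far values."""
    disparity = np.asarray(disparity, dtype=np.float32)
    depth = np.zeros_like(disparity)
    valid = disparity > 0            # zero disparity means no match
    depth[valid] = fx * baseline / disparity[valid]
    depth[depth > z_far] = 0.0       # treat beyond z_far as invalid
    return depth
```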
8) Run demo (TensorRT plan)
python examples/run_realsense_demo.py \
--trt_path foundation_stereo_RTX4060.plan \
--width 480 --height 640 --fps 30 --z_far 10
Notes:
- The TensorRT plan is fixed to a specific input size (commonly 480x640). If the plan was built for 480x640, make sure your pipeline resizes to that or rebuild the plan for your target size.
- The plan is also tied to the GPU model and TensorRT major version. Use a GPU-specific plan file (e.g., foundation_stereo_RTX4060.plan, foundation_stereo_RTX5060.plan, foundation_stereo_RTX5090.plan). If you move to a different GPU or TensorRT version, rebuild the plan.
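The fixed-input-size requirement above can be illustrated with a small shape-conformance helper. A real pipeline would typically use cv2.resize; this pure-NumPy nearest-neighbor version (a hypothetical helper, with an assumed 480x640 plan size) just shows the contract:

```python
import numpy as np

PLAN_H, PLAN_W = 480, 640  # size the plan was built for (example)

def to_plan_size(img: np.ndarray) -> np.ndarray:
    """Nearest-neighbor resize of an HxW (or HxWxC) image to the plan's
    fixed input size; returns the input unchanged if it already conforms."""
    h, w = img.shape[:2]
    if (h, w) == (PLAN_H, PLAN_W):
        return img
    # Integer index maps implement nearest-neighbor sampling.
    rows = np.arange(PLAN_H) * h // PLAN_H
    cols = np.arange(PLAN_W) * w // PLAN_W
    return img[rows][:, cols]
```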
Benchmark (D435 IR, 480x640)
Preliminary results; update for RTX 5060/5090 later.
| GPU | Backend | Input | Latency (sec) | Notes |
|---|---|---|---|---|
| RTX 4060 | TensorRT plan | 480x640 | ~0.42 | TRT plan execution |
| RTX 5060 | TensorRT plan | 480x640 | ~0.30 | TRT plan execution |
| RTX 5090 | TensorRT plan | 480x640 | ~0.07 | TRT plan execution |
| RTX 4060 | Torch | 480x640 | ~1.12 | Torch inference |
Optional: Rebuild plan from ONNX
If you also have an ONNX file on the new machine, build a fresh plan:
Install trtexec (Ubuntu 22.04, CUDA 12.9 repo):
sudo apt-get install -y --allow-downgrades \
libnvinfer-bin=10.14.1.48-1+cuda12.9 \
libnvinfer10=10.14.1.48-1+cuda12.9 \
libnvinfer-plugin10=10.14.1.48-1+cuda12.9 \
libnvonnxparsers10=10.14.1.48-1+cuda12.9 \
libnvinfer-lean10=10.14.1.48-1+cuda12.9 \
libnvinfer-vc-plugin10=10.14.1.48-1+cuda12.9 \
libnvinfer-dispatch10=10.14.1.48-1+cuda12.9
/usr/src/tensorrt/bin/trtexec \
--onnx=./src/neuromeka_stereo/assets/onnx/foundation_stereo_23-51-11_640x480.onnx \
--saveEngine=./src/neuromeka_stereo/assets/foundation_stereo_RTX4060.plan \
--fp16 \
--shapes=left:1x3x480x640,right:1x3x480x640 \
--skipInference
File details
Details for the file neuromeka_stereo-0.1.0.tar.gz.
File metadata
- Download URL: neuromeka_stereo-0.1.0.tar.gz
- Upload date:
- Size: 27.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 418c77652ff23cfcebabe75792027f9ce0e9b46eb252f51d8cb8b7bc28f40f82 |
| MD5 | 74dfc5b15c25420e64bfb3394b737102 |
| BLAKE2b-256 | e66dcd418de89a9fb8070ed74a4f295f1d91c42231e3772506248ab93275dbca |
File details
Details for the file neuromeka_stereo-0.1.0-py3-none-any.whl.
File metadata
- Download URL: neuromeka_stereo-0.1.0-py3-none-any.whl
- Upload date:
- Size: 33.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | e9163b0ceb69a1372e49676cb4c5589918a15c941bb8fe743ff9ff5de3cc97d3 |
| MD5 | 9fb0465550f7597d161649ef859c194f |
| BLAKE2b-256 | 76ed3ed2cd8db63d3d95f9d55166df579148f4cd7af43bb57e4ecdce755a906e |