Tensor IPC
High-performance and flexible IPC for tensor data with seamless ROS integration for robotics research.
Overview
tensor-ipc provides efficient shared memory communication for tensor data between processes, with built-in support for ROS topics. It enables zero-copy data sharing using POSIX shared memory and integrates with ROS for distributed communication and sim/real transfer.
Key Features
- 🚀 Zero-Copy Shared Memory: POSIX shared memory with per-frame locking for safe concurrent access
- 🤖 ROS Integration: Built-in ROS producers and consumers with automatic type conversion (custom types are supported through ros2_numpy)
- 🧠 Multi-Backend Support: Native support for NumPy arrays and PyTorch tensors (CPU/CUDA)
- 📦 DDS Notifications: Real-time notifications and synchronization using CycloneDDS for efficient polling
- 🛡️ Type Safety: Automatic validation of tensor shapes, dtypes, and devices
- 🔄 History Management: Configurable history buffers with circular indexing
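The zero-copy mechanism behind the features above can be illustrated with the standard library alone: a named POSIX shared-memory block that two handles (standing in for two processes) map without copying. This is a conceptual sketch using `multiprocessing.shared_memory`, not the tensor-ipc API; the block name is arbitrary.

```python
from multiprocessing import shared_memory

# "Producer" side: create a named POSIX shared-memory block.
shm = shared_memory.SharedMemory(create=True, size=16, name="tensor_ipc_demo")
shm.buf[:4] = b"\x01\x02\x03\x04"  # write tensor bytes in place, no copy

# "Consumer" side: attach to the same block by name.
view = shared_memory.SharedMemory(name="tensor_ipc_demo")
data = bytes(view.buf[:4])  # both handles map the same physical pages

view.close()
shm.close()
shm.unlink()  # remove the block once all processes are done
```

In tensor-ipc, NumPy arrays are memory-mapped onto such blocks directly, so `put`/`get` move no payload bytes between processes.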
Installation
git clone https://github.com/danielhou315/tensor-ipc.git
cd tensor-ipc
pip install -e .
- For torch / torch CUDA support, you must install torch in the same Python environment.
- For ROS support, you must install ros2_numpy in the same Python environment.
- Otherwise, only the numpy backend will be available.
Quick Start
Refer to examples/ to see basic usage. Documentation is coming soon (hopefully).
CUDA Support
import torch
from tensor_ipc.core.producer import TensorProducer

# CUDA tensors with IPC sharing
if torch.cuda.is_available():
    cuda_tensor = torch.zeros(3, 224, 224, device='cuda:0')
    producer = TensorProducer.from_sample("cuda_pool", cuda_tensor)

    # Publish a CUDA tensor directly
    gpu_data = torch.randn(3, 224, 224, device='cuda:0')
    producer.put(gpu_data)
Callbacks and Notifications
def on_new_data(data):
    print(f"Callback triggered with data shape: {data.shape}")

# `metadata` is a PoolMetadata describing an existing pool;
# TensorConsumer is assumed importable alongside TensorProducer.
consumer = TensorConsumer(
    metadata,
    on_new_data_callback=on_new_data,
)
# Callback will be triggered when new data arrives
History Management
# Get last 5 frames in chronological order
history = consumer.get(history_len=5, latest_first=False)
# Get last 3 frames with latest first
recent = consumer.get(history_len=3, latest_first=True)
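The `history_len`/`latest_first` semantics above can be modeled with a small circular buffer. This is an illustrative sketch of circular indexing, not the library's internal implementation; class and method names are invented for the example.

```python
class HistoryBuffer:
    """Fixed-size circular buffer, analogous to a pool's history frames."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = []
        self.write_index = 0  # next slot to overwrite once full

    def put(self, frame):
        if len(self.frames) < self.capacity:
            self.frames.append(frame)
        else:
            self.frames[self.write_index] = frame  # overwrite oldest
        self.write_index = (self.write_index + 1) % self.capacity

    def get(self, history_len, latest_first=False):
        # Unroll the ring so index 0 is the oldest retained frame.
        if len(self.frames) == self.capacity:
            ordered = self.frames[self.write_index:] + self.frames[:self.write_index]
        else:
            ordered = list(self.frames)
        recent = ordered[-history_len:]
        return recent[::-1] if latest_first else recent


buf = HistoryBuffer(capacity=4)
for frame in range(6):  # frames 0..5; frames 0 and 1 fall out of the ring
    buf.put(frame)
```

With this model, `buf.get(3)` returns `[3, 4, 5]` in chronological order, and `buf.get(3, latest_first=True)` returns `[5, 4, 3]`.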
Architecture
- Backends: Pluggable backends for NumPy, PyTorch CPU, and PyTorch CUDA
- Shared Memory: Numpy/PyTorch backend uses POSIX shared memory with memory-mapped arrays. Torch CUDA backend uses CUDA API through PyTorch.
- Locking: Per-frame reader-writer locks for safe concurrent access
- Notifications: CycloneDDS for real-time progress updates
- ROS Bridge: Automatic conversion between ROS messages and tensor data
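The per-frame locking idea from the architecture list can be sketched with one lock per frame slot, so that writing frame *i* never blocks readers of frame *j*. This sketch uses plain `threading.Lock` objects rather than the reader-writer locks tensor-ipc actually uses (the standard library has no RW lock), and all names here are illustrative.

```python
import threading

NUM_FRAMES = 4

# One lock per frame slot in the circular pool.
frame_locks = [threading.Lock() for _ in range(NUM_FRAMES)]
frames = [None] * NUM_FRAMES


def write_frame(index, payload):
    # Lock only the slot being written; other slots stay readable.
    with frame_locks[index % NUM_FRAMES]:
        frames[index % NUM_FRAMES] = payload


def read_frame(index):
    with frame_locks[index % NUM_FRAMES]:
        return frames[index % NUM_FRAMES]


write_frame(1, "sensor-frame")
```

A reader-writer lock refines this further by letting many readers hold the same frame concurrently while writers get exclusive access.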
API Reference
Core Classes
- TensorProducer: Creates and publishes to shared memory pools
- TensorConsumer: Subscribes to and reads from shared memory pools
- PoolMetadata: Describes pool structure and properties
ROS Extensions
- ROSTensorProducer: Publishes shared memory data to ROS topics
- ROSTensorConsumer: Subscribes to ROS topics and creates shared memory pools
Metadata Creation
- MetadataCreator.from_numpy_sample(): Create metadata from NumPy arrays
- MetadataCreator.from_torch_sample(): Create metadata from PyTorch tensors
- MetadataCreator.from_torch_cuda_sample(): Create metadata for CUDA tensors
- MetadataCreator.from_sample(): Unified entry point that creates metadata from any supported sample type
Requirements
- Python 3.7+
- NumPy
- CycloneDDS (for DDS notifications)
- Optional: PyTorch (for tensor support)
- Optional: ROS 2 + ros2_numpy (for ROS integration)
License
MIT License
GenAI
This library (especially its documentation) was written in part with the help of various LLMs.