ManipulaPy
A comprehensive, GPU-accelerated Python package for robotic manipulator analysis, simulation, planning, control, and perception.
Quick Start • Documentation • Examples • Installation • Contributing
🎯 Overview
ManipulaPy is a modern, comprehensive framework that bridges the gap between basic robotics libraries and sophisticated research tools. It provides seamless integration of kinematics, dynamics, control, and perception systems with optional CUDA acceleration for real-time applications.
What's New in 1.3.1
- TRAC-IK Solver (96% success at 200 ms): DLS-first strategy with SQP fallback, SVD-robust Jacobian solve, perturbation recovery, and backtracking line search
- Redesigned IK Solvers: all inverse kinematics algorithms overhauled; convergence rates improved from ~70% to 96%+
- Geometric Error Model: `iterative_inverse_kinematics` uses position + orientation geometric error instead of twist-based error
- SVD-Robust Jacobian: condition-number-based damping prevents NaN/Inf in near-singular configurations
- Stagnation Recovery: automatic perturbation detection breaks solvers out of local minima
- GIL-Aware Threading: sequential mode (default) avoids Python GIL contention; parallel mode uses a 3-worker architecture for DLS+SQP
- Smart Fallback: `smart_inverse_kinematics` automatically falls back to `robust_inverse_kinematics` on failure
- Multi-Start Rewrite: `robust_inverse_kinematics` uses 10 diverse initial-guess strategies for broader workspace coverage
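The SVD-robust damping mentioned above can be sketched in plain NumPy: scale a Tikhonov damping term by the Jacobian's condition number so that solves near a singularity stay finite instead of blowing up. This is an illustrative sketch, not ManipulaPy's internal code; `damped_pinv` and its thresholds are invented for the example.

```python
import numpy as np

def damped_pinv(J, cond_threshold=1e4, base_damping=1e-2):
    """Damped pseudo-inverse with condition-number-based damping (sketch).

    When the Jacobian is well-conditioned the damping is zero and this
    reduces to the ordinary pseudo-inverse; near a singularity the
    damping grows with the condition number, trading step accuracy for
    numerical stability (no NaN/Inf).
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    cond = s[0] / max(s[-1], np.finfo(float).eps)
    lam = base_damping * cond / cond_threshold if cond > cond_threshold else 0.0
    # Tikhonov-style damping applied to each singular value
    s_inv = s / (s**2 + lam**2)
    return Vt.T @ np.diag(s_inv) @ U.T

# Near-singular Jacobian: a plain inverse would produce a huge or
# non-finite step; the damped solve stays bounded.
J = np.array([[1.0, 0.0], [0.0, 1e-12]])
dq = damped_pinv(J) @ np.array([0.1, 0.1])
print(np.all(np.isfinite(dq)))  # True
```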
Why ManipulaPy?
🔧 Unified Framework: Complete integration from low-level kinematics to high-level perception
⚡ GPU Accelerated: CUDA kernels for trajectory planning and dynamics computation
🔬 Research Ready: Mathematical rigor with practical implementation
🧩 Modular Design: Use individual components or the complete system
📖 Well Documented: Comprehensive guides with theoretical foundations
🆓 Open Source: AGPL-3.0 licensed for transparency and collaboration
✨ Key Features
🔧 Core Robotics
🚀 Advanced Capabilities
📋 Feature Availability Matrix
ManipulaPy automatically enables features based on available dependencies. Here's what you can expect:
Core Features (Always Available)
| Feature | CPU Performance | Dependencies | Notes |
|---|---|---|---|
| Kinematics | Excellent | numpy, scipy | Forward/inverse kinematics, Jacobians |
| Basic Dynamics | Good | numpy, scipy | Mass matrix, Coriolis, gravity |
| Control Systems | Excellent | numpy, scipy | PID, computed torque, adaptive |
| URDF Processing | Fast | numpy only (native) | Robot model conversion, NumPy 2.0+ |
| Small Trajectories | Good | numba | N < 1000 points, auto-optimized |
GPU-Accelerated Features (Optional)
| Feature | CPU vs GPU | Requirements | Speedup |
|---|---|---|---|
| Large Trajectories | 40x+ faster | CUDA, cupy | N > 1000 points |
| Batch Processing | 20x+ faster | CUDA, cupy | Multiple trajectories |
| Inverse Dynamics | 100x+ faster | CUDA, cupy | Large datasets |
| Workspace Analysis | 10x+ faster | CUDA, cupy | Monte Carlo sampling |
Vision Features (Requires System Dependencies)
| Feature | Requirements | Common Issues | Solutions |
|---|---|---|---|
| Camera Capture | OpenCV, libGL.so.1 | ImportError: libGL.so.1 | apt install libgl1-mesa-glx |
| Object Detection | ultralytics, internet | YOLO download fails | Check network, manual download |
| Stereo Vision | OpenCV, calibration | Poor depth quality | Camera calibration required |
| 3D Point Clouds | OpenCV, numpy | Memory issues | Reduce point cloud density |
Installation Check Commands
import ManipulaPy
# Quick check - shows ✅/❌ for each feature
ManipulaPy.check_dependencies()
# Detailed system information
ManipulaPy.print_system_info()
# Get missing dependencies install commands
print(ManipulaPy.get_installation_command())
# Check specific feature
try:
ManipulaPy.require_feature('cuda')
print("GPU acceleration ready!")
except ImportError as e:
print(f"GPU not available: {e}")
🚀 Quick Start
Installation
Before installing ManipulaPy, make sure your system has:

- NVIDIA Drivers & CUDA Toolkit: `nvcc` on your `PATH` (e.g. via `sudo apt install nvidia-cuda-toolkit` or the official NVIDIA CUDA installer). Verify with:

  nvidia-smi      # should list your GPU(s) and driver version
  nvcc --version  # should print the CUDA version

- cuDNN: download and install cuDNN for your CUDA version from NVIDIA's cuDNN installation guide, then verify the headers/libs under `/usr/include` and `/usr/lib/x86_64-linux-gnu` (or your distro's equivalent).
ManipulaPy attempts to install all dependencies by default for the best user experience. Missing dependencies are handled gracefully - the package will still work with available features.
# One command installs everything (recommended)
pip install ManipulaPy
What gets installed automatically:
- ✅ Core robotics (always): kinematics, dynamics, control, basic trajectory planning
- 🚀 GPU acceleration (if CUDA available): 40x+ speedups for large problems (N > 1000)
- 👁️ Vision features (if system supports): camera capture, object detection, stereo vision
- 🎮 Simulation (if compatible): PyBullet physics simulation and visualization
Check Your Installation
After installation, verify which features are available:
import ManipulaPy
# Quick feature availability check
ManipulaPy.check_dependencies()
# Detailed system information
ManipulaPy.print_system_info()
Alternative Installation Options
# Minimal installation (core features only)
pip install ManipulaPy[minimal]
# Specific CUDA version if auto-detection fails
pip install ManipulaPy[gpu-cuda12] # For CUDA 12.x
# Headless environments (CI/Docker)
pip install ManipulaPy[vision-headless]
# Development environment
pip install ManipulaPy[dev]
Troubleshooting Common Issues
🚀 GPU Acceleration Not Working:
# Check CUDA installation
nvidia-smi
# Verify CUDA toolkit
nvcc --version
# Install specific CUDA version if needed
pip install cupy-cuda11x # or cupy-cuda12x
👁️ Vision Features Not Working:
# Ubuntu/Debian - fix libGL.so.1 error
sudo apt-get install libgl1-mesa-glx libglib2.0-0
# CentOS/RHEL
sudo yum install mesa-libGL glib2
# Test OpenCV
python -c "import cv2; print('OpenCV OK')"
🔧 Verify Installation:
# Test core functionality (always works)
from ManipulaPy.kinematics import SerialManipulator
print("✅ Core features working")
# Test GPU acceleration (if available)
try:
from ManipulaPy.cuda_kernels import check_cuda_availability
if check_cuda_availability():
print("🚀 GPU acceleration available")
else:
print("⚠️ GPU acceleration not available")
except ImportError:
print("⚠️ GPU acceleration not installed")
# Test vision (if available)
try:
from ManipulaPy.vision import Vision
print("👁️ Vision features available")
except ImportError:
print("⚠️ Vision features not available")
What Works Without Additional Setup
✅ Always Available (CPU-only):
- Forward/inverse kinematics and Jacobians
- PID, computed torque, adaptive, and robust control
- URDF processing and robot model conversion
- PyBullet simulation and visualization
- Small trajectory planning (N < 1000 points)
- Singularity analysis and workspace computation
🚀 GPU-Accelerated (optional 40x+ speedup):
- Large trajectory planning (N > 1000 points)
- Batch processing multiple trajectories
- Monte Carlo workspace analysis
- Inverse dynamics for long trajectories
👁️ Requires System Dependencies:
- Vision features: OpenCV, system graphics libraries (libGL.so.1)
- Object detection: YOLO models (auto-downloaded on first use)
- Stereo processing: Camera calibration data
🤖 Included Robot Models
ManipulaPy comes with 25 pre-configured robot models from 8 manufacturers, ready to use out of the box:
- Universal Robots (7 models): UR3, UR5, UR10, UR3e, UR5e, UR10e, UR16e
- Fanuc (7 models): LRMate 200iB, M-16iB, CRX-5iA, CRX-10iA, CRX-10iA/L, CRX-20iA/L, CRX-30iA
- KUKA (2 models): iiwa7, iiwa14
- Kinova (3 models): Gen3, Jaco 6-DOF, Jaco 7-DOF
- Franka Emika (1 model): Panda
- UFactory (2 models): xArm6, xArm6 with gripper
- Robotiq (2 models): 2F-85, 2F-140 grippers
- ABB (1 model): IRB 2400
from ManipulaPy.ManipulaPy_data import get_robot_urdf, list_robots, print_robot_catalog
# List all available robots
print(list_robots()) # ['ur5', 'panda', 'iiwa14', 'gen3', ...]
# Load a robot
from ManipulaPy.urdf_processor import URDFToSerialManipulator
urdf_path = get_robot_urdf('ur5')  # or 'panda', 'iiwa14', etc.
robot = URDFToSerialManipulator(urdf_path)
# Print comprehensive catalog
print_robot_catalog() # Shows all robots with specs
See MANIFEST.md for complete robot specifications and usage guide.
30-Second Demo
import numpy as np
from ManipulaPy.urdf_processor import URDFToSerialManipulator
from ManipulaPy.path_planning import OptimizedTrajectoryPlanning
# Load robot model (works with any URDF)
try:
from ManipulaPy.ManipulaPy_data.xarm import urdf_file
except ImportError:
urdf_file = "path/to/your/robot.urdf"
# Initialize robot
urdf_processor = URDFToSerialManipulator(urdf_file)
robot = urdf_processor.serial_manipulator
dynamics = urdf_processor.dynamics
# Forward kinematics (always available)
joint_angles = np.array([0.1, 0.2, -0.3, -0.5, 0.2, 0.1])
end_effector_pose = robot.forward_kinematics(joint_angles)
print(f"End-effector position: {end_effector_pose[:3, 3]}")
# GPU-accelerated trajectory planning (40x+ faster if GPU available)
joint_limits = [(-np.pi, np.pi)] * 6
planner = OptimizedTrajectoryPlanning(robot, urdf_file, dynamics, joint_limits)
trajectory = planner.joint_trajectory(
thetastart=np.zeros(6),
thetaend=joint_angles,
Tf=5.0, N=1000, method=5 # Quintic time scaling
)
print(f"✅ Generated {trajectory['positions'].shape[0]} trajectory points")
# Check what features are available
import ManipulaPy
features = ManipulaPy.get_available_features()
print(f"Available features: {', '.join(features)}")
# Get performance stats
stats = planner.get_performance_stats()
if stats['gpu_calls'] > 0:
speedup = stats.get('speedup_achieved', 0)
print(f"🚀 GPU acceleration achieved {speedup:.1f}x speedup!")
else:
print("🖥️ Using CPU computation")
📚 Core Modules
🔧 Kinematics & Dynamics
Forward & Inverse Kinematics
# Forward kinematics
pose = robot.forward_kinematics(joint_angles, frame="space")
# Inverse kinematics with advanced solver
target_pose = np.eye(4)
target_pose[:3, 3] = [0.5, 0.3, 0.4]
solution, success, iterations = robot.iterative_inverse_kinematics(
T_desired=target_pose,
thetalist0=joint_angles,
eomg=1e-6, ev=1e-6,
max_iterations=10000, # increased default in 1.3.1
damping=0.1,
plot_residuals=True,
)
# Smart IK with auto-fallback (new in 1.3.1)
theta, success, iters = robot.smart_inverse_kinematics(
target_pose,
strategy="workspace_heuristic",
auto_fallback=True, # falls back to robust_ik on failure
)
# Robust IK with multi-start strategies (rewritten in 1.3.1)
theta, success, iters = robot.robust_inverse_kinematics(
target_pose,
eomg=1e-4, ev=1e-4,
)
# TRAC-IK solver (new in 1.3.1) — 96% success at 200ms
theta, success, solve_time = robot.trac_ik(target_pose, timeout=0.2)
# Or with parallel DLS+SQP for harder problems
theta, success, solve_time = robot.trac_ik(
target_pose, timeout=0.5, use_parallel=True
)
Dynamic Analysis
from ManipulaPy.dynamics import ManipulatorDynamics
# Compute dynamics quantities
M = dynamics.mass_matrix(joint_angles)
C = dynamics.velocity_quadratic_forces(joint_angles, joint_velocities)
G = dynamics.gravity_forces(joint_angles, g=[0, 0, -9.81])
# Inverse dynamics: τ = M(q)q̈ + C(q,q̇) + G(q)
torques = dynamics.inverse_dynamics(
joint_angles, joint_velocities, joint_accelerations,
[0, 0, -9.81], np.zeros(6)
)
🛤️ Path Planning & Control
Advanced Trajectory Planning
# GPU-accelerated trajectory planning
planner = OptimizedTrajectoryPlanning(
robot, urdf_file, dynamics, joint_limits,
use_cuda=True, # Enable GPU acceleration
cuda_threshold=200, # Auto-switch threshold
enable_profiling=True
)
# Joint space trajectory
trajectory = planner.joint_trajectory(
thetastart=start_config,
thetaend=end_config,
Tf=5.0, N=1000, method=5 # Quintic time scaling
)
# Cartesian space trajectory
cartesian_traj = planner.cartesian_trajectory(
Xstart=start_pose, Xend=end_pose,
Tf=3.0, N=500, method=3 # Cubic time scaling
)
# Performance monitoring
stats = planner.get_performance_stats()
print(f"GPU usage: {stats['gpu_usage_percent']:.1f}%")
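The `method` argument above selects the standard time-scaling polynomial: `method=3` is cubic, s(t) = 3(t/Tf)² − 2(t/Tf)³, and `method=5` is quintic, s(t) = 10(t/Tf)³ − 15(t/Tf)⁴ + 6(t/Tf)⁵; both start and end at rest, and the quintic additionally has zero boundary acceleration. A minimal sketch of these textbook profiles (not ManipulaPy's internal implementation):

```python
import numpy as np

def time_scaling(t, Tf, method=5):
    """Standard cubic (method=3) and quintic (method=5) scaling, s(t) in [0, 1]."""
    x = t / Tf
    if method == 3:
        return 3 * x**2 - 2 * x**3            # zero start/end velocity
    return 10 * x**3 - 15 * x**4 + 6 * x**5    # also zero start/end acceleration

# Interpolate one joint between two configurations using s(t)
Tf, N = 5.0, 11
t = np.linspace(0.0, Tf, N)
s = time_scaling(t, Tf, method=5)
theta = 0.0 + s * (1.0 - 0.0)  # thetastart + s * (thetaend - thetastart)
print(theta[0], theta[-1])  # 0.0 1.0
```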
Advanced Control Systems
from ManipulaPy.control import ManipulatorController
controller = ManipulatorController(dynamics)
# Auto-tuned PID control using Ziegler-Nichols
Ku, Tu = 50.0, 0.5 # Ultimate gain and period
Kp, Ki, Kd = controller.ziegler_nichols_tuning(Ku, Tu, kind="PID")
# Computed torque control
control_torque = controller.computed_torque_control(
thetalistd=desired_positions,
dthetalistd=desired_velocities,
ddthetalistd=desired_accelerations,
thetalist=current_positions,
dthetalist=current_velocities,
g=[0, 0, -9.81], dt=0.01,
Kp=Kp, Ki=Ki, Kd=Kd
)
# Adaptive control
adaptive_torque = controller.adaptive_control(
thetalist=current_positions,
dthetalist=current_velocities,
ddthetalist=desired_accelerations,
g=[0, 0, -9.81], Ftip=np.zeros(6),
measurement_error=position_error,
adaptation_gain=0.1
)
🌐 Simulation & Visualization
Real-time PyBullet Simulation
from ManipulaPy.sim import Simulation
# Create simulation environment
sim = Simulation(
urdf_file_path=urdf_file,
joint_limits=joint_limits,
time_step=0.01,
real_time_factor=1.0
)
# Initialize and run
sim.initialize_robot()
sim.initialize_planner_and_controller()
sim.add_joint_parameters() # GUI sliders
# Execute trajectory
final_pose = sim.run_trajectory(trajectory["positions"])
# Manual control with collision detection
sim.manual_control()
Singularity & Workspace Analysis
from ManipulaPy.singularity import Singularity
analyzer = Singularity(robot)
# Singularity detection
is_singular = analyzer.singularity_analysis(joint_angles)
condition_number = analyzer.condition_number(joint_angles)
# Manipulability ellipsoid
analyzer.manipulability_ellipsoid(joint_angles)
# Workspace visualization with GPU acceleration
analyzer.plot_workspace_monte_carlo(
joint_limits=joint_limits,
num_samples=10000
)
👁️ Vision & Perception
Computer Vision Pipeline
from ManipulaPy.vision import Vision
from ManipulaPy.perception import Perception
# Camera configuration
camera_config = {
"name": "main_camera",
"intrinsic_matrix": np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]]),
"translation": [0, 0, 1.5],
"rotation": [0, -30, 0], # degrees
"fov": 60,
"use_opencv": True, # Real camera
"device_index": 0
}
# Stereo vision setup
left_cam = {**camera_config, "translation": [-0.1, 0, 1.5]}
right_cam = {**camera_config, "translation": [0.1, 0, 1.5]}
vision = Vision(
camera_configs=[camera_config],
stereo_configs=(left_cam, right_cam)
)
# Object detection and clustering
perception = Perception(vision)
obstacles, labels = perception.detect_and_cluster_obstacles(
depth_threshold=3.0,
eps=0.1, min_samples=5
)
# 3D point cloud from stereo
if vision.stereo_enabled:
left_img, _ = vision.capture_image(0)
right_img, _ = vision.capture_image(1)
point_cloud = vision.get_stereo_point_cloud(left_img, right_img)
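Stereo depth recovery follows the pinhole relation Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity between the left and right views. A minimal sketch using the parameters implied by the configs above (f = 500 px from the intrinsic matrix, B = 0.2 m from the ±0.1 m camera offsets); this is illustrative, not the `Vision` class's internal code:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px=500.0, baseline_m=0.2):
    """Pinhole stereo depth: Z = f * B / d (NaN where disparity <= 0)."""
    d = np.asarray(disparity_px, dtype=float)
    Z = np.full_like(d, np.nan)   # zero/negative disparity is invalid
    valid = d > 0
    Z[valid] = focal_px * baseline_m / d[valid]
    return Z

# With f*B = 100: a 10 px disparity means 10 m away, 100 px means 1 m
Z = depth_from_disparity([10.0, 100.0])
print(Z)  # [10.  1.]
```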
📊 Performance Features
GPU Acceleration
ManipulaPy includes highly optimized CUDA kernels for performance-critical operations:
from ManipulaPy.cuda_kernels import check_cuda_availability
if check_cuda_availability():
print("🚀 CUDA acceleration available!")
# Automatic GPU/CPU switching based on problem size
planner = OptimizedTrajectoryPlanning(
robot, urdf_file, dynamics, joint_limits,
use_cuda=None, # Auto-detect
cuda_threshold=200, # Switch threshold
memory_pool_size_mb=512 # GPU memory pool
)
# Batch processing for multiple trajectories
batch_trajectories = planner.batch_joint_trajectory(
thetastart_batch=start_configs, # (batch_size, n_joints)
thetaend_batch=end_configs,
Tf=5.0, N=1000, method=5
)
else:
print("CPU mode - install GPU support for acceleration")
Performance Monitoring
# Benchmark different implementations
results = planner.benchmark_performance([
{"N": 1000, "joints": 6, "name": "Medium"},
{"N": 5000, "joints": 6, "name": "Large"},
{"N": 1000, "joints": 12, "name": "Many joints"}
])
for name, result in results.items():
print(f"{name}: {result['total_time']:.3f}s, GPU: {result['used_gpu']}")
📁 Examples & Tutorials
The Examples/ directory contains comprehensive demonstrations that work with different feature combinations:
🎯 Basic Examples (⭐) - CPU Only
These examples work immediately after pip install ManipulaPy with no additional setup required.
| Example | Description | Requirements | Output |
|---|---|---|---|
| `kinematics_basic_demo.py` | Forward/inverse kinematics | Core only | Manipulability plots |
| `dynamics_basic_demo.py` | Mass matrix, forces | Core only | Robot analysis |
| `control_basic_demo.py` | PID, computed torque | Core only | Control comparison |
| `urdf_processing_basic_demo.py` | URDF conversion | Core + PyBullet | Config analysis |
| `small_trajectory_demo.py` | CPU trajectory planning | Core only | Path visualization |
🔧 Intermediate Examples (⭐⭐) - Optional GPU
These examples automatically use GPU acceleration if available and gracefully fall back to CPU.
| Example | Description | Auto-Detects | Performance Boost |
|---|---|---|---|
| `trajectory_planning_intermediate_demo.py` | Large-scale trajectories | GPU available | 40x+ if N>1000 |
| `batch_processing_intermediate_demo.py` | Multiple trajectory generation | GPU available | 20x+ for batches |
| `workspace_analysis_intermediate_demo.py` | Monte Carlo workspace | GPU available | 10x+ for sampling |
| `dynamics_comparison_intermediate_demo.py` | CPU vs GPU dynamics | GPU available | 100x+ for large datasets |
🚀 Advanced Examples (⭐⭐⭐) - Full Features
These examples demonstrate complete integration with all available features.
| Example | Description | Requirements | Notes |
|---|---|---|---|
| `perception_advanced_demo.py` | Vision + planning | OpenCV, YOLO | Auto-downloads models |
| `stereo_vision_advanced_demo.py` | 3D perception | OpenCV, calibration | Requires camera setup |
| `real_robot_integration_advanced_demo.py` | Hardware control | All features | Hardware-dependent |
| `performance_optimization_advanced_demo.py` | Benchmarking suite | GPU recommended | Comprehensive analysis |
🏃‍♂️ Running Examples with Feature Detection
cd Examples/
# Basic examples - always work
cd basic_examples/
python kinematics_basic_demo.py # ✅ Always works
python dynamics_basic_demo.py # ✅ Always works
# Intermediate examples - auto-detect features
cd ../intermediate_examples/
python trajectory_planning_intermediate_demo.py # 🚀 GPU if available
python batch_processing_intermediate_demo.py --size 1000 # 📊 Scales with hardware
# Advanced examples - check requirements first
cd ../advanced_examples/
python -c "import ManipulaPy; ManipulaPy.check_dependencies()" # Check first
python perception_advanced_demo.py --enable-yolo # 👁️ Needs vision
python stereo_vision_advanced_demo.py --camera-pair 0,1 # 📷 Needs cameras
📊 Example Output Management
Examples automatically adapt their output based on available features:
# Example auto-adaptation pattern
def run_trajectory_example():
import ManipulaPy
# Check what's available
features = ManipulaPy.get_available_features()
if 'cuda' in features:
print("🚀 Using GPU acceleration for large trajectories")
N = 10000 # Large problem size
else:
print("🖥️ Using CPU computation for moderate trajectories")
N = 1000 # Smaller problem size
if 'vision' in features:
print("👁️ Including vision-based obstacle detection")
enable_vision = True
else:
print("⚠️ Vision features not available, using pre-defined obstacles")
enable_vision = False
# Run example with appropriate settings...
🎯 Example Selection Guide
New to ManipulaPy?
# Start here - guaranteed to work
python basic_examples/kinematics_basic_demo.py
Have a GPU?
# Check GPU first
python -c "from ManipulaPy.cuda_kernels import check_cuda_availability; print('GPU:', check_cuda_availability())"
# If True, try these for massive speedups
python intermediate_examples/trajectory_planning_intermediate_demo.py --large
python advanced_examples/performance_optimization_advanced_demo.py
Working with cameras?
# Check vision first
python -c "import ManipulaPy; print('Vision:', 'vision' in ManipulaPy.get_available_features())"
# If True, try perception examples
python intermediate_examples/perception_intermediate_demo.py
python advanced_examples/stereo_vision_advanced_demo.py
Need maximum performance?
# Full system check
python -c "import ManipulaPy; ManipulaPy.check_dependencies(verbose=True)"
# Run comprehensive benchmarks
python advanced_examples/performance_optimization_advanced_demo.py --full-benchmark
🧪 Testing & Validation
Test Suite
# Install test dependencies
pip install ManipulaPy[dev]
# Run all tests
python -m pytest tests/ -v --cov=ManipulaPy
# Test specific modules
python -m pytest tests/test_kinematics.py -v
python -m pytest tests/test_dynamics.py -v
python -m pytest tests/test_control.py -v
python -m pytest tests/test_cuda_kernels.py -v # GPU tests
✅ High-Coverage Modules
| Module | Coverage | Notes |
|---|---|---|
| `kinematics.py` | 98% | Excellent — near full coverage |
| `dynamics.py` | 100% | Fully tested |
| `perception.py` | 92% | Very solid coverage |
| `vision.py` | 83% | Good; some PyBullet paths skipped |
| `urdf_processor.py` | 81% | Strong test coverage |
⚠️ Needs More Testing
| Module | Coverage | Notes |
|---|---|---|
| `control.py` | 81% | Many tests skipped due to CuPy mock — test with GPU to improve |
| `sim.py` | 77% | Manual control & GUI parts partially tested |
| `singularity.py` | 64% | Workspace plots & CUDA sampling untested |
| `utils.py` | 61% | Some math utils & decorators untested |
🚨 Low/No Coverage
| Module | Coverage | Notes |
|---|---|---|
| `path_planning.py` | 39% | Large gaps in CUDA-accelerated and plotting logic |
| `cuda_kernels.py` | 16% | Most tests skipped — NUMBA_DISABLE_CUDA=1 |
| `transformations.py` | 0% | Not tested at all — consider adding basic SE(3) tests |
🧪 Benchmarking & Validation
ManipulaPy includes a comprehensive benchmarking suite to validate performance and accuracy across different hardware configurations.
Benchmark Suite
Located in the Benchmark/ directory, the suite provides three key tools:
| Benchmark | Purpose | Use Case |
|---|---|---|
| `performance_benchmark.py` | Comprehensive performance analysis | Full system evaluation and optimization |
| `accuracy_benchmark.py` | Numerical precision validation | Algorithm correctness verification |
| `quick_benchmark.py` | Fast development testing | CI/CD integration and regression testing |
Real Performance Results
Latest benchmark on 16-core CPU, 31.1GB RAM, NVIDIA GPU (30 SMs):
=== ManipulaPy Performance Benchmark Results ===
Hardware: 16-core CPU, 31.1GB RAM, NVIDIA GPU (30 SMs, 1024 threads/block)
Test Configuration: Large-scale problems (10K-100K trajectory points)
Overall Performance:
Total Tests: 36 scenarios
Success Rate: 91.7% (33/36) ✅
Overall Speedup: 13.02× average acceleration
CPU Mean Time: 6.88s → GPU Mean Time: 0.53s
🚀 EXCEPTIONAL PERFORMANCE HIGHLIGHTS:
Inverse Dynamics (CUDA Accelerated):
Mean GPU Speedup: 3,624× (3.6K times faster!)
Peak Performance: 5,563× speedup achieved
Real-time Impact: 7s → 0.002s computation
Joint Trajectory Planning:
Mean GPU Speedup: 2.29×
Best Case: 7.96× speedup
Large Problems: Consistent GPU acceleration
Cartesian Trajectories:
Mean GPU Speedup: 1.02× (CPU competitive)
Consistent Performance: ±0.04 variance
Performance Recommendations
🎯 OPTIMAL GPU USE CASES:
- ✅ Inverse dynamics computation (1000×-5000× speedup)
- ✅ Large trajectory generation (>10K points)
- ✅ Batch processing multiple trajectories
- ✅ Real-time control applications
⚠️ CPU-OPTIMAL SCENARIOS:
- Small trajectories (<1K points)
- Cartesian space interpolation
- Single-shot computations
- Development and debugging
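The CPU/GPU crossover summarized above can be encoded as a simple size-based dispatch, in the spirit of the `cuda_threshold` option shown earlier. This is a sketch; the package's actual heuristic may weigh more factors than problem size alone:

```python
def choose_backend(n_points, cuda_available, threshold=1000):
    """Pick a compute backend from problem size, mirroring the guidance above.

    Small problems stay on the CPU (kernel-launch overhead dominates);
    large problems go to the GPU when one is available.
    """
    if cuda_available and n_points > threshold:
        return "gpu"
    return "cpu"

print(choose_backend(10_000, cuda_available=True))   # gpu
print(choose_backend(500, cuda_available=True))      # cpu
print(choose_backend(10_000, cuda_available=False))  # cpu
```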
Running Benchmarks
# Quick performance check (< 60 seconds)
cd Benchmark/
python quick_benchmark.py
# Comprehensive GPU vs CPU analysis
python performance_benchmark.py --gpu --plot --save-results
# Validate numerical accuracy (configurable tolerances/sample sizes and IK strategy)
python accuracy_benchmark.py --tolerance 1e-4 --fk-tests 200 --ik-tests 50 --ik-strategy workspace_heuristic
accuracy_benchmark.py also accepts --jacobian-tests, --dynamics-tests, and --output-dir to fine-tune the sweep and where artifacts are written.
📖 Documentation
Online Documentation
Quick Reference
# Check installation and dependencies
import ManipulaPy
ManipulaPy.check_dependencies(verbose=True)
# Module overview
print(ManipulaPy.__version__) # Current version
print(ManipulaPy.__all__) # Available modules
# GPU capabilities
from ManipulaPy.cuda_kernels import get_gpu_properties
props = get_gpu_properties()
if props:
print(f"GPU: {props['multiprocessor_count']} SMs")
🤝 Contributing
We love your input! Whether you're reporting a bug, proposing a new feature, or improving our docs, here's how to get started:
1. Report an Issue
Please open a GitHub Issue with:
- A descriptive title
- Steps to reproduce
- Expected vs. actual behavior
- Any relevant logs or screenshots
2. Submit a Pull Request
- Fork this repository and create your branch:

  git clone https://github.com/<your-username>/ManipulaPy.git
  cd ManipulaPy
  git checkout -b feature/my-feature

- Install and set up the development environment:

  pip install -e .[dev]
  pre-commit install  # runs formatters and linters

- Make your changes, then run tests and quality checks:

  # Run the full test suite
  python -m pytest tests/ -v

  # Lint and format
  black ManipulaPy/
  flake8 ManipulaPy/
  mypy ManipulaPy/

- Commit with clear, focused messages and push your branch:

  git add .
  git commit -m "Add awesome new feature"
  git push origin feature/my-feature

- Open a Pull Request against `main` describing your changes.
3. Seek Support
- Design questions: GitHub Discussions
- Bug reports: GitHub Issues
- Email: aboelnasr1997@gmail.com
4. Code of Conduct
Please follow our Code of Conduct to keep this community welcoming.
Contribution Areas
- 🐛 Bug Reports: Issues and edge cases
- ✨ New Features: Algorithms and capabilities
- 📚 Documentation: Guides and examples
- 🚀 Performance: CUDA kernels and optimizations
- 🧪 Testing: Test coverage and validation
- 🎨 Visualization: Plotting and animation tools
Guidelines
- Follow PEP 8 style guidelines
- Add comprehensive tests for new features
- Update documentation for API changes
- Include working examples for new functionality
- Maintain backward compatibility when possible
📄 License & Citation
License
ManipulaPy is licensed under the GNU Affero General Public License v3.0 or later (AGPL-3.0-or-later).
Key Points:
- ✅ Free to use for research and education
- ✅ Modify and distribute under same license
- ✅ Commercial use allowed under AGPL terms
- ⚠️ Network services must provide source code
- 📜 See LICENSE for complete terms
Citation
If you use ManipulaPy in your research, please cite:
@software{manipulapy2025,
title={ManipulaPy: A Comprehensive Python Package for Robotic Manipulator Analysis and Control},
author={Mohamed Aboelnasr},
year={2025},
url={https://github.com/boelnasr/ManipulaPy},
version={1.3.1},
license={AGPL-3.0-or-later},
}
Dependencies
All dependencies are AGPL-3.0 compatible:
- Core: `numpy`, `scipy`, `matplotlib` (BSD)
- Vision: `opencv-python` (Apache 2.0), `ultralytics` (AGPL-3.0)
- GPU: `cupy` (MIT), `numba` (BSD)
- Simulation: `pybullet` (Zlib), `urchin` (MIT)
📞 Support & Community
Getting Help
- 📚 Documentation: manipulapy.readthedocs.io
- 💡 Examples: Check the `Examples/` directory
- 🐛 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
- 📧 Contact: aboelnasr1997@gmail.com
Community
- 🌟 Star the project if you find it useful
- 🍴 Fork to contribute improvements
- 📢 Share with the robotics community
- 📝 Cite in your academic work
Contact Information
Created and maintained by Mohamed Aboelnasr
- 📧 Email: aboelnasr1997@gmail.com
- 🐙 GitHub: @boelnasr
- 🔗 LinkedIn: Connect for collaboration opportunities
🏆 Why Choose ManipulaPy?
🔬 For Researchers
👩‍💻 For Developers
🏭 For Industry
🤖 ManipulaPy v1.3.1: Professional robotics tools for the Python ecosystem
Empowering robotics research and development with comprehensive, GPU-accelerated tools