Python client utilities for Neuromeka GraspGen Docker server
neuromeka_grasp
Python client utilities for the GraspGen Docker server (ZeroMQ + pickle RPC).
Import and use it directly with from neuromeka_grasp import GraspGeneration.
Installation
pip install neuromeka_grasp
Visualization helpers (draw_grasps_overlay) require OpenCV:
pip install "neuromeka_grasp[viz]"
Requirements
- Python >= 3.9
- A running GraspGen Docker server from neuromeka-repo/nrmk_graspgen
- Server port is fixed to 5558; the client defaults to this port and you usually do not need to change it
- Core dependencies: numpy, pyzmq
- Optional visualization dependency: opencv-python-headless (the viz extra)
Supported grippers (model presets)
These names map to server-side checkpoints (see nrmk_graspgen/modules/graspgen/app.py):
- dh_ag_160_95 (default)
- robotiq_2f_140
- franka_panda
- single_suction_cup_30mm
You can also pass a .yml config path via model if the server has other checkpoints.
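As a sanity check before calling init(), a hypothetical client-side guard over the preset names above can catch typos early. The preset set is copied from this README; validate_model is illustrative and not part of the package:

```python
# Preset names copied from the "Supported grippers" list above.
GRIPPER_PRESETS = {
    "dh_ag_160_95",          # default
    "robotiq_2f_140",
    "franka_panda",
    "single_suction_cup_30mm",
}

def validate_model(model: str) -> str:
    """Accept a known preset name or a .yml config path (resolved server-side)."""
    if model in GRIPPER_PRESETS or model.endswith(".yml"):
        return model
    raise ValueError(f"unknown gripper preset or config path: {model!r}")
```

Remember that .yml paths are resolved inside the server container, so a path that validates here must still exist on the server side.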
Quick start (depth + mask)
```python
from neuromeka_grasp import GraspGeneration

# fx, fy, cx, cy are camera intrinsics (pixels); depth_np and mask_np are
# numpy arrays from your camera/segmentation pipeline.
client = GraspGeneration(hostname="localhost")
client.init(fx, fy, cx, cy, model="dh_ag_160_95")

resp = client.inference_from_depth_mask(
    depth=depth_np,
    mask=mask_np,
    fx=fx, fy=fy, cx=cx, cy=cy,
    enable_orientation_projection=True,
    approach_axis_source="local_normal_avg",
    enable_roll_projection=True,
    target_roll_direction="auto",
    enable_translation_projection=True,
    translation_axis=-1,
    desired_offset=0.03,
)

if resp["result"] == "SUCCESS":
    grasps = resp["data"]["grasps"]
    scores = resp["data"]["scores"]
```
Point cloud example
```python
import numpy as np
from neuromeka_grasp import GraspGeneration

client = GraspGeneration(hostname="localhost")
client.init(fx, fy, cx, cy, model="dh_ag_160_95")

object_pc = np.asarray(object_points, dtype=np.float32)  # (N, 3)
scene_pc = np.asarray(scene_points, dtype=np.float32)    # (M, 3)

resp = client.inference_from_point_cloud(
    object_pc=object_pc,
    scene_pc=scene_pc,
    collision_check=True,
    grasp_threshold=0.8,
    num_grasps=200,
)
```
Visualization (optional)
```python
from neuromeka_grasp import draw_grasps_overlay

overlay = draw_grasps_overlay(
    rgb=rgb_image,
    grasps=grasps,  # (K, 4, 4)
    scores=scores,  # (K,)
    fx=fx, fy=fy, cx=cx, cy=cy,
)
```
API overview
- GraspGeneration
  - init(fx, fy, cx, cy, model="dh_ag_160_95")
  - inference_from_point_cloud(...)
  - inference_from_point_cloud_with_aperture(...)
  - inference_from_depth_mask(...)
  - inference_from_depth_mask_with_aperture(...)
  - point_cloud_outlier_removal(obj_pc, threshold=0.014, k=20)
  - reset(), close()
- draw_grasps_overlay(...): Project predicted grasps onto an RGB image.
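point_cloud_outlier_removal runs on the server, but its parameters follow the common statistical outlier-removal recipe. For intuition only, here is a local numpy sketch of that recipe with the same threshold/k parameters; the server's exact method may differ:

```python
import numpy as np

def remove_outliers(pc: np.ndarray, threshold: float = 0.014, k: int = 20) -> np.ndarray:
    """Keep points whose mean distance (meters) to their k nearest
    neighbors stays below `threshold`. Brute-force O(N^2), for clarity."""
    d = np.linalg.norm(pc[:, None, :] - pc[None, :, :], axis=-1)  # (N, N) pairwise
    d.sort(axis=1)                # ascending; column 0 is the self-distance 0
    mean_knn = d[:, 1:k + 1].mean(axis=1)
    return pc[mean_knn < threshold]
```

A lone point far from the object surface has a large mean neighbor distance and is dropped, while densely sampled surface points survive.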
Aperture API examples
```python
from neuromeka_grasp import GraspGeneration

client = GraspGeneration(hostname="localhost")
client.init(fx, fy, cx, cy, model="dh_ag_160_95")

# 1) Point cloud + aperture API
resp = client.inference_from_point_cloud_with_aperture(
    object_pc=object_points,
    max_aperture=0.06,
    collision_check=True,
    aperture_jaw_axis_index=0,
    aperture_margin=0.002,
    aperture_depth_band=0.03,
    aperture_region_mode="closing_regions",
    aperture_unknown_policy="keep",
    aperture_chunk_size=256,
    max_object_points=5000,
    max_scene_points=8192,
)

# 2) Depth mask + aperture API
resp2 = client.inference_from_depth_mask_with_aperture(
    depth=depth_image,
    mask=mask_image,
    max_aperture=0.06,
    fx=fx, fy=fy, cx=cx, cy=cy,
    collision_check=True,
    aperture_jaw_axis_index=0,
    aperture_margin=0.002,
    aperture_depth_band=0.03,
    aperture_region_mode="depth_band",
    aperture_min_points=12,
)

# 3) The depth-mask aperture API also accepts a point cloud directly
roi_resp = client.inference_from_depth_mask_with_aperture(
    point_cloud=object_points,  # optional direct point-cloud path
    max_aperture=0.06,
    aperture_jaw_axis_index=0,
)
```
init() reference
Signature
init(fx, fy, cx, cy, model="dh_ag_160_95")
Parameters
- fx, fy: Focal lengths in pixels.
- cx, cy: Principal point in pixels.
- model: Gripper preset name (see Supported grippers) or a .yml config path. Config paths are resolved on the server side, so use a path visible to the server container.
Behavior
- Stores intrinsics in the client for future calls.
- Initializes the server model; the response includes gripper_name, model_path, and intrinsics.
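A minimal way to consume that response, assuming the same result/data envelope used by the inference examples. check_init_response and the mocked values are illustrative, not part of the package:

```python
def check_init_response(resp: dict) -> dict:
    """Raise unless the response looks like a successful init()."""
    if resp.get("result") != "SUCCESS":
        raise RuntimeError(f"init failed: {resp}")
    data = resp["data"]
    for key in ("gripper_name", "model_path", "intrinsics"):
        if key not in data:
            raise KeyError(key)
    return data

# Mocked response for illustration; the real one comes from the server,
# and the model_path/intrinsics values here are made up.
mock = {
    "result": "SUCCESS",
    "data": {
        "gripper_name": "dh_ag_160_95",
        "model_path": "/models/dh_ag_160_95.yml",
        "intrinsics": {"fx": 600.0, "fy": 600.0, "cx": 320.0, "cy": 240.0},
    },
}
info = check_init_response(mock)
```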
inference_from_depth_mask() reference
Signature
inference_from_depth_mask(depth, mask, fx=None, fy=None, cx=None, cy=None, ...)
Core inputs
- depth: (H, W) depth image. Units are scaled by depth_scale.
- mask: (H, W) or (H, W, 1) segmentation mask. If target_object_id is set, only that label is used.
- fx, fy, cx, cy: Camera intrinsics. If omitted, values set by init() are used.
Object/scene point clouds
- target_object_id: If set, selects a single object label from mask.
- depth_scale: Multiplier applied to depth before projection.
- max_object_points: Random downsample limit for object points (None disables).
- max_scene_points: Random downsample limit for scene points (used only when collision_check=True).
Sampling and filtering
- collision_check: If True, runs collision filtering with the scene point cloud.
- collision_threshold: Distance (meters) for collision filtering.
- grasp_threshold: Minimum score threshold for grasps.
- num_grasps: Number of grasps to sample before filtering.
- return_topk: If True, also returns top-k grasps (server-side).
- topk_num_grasps: Number of top grasps to keep when return_topk=True.
- min_grasps: Minimum number of grasps to return (server-side).
- max_tries: Max sampling attempts (server-side).
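grasp_threshold and return_topk are applied server-side; to pin down the semantics, the equivalent selection can be sketched in numpy. topk_grasps is a hypothetical helper, not a package function:

```python
import numpy as np

def topk_grasps(grasps, scores, k=10, threshold=0.8):
    """Drop grasps scoring below `threshold`, then keep the k best by score."""
    grasps = np.asarray(grasps)
    scores = np.asarray(scores, dtype=np.float64)
    keep = scores >= threshold
    grasps, scores = grasps[keep], scores[keep]
    order = np.argsort(scores)[::-1][:k]  # highest scores first
    return grasps[order], scores[order]
```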
Orientation projection
- enable_orientation_projection: Enable orientation constraint projection.
- approach_axis_index: Gripper-frame approach axis index (0=x, 1=y, 2=z).
- approach_axis_source: "local_normal", "local_normal_avg", or "global_pca".
- normal_avg_k: KNN count for averaging normals in "local_normal_avg".
- target_approach_direction: Target approach direction (3-vector, camera frame).
- step_strength_orient: Orientation correction strength in [0, 1].
- use_surface_normal_for_approach: Legacy switch; affects default behavior when approach_axis_source is not set.
- contact_proxy_offset: Offset (meters) along the approach axis for the contact proxy.
Translation projection
- enable_translation_projection: Enable translation constraint projection.
- translation_axis: PCA axis index (-1 = principal axis).
- desired_offset: Target offset (meters) along the PCA axis.
- step_strength_trans: Translation correction strength in [0, 1].
Roll projection
- enable_roll_projection: Enable roll projection around the approach axis.
- jaw_axis_index: Gripper-frame jaw axis index (0=x, 1=y, 2=z).
- target_roll_direction: Target jaw direction (3-vector, camera frame) or "auto".
- roll_target_axis: PCA axis index when target_roll_direction="auto".
- roll_target_axis_sign: Optional sign override (+1 or -1) for the PCA axis.
- roll_use_local_tangent: Use the local tangent (2D PCA) for "auto" roll targets (server support required).
- step_strength_roll: Roll correction strength in [0, 1].
PCA sign and debug
- pca_axis_sign_ref: PCA sign reference ("camera_x", "roll_ref", or a 3-vector).
- projection_debug: Enable projection debug logging on the server.
Axis convention notes
- The rotation matrix columns represent the gripper-frame axes expressed in the camera frame.
- approach_axis_index and jaw_axis_index refer to those gripper-frame axes.
- When using "global_pca", some server builds allow approach_axis_index or roll_target_axis values -10/-11/-12 to select the opposite direction of PCA axis 0/1/2.
PCA sign note
PCA eigenvectors can flip sign across runs. Use pca_axis_sign_ref to keep sign consistent
and avoid ambiguous translation/roll directions.
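The fix itself is simple: flip the eigenvector whenever it points away from a chosen reference direction. A numpy sketch of the role a sign reference plays; fix_pca_sign is illustrative, not a package function:

```python
import numpy as np

def fix_pca_sign(axis, ref):
    """Flip `axis` so it has a non-negative dot product with `ref`,
    making the PCA axis direction deterministic across runs."""
    axis = np.asarray(axis, dtype=float)
    return -axis if float(np.dot(axis, ref)) < 0.0 else axis
```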
Inputs and shapes
- depth: HxW float or uint depth image.
- mask: HxW or HxWx1 integer mask. If target_object_id is set, only that label is used.
- depth_scale: Multiplier applied to depth before projection.
- object_pc, scene_pc: (N, 3) and (M, 3) float32 point clouds in the camera frame.
- grasps: (K, 4, 4) homogeneous poses; scores: (K,) in [0, 1].
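A small pre-flight helper that coerces depth and mask to these shapes and dtypes before calling the client; prepare_inputs is hypothetical and not part of the package:

```python
import numpy as np

def prepare_inputs(depth, mask):
    """Coerce depth/mask to the shapes and dtypes listed above."""
    depth = np.asarray(depth, dtype=np.float32)
    if depth.ndim != 2:
        raise ValueError("depth must be (H, W)")
    mask = np.asarray(mask)
    if mask.ndim == 3 and mask.shape[-1] == 1:
        mask = mask[..., 0]  # squeeze (H, W, 1) -> (H, W)
    if mask.shape != depth.shape:
        raise ValueError("mask must match depth shape")
    return depth, mask.astype(np.int32)
```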
Projection parameters (summary)
- Orientation: enable_orientation_projection, approach_axis_index, approach_axis_source (local_normal, local_normal_avg, global_pca), normal_avg_k, contact_proxy_offset
- Translation: enable_translation_projection, translation_axis, desired_offset
- Roll: enable_roll_projection, target_roll_direction (supports "auto"), roll_target_axis, roll_target_axis_sign, roll_use_local_tangent
- Sign control and debug: pca_axis_sign_ref, projection_debug
For detailed parameter semantics, refer to the server README.
Aperture filtering (separate API)
Use the dedicated methods instead of overloading the base inference API:
- inference_from_point_cloud_with_aperture(...)
- inference_from_depth_mask_with_aperture(...)
Key aperture params:
- max_aperture (required, meters)
- aperture_margin
- aperture_jaw_axis_index (defaults to jaw_axis_index, then 0); aperture_axis_index is kept for backward compatibility.
- aperture_region_mode: "closing_regions" | "depth_band" | "all_points"
- aperture_depth_band
- aperture_min_points
- aperture_unknown_policy: "drop" | "keep"
- aperture_chunk_size (grasp_chunk_size is a backward-compatible alias)
Response shape:
- data["inference"]: base inference outputs
- data["aperture"]: required aperture estimates and aperture-filtered results
- data["combined"]: collision + aperture combined filtering result (when collision is enabled)
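Assuming each sub-result mirrors the grasps/scores layout of the base response, a hypothetical unpacking helper; the inner keys in the mock below are assumptions, not documented fields:

```python
def split_aperture_response(resp: dict):
    """Unpack the documented sub-results of an aperture-API response."""
    data = resp["data"]
    # "combined" is documented as present only when collision filtering runs.
    return data["inference"], data["aperture"], data.get("combined")

# Mocked layout for illustration; real values come from the server.
mock = {
    "result": "SUCCESS",
    "data": {
        "inference": {"grasps": [], "scores": []},
        "aperture": {"grasps": [], "scores": []},
        "combined": {"grasps": [], "scores": []},
    },
}
inference, aperture, combined = split_aperture_response(mock)
```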
Notes
- inference_from_depth_mask requires intrinsics set via init() or passed directly.
- Server port is fixed at 5558; the client default matches this and typically should not be changed.
- PickleClient is synchronous and blocking; set timeouts or retry logic on the server side if needed.
- Pickle is not secure against untrusted sources. Use only in trusted environments.