
neuromeka_grasp

Python client utilities for the GraspGen Docker server (ZeroMQ + pickle RPC). Import and use it directly with from neuromeka_grasp import GraspGeneration.

Installation

pip install neuromeka_grasp

Visualization helpers (draw_grasps_overlay) require OpenCV:

pip install "neuromeka_grasp[viz]"

Requirements

  • Python >= 3.9
  • A running GraspGen Docker server from neuromeka-repo/nrmk_graspgen
  • The server port is fixed at 5558; the client defaults to it, so you usually do not need to change it
  • Core dependencies: numpy, pyzmq
  • Optional visualization dependency: opencv-python-headless (viz extra)

Supported grippers (model presets)

These names map to server-side checkpoints (see nrmk_graspgen/modules/graspgen/app.py):

  • dh_ag_160_95 (default)
  • robotiq_2f_140
  • franka_panda
  • single_suction_cup_30mm

You can also pass a .yml config path via model if the server has other checkpoints.

Quick start (depth + mask)

from neuromeka_grasp import GraspGeneration

# fx, fy, cx, cy are your camera intrinsics in pixels; depth_np and mask_np
# are (H, W) arrays from your depth camera and segmentation pipeline.
client = GraspGeneration(hostname="localhost")
client.init(fx, fy, cx, cy, model="dh_ag_160_95")

resp = client.inference_from_depth_mask(
    depth=depth_np,
    mask=mask_np,
    fx=fx, fy=fy, cx=cx, cy=cy,
    enable_orientation_projection=True,
    approach_axis_source="local_normal_avg",
    enable_roll_projection=True,
    target_roll_direction="auto",
    enable_translation_projection=True,
    translation_axis=-1,
    desired_offset=0.03,
)

if resp["result"] == "SUCCESS":
    grasps = resp["data"]["grasps"]
    scores = resp["data"]["scores"]

Point cloud example

import numpy as np
from neuromeka_grasp import GraspGeneration

client = GraspGeneration(hostname="localhost")
client.init(fx, fy, cx, cy, model="dh_ag_160_95")

# object_points / scene_points come from your perception pipeline (camera frame)
object_pc = np.asarray(object_points, dtype=np.float32)  # (N, 3)
scene_pc = np.asarray(scene_points, dtype=np.float32)    # (M, 3)

resp = client.inference_from_point_cloud(
    object_pc=object_pc,
    scene_pc=scene_pc,
    collision_check=True,
    grasp_threshold=0.8,
    num_grasps=200,
)

Visualization (optional)

from neuromeka_grasp import draw_grasps_overlay

# rgb_image is the color frame; grasps and scores come from an inference response
overlay = draw_grasps_overlay(
    rgb=rgb_image,
    grasps=grasps,   # (K, 4, 4)
    scores=scores,   # (K,)
    fx=fx, fy=fy, cx=cx, cy=cy,
)
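Under the hood, an overlay like this projects camera-frame grasp positions to pixels with the standard pinhole model. The sketch below is illustrative only (it is not the library's implementation) and assumes grasps are (K, 4, 4) poses in the camera frame:

```python
import numpy as np

def project_points(points_cam, fx, fy, cx, cy):
    """Pinhole projection of camera-frame 3D points to pixel coordinates.

    Illustrative sketch of the math behind a grasp overlay; not the
    actual draw_grasps_overlay implementation.
    """
    points_cam = np.asarray(points_cam, dtype=np.float64)
    z = points_cam[:, 2]
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return np.stack([u, v], axis=1)

# Project the translation part of each (4, 4) grasp pose
grasps = np.tile(np.eye(4), (2, 1, 1))
grasps[:, :3, 3] = [[0.0, 0.0, 0.5], [0.1, -0.05, 0.6]]
pixels = project_points(grasps[:, :3, 3], fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```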

API overview

  • GraspGeneration
    • init(fx, fy, cx, cy, model="dh_ag_160_95")
    • inference_from_point_cloud(...)
    • inference_from_depth_mask(...)
    • point_cloud_outlier_removal(obj_pc, threshold=0.014, k=20)
    • reset(), close()
  • draw_grasps_overlay(...): Project predicted grasps onto an RGB image.
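point_cloud_outlier_removal drops stray points by a k-nearest-neighbor distance test. The following is a minimal local sketch of that idea, assuming a mean-kNN-distance criterion; the actual server-side algorithm may differ in detail:

```python
import numpy as np

def knn_outlier_removal(pc, threshold=0.014, k=20):
    """Drop points whose mean distance to their k nearest neighbors
    exceeds threshold (meters). Illustrative sketch only; the real
    point_cloud_outlier_removal runs on the server and may differ.
    """
    pc = np.asarray(pc, dtype=np.float32)
    # Full pairwise distances (fine for small clouds; use a KD-tree for large ones)
    d = np.linalg.norm(pc[:, None, :] - pc[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # skip self-distance in column 0
    return pc[mean_knn <= threshold]

# Dense cluster plus one far outlier
rng = np.random.default_rng(0)
cluster = rng.normal(0.0, 0.001, size=(50, 3)).astype(np.float32)
outlier = np.array([[1.0, 1.0, 1.0]], dtype=np.float32)
cleaned = knn_outlier_removal(np.vstack([cluster, outlier]), threshold=0.014, k=20)
```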

init() reference

Signature: init(fx, fy, cx, cy, model="dh_ag_160_95")

Parameters

  • fx, fy: Focal lengths in pixels.
  • cx, cy: Principal point in pixels.
  • model: Gripper preset name (see Supported grippers) or a .yml config path. Config paths are resolved on the server side, so use a path visible to the server container.

Behavior

  • Stores intrinsics in the client for future calls.
  • Initializes the server model; response includes gripper_name, model_path, and intrinsics.

inference_from_depth_mask() reference

Signature: inference_from_depth_mask(depth, mask, fx=None, fy=None, cx=None, cy=None, ...)

Core inputs

  • depth: (H, W) depth image. Units are scaled by depth_scale.
  • mask: (H, W) or (H, W, 1) segmentation mask. If target_object_id is set, only that label is used.
  • fx, fy, cx, cy: Camera intrinsics. If omitted, values set by init() are used.

Object/scene point clouds

  • target_object_id: If set, selects a single object label from mask.
  • depth_scale: Multiplier applied to depth before projection.
  • max_object_points: Random downsample limit for object points (None disables).
  • max_scene_points: Random downsample limit for scene points (used only when collision_check=True).
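The preprocessing these parameters describe amounts to back-projecting masked depth pixels into a camera-frame point cloud, scaling by depth_scale and optionally downsampling. A hedged sketch of that pipeline (the client/server may implement it differently; the helper name is ours):

```python
import numpy as np

def depth_mask_to_points(depth, mask, fx, fy, cx, cy,
                         depth_scale=1.0, target_object_id=None,
                         max_points=None, rng=None):
    """Back-project masked depth pixels to camera-frame 3D points.

    Illustrative sketch of the preprocessing described above; not the
    library's actual implementation.
    """
    depth = np.asarray(depth, dtype=np.float32) * depth_scale
    mask = np.asarray(mask).reshape(depth.shape)  # accepts (H, W) or (H, W, 1)
    sel = (mask == target_object_id) if target_object_id is not None else (mask > 0)
    sel &= depth > 0
    v, u = np.nonzero(sel)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    if max_points is not None and len(pts) > max_points:
        rng = rng if rng is not None else np.random.default_rng(0)
        pts = pts[rng.choice(len(pts), max_points, replace=False)]
    return pts

# 4x4 depth in millimeters, one labeled pixel; depth_scale=0.001 converts to meters
depth = np.full((4, 4), 500, dtype=np.uint16)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1, 2] = 3
pts = depth_mask_to_points(depth, mask, fx=100.0, fy=100.0, cx=2.0, cy=2.0,
                           depth_scale=0.001, target_object_id=3)
```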

Sampling and filtering

  • collision_check: If True, runs collision filtering with the scene point cloud.
  • collision_threshold: Distance (meters) for collision filtering.
  • grasp_threshold: Minimum score threshold for grasps.
  • num_grasps: Number of grasps to sample before filtering.
  • return_topk: If True, also returns top-k grasps (server-side).
  • topk_num_grasps: Number of top grasps to keep when return_topk=True.
  • min_grasps: Minimum number of grasps to return (server-side).
  • max_tries: Max sampling attempts (server-side).
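To make the filtering parameters concrete, here is a client-side analogue of what grasp_threshold and topk_num_grasps do on the server (the server's actual ordering and tie-breaking may differ):

```python
import numpy as np

# Scores and poses as they would arrive in resp["data"]
scores = np.array([0.95, 0.40, 0.85, 0.70, 0.90])
grasps = np.tile(np.eye(4), (5, 1, 1))

keep = scores >= 0.8                      # grasp_threshold analogue
grasps_f, scores_f = grasps[keep], scores[keep]

order = np.argsort(scores_f)[::-1][:2]    # topk_num_grasps=2 analogue
top_grasps, top_scores = grasps_f[order], scores_f[order]
```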

Orientation projection

  • enable_orientation_projection: Enable orientation constraint projection.
  • approach_axis_index: Gripper-frame approach axis index (0=x, 1=y, 2=z).
  • approach_axis_source: "local_normal", "local_normal_avg", or "global_pca".
  • normal_avg_k: KNN count for averaging normals in "local_normal_avg".
  • target_approach_direction: Target approach direction (3-vector, camera frame).
  • step_strength_orient: Orientation correction strength in [0, 1].
  • use_surface_normal_for_approach: Legacy switch; affects default behavior when approach_axis_source is not set.
  • contact_proxy_offset: Offset (meters) along approach axis for contact proxy.

Translation projection

  • enable_translation_projection: Enable translation constraint projection.
  • translation_axis: PCA axis index (-1 = principal axis).
  • desired_offset: Target offset (meters) along PCA axis.
  • step_strength_trans: Translation correction strength in [0, 1].

Roll projection

  • enable_roll_projection: Enable roll projection around approach axis.
  • jaw_axis_index: Gripper-frame jaw axis index (0=x, 1=y, 2=z).
  • target_roll_direction: Target jaw direction (3-vector, camera frame) or "auto".
  • roll_target_axis: PCA axis index when target_roll_direction="auto".
  • roll_target_axis_sign: Optional sign override (+1 or -1) for the PCA axis.
  • roll_use_local_tangent: Use local tangent (2D PCA) for "auto" roll targets (server support required).
  • step_strength_roll: Roll correction strength in [0, 1].

PCA sign and debug

  • pca_axis_sign_ref: PCA sign reference ("camera_x", "roll_ref", or a 3-vector).
  • projection_debug: Enable projection debug logging on the server.

Axis convention notes

  • The rotation matrix columns represent the gripper-frame axes expressed in the camera frame.
  • approach_axis_index and jaw_axis_index refer to those gripper-frame axes.
  • When using "global_pca", some server builds allow approach_axis_index or roll_target_axis values -10/-11/-12 to select the opposite direction of PCA axis 0/1/2.
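Since the rotation columns are the gripper-frame axes in the camera frame, approach_axis_index and jaw_axis_index simply select a column of the pose's rotation block:

```python
import numpy as np

# Example grasp pose: gripper z-axis (approach) pointing along camera +x.
# Rotation columns are gripper axes expressed in the camera frame.
grasp = np.eye(4)
grasp[:3, :3] = np.array([[0.0, 0.0, 1.0],
                          [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0]])

approach_axis_index = 2   # gripper z (0=x, 1=y, 2=z)
jaw_axis_index = 0        # gripper x

approach_dir = grasp[:3, approach_axis_index]  # column 2, camera frame
jaw_dir = grasp[:3, jaw_axis_index]            # column 0, camera frame
```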

PCA sign note

PCA eigenvectors can flip sign across runs. Use pca_axis_sign_ref to keep sign consistent and avoid ambiguous translation/roll directions.
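The idea behind pca_axis_sign_ref can be sketched as flipping an eigenvector so it points along a fixed reference direction; the server's exact convention for "camera_x", "roll_ref", or a custom 3-vector may differ:

```python
import numpy as np

def fix_pca_sign(axis, ref):
    """Flip a PCA eigenvector so it has a non-negative dot product with
    a reference direction. Sketch of the pca_axis_sign_ref idea only.
    """
    axis = np.asarray(axis, dtype=np.float64)
    return axis if np.dot(axis, ref) >= 0 else -axis

# The same geometry can yield either eigenvector sign across runs;
# a fixed reference makes the result deterministic.
camera_x = np.array([1.0, 0.0, 0.0])
a = fix_pca_sign([0.9, 0.1, 0.0], camera_x)
b = fix_pca_sign([-0.9, -0.1, 0.0], camera_x)
```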

Inputs and shapes

  • depth: HxW float or uint depth image.
  • mask: HxW or HxWx1 integer mask. If target_object_id is set, only that label is used.
  • depth_scale: Multiplier applied to depth before projection.
  • object_pc, scene_pc: (N, 3) and (M, 3) float32 point clouds in camera frame.
  • grasps: (K, 4, 4) homogeneous poses, scores: (K,) in [0, 1].
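A small checker like the one below can catch shape mistakes before a round trip to the server. It is an illustrative helper, not part of the neuromeka_grasp API:

```python
import numpy as np

def check_shapes(depth, mask, object_pc, grasps, scores):
    """Assert the array shapes listed above. Illustrative helper only."""
    assert depth.ndim == 2, "depth must be (H, W)"
    assert mask.ndim in (2, 3) and mask.shape[:2] == depth.shape, \
        "mask must be (H, W) or (H, W, 1) matching depth"
    assert object_pc.ndim == 2 and object_pc.shape[1] == 3, \
        "object_pc must be (N, 3)"
    assert grasps.ndim == 3 and grasps.shape[1:] == (4, 4), \
        "grasps must be (K, 4, 4)"
    assert scores.shape == (grasps.shape[0],), "scores must be (K,)"
    return True

ok = check_shapes(
    depth=np.zeros((480, 640), np.float32),
    mask=np.zeros((480, 640), np.uint8),
    object_pc=np.zeros((100, 3), np.float32),
    grasps=np.tile(np.eye(4), (5, 1, 1)),
    scores=np.zeros(5),
)
```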

Projection parameters (summary)

  • Orientation: enable_orientation_projection, approach_axis_index, approach_axis_source (local_normal, local_normal_avg, global_pca), normal_avg_k, contact_proxy_offset
  • Translation: enable_translation_projection, translation_axis, desired_offset
  • Roll: enable_roll_projection, target_roll_direction (supports "auto"), roll_target_axis, roll_target_axis_sign, roll_use_local_tangent
  • Sign control and debug: pca_axis_sign_ref, projection_debug

For detailed parameter semantics, refer to the server README.

Notes

  • inference_from_depth_mask requires intrinsics set via init() or passed directly.
  • Server port is fixed at 5558; the client default matches this and typically should not be changed.
  • PickleClient is synchronous and blocking; set timeouts or retry logic on the server side if needed.
  • Pickle is not secure against untrusted sources. Use only in trusted environments.
