
Neuracore Client Library


Neuracore Python Client

Neuracore is a robotics and machine learning client library for robot data collection, model deployment, and real-time policy inference, with support for custom data types.

Features

  • Easy robot initialization and connection (URDF and MuJoCo MJCF support)
  • Streaming data logging with custom data types
  • Model endpoint management (local and remote)
  • Real-time policy inference and deployment
  • Flexible dataset creation and synchronization
  • Open source training infrastructure with Hydra configuration
  • Custom algorithm development and upload
  • Multi-modal data support (joint positions, velocities, RGB images, language, custom data, and more)

Installation

pip install neuracore

Note: for faster video decoding, installing ffmpeg is recommended (on Linux: sudo apt-get install ffmpeg).

For training and ML development:

pip install neuracore[ml]

For MuJoCo MJCF support:

pip install neuracore[mjcf]

Quick Start

Ensure you have an account at neuracore.app.

Authentication

import neuracore as nc

# This will save your API key locally
nc.login()

Robot Connection

# Connect to a robot with URDF
nc.connect_robot(
    robot_name="MyRobot", 
    urdf_path="/path/to/robot.urdf",
    overwrite=False  # Set to True to overwrite existing robot config
)

# Or connect using MuJoCo MJCF
nc.connect_robot(
    robot_name="MyRobot", 
    mjcf_path="/path/to/robot.xml"
)

Data Collection and Logging

Basic Data Logging

import time

# Create a dataset for recording
nc.create_dataset(
    name="My Robot Dataset",
    description="Example dataset with multiple data types"
)

# Start recording
nc.start_recording()

# Log various data types with timestamps
t = time.time()
nc.log_joint_positions("right_arm", {'joint1': 0.5, 'joint2': -0.3}, timestamp=t)
nc.log_joint_velocities("right_arm", {'joint1': 0.1, 'joint2': -0.05}, timestamp=t)
nc.log_joint_target_positions("right_arm", {'joint1': 0.6, 'joint2': -0.2}, timestamp=t)

# Log camera data
nc.log_rgb("top_camera", image_array, timestamp=t)

# Log language instructions
nc.log_language("instruction", "Pick up the red cube", timestamp=t)

# Log custom data
custom_sensor_data = [1.2, 3.4, 5.6]
nc.log_custom_data("force_sensor", custom_sensor_data, timestamp=t)

# Stop recording
nc.stop_recording()
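
The logging loop above is typically driven at a fixed frequency, with one shared timestamp per tick so all streams stay aligned. A minimal rate-keeping helper (plain Python, not part of the Neuracore API) could look like:

```python
import time

class RateKeeper:
    """Pace a loop at a fixed frequency (Hz).

    Illustrative helper, not part of the Neuracore API: it yields one
    tick per period, so every log call in that iteration can share a
    common timestamp.
    """

    def __init__(self, hz, clock=time.monotonic, sleep=time.sleep):
        self.period = 1.0 / hz
        self._clock = clock
        self._sleep = sleep
        self._next = clock()

    def wait(self):
        """Sleep until the next tick and return that tick's time."""
        now = self._clock()
        if now < self._next:
            self._sleep(self._next - now)
            now = self._next
        self._next = now + self.period
        return now
```

In a recording loop, call `keeper.wait()` once per iteration to pace the loop; note it returns monotonic time, so for the actual `timestamp=` argument you may still prefer `time.time()` as in the example above.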

Live Data Control

# Stop live data streaming (saves bandwidth, doesn't affect recording)
nc.stop_live_data(robot_name="MyRobot", instance=0)

# Resume live data streaming
nc.start_live_data(robot_name="MyRobot", instance=0)

Dataset Access and Visualization

# Load a dataset
dataset = nc.get_dataset("My Robot Dataset")

# Synchronize data types at a specific frequency
from neuracore_types import DataType

synced_dataset = dataset.synchronize(
    frequency=10,  # Hz
    data_types=[DataType.JOINT_POSITIONS, DataType.RGB_IMAGES, DataType.LANGUAGE]
)

print(f"Dataset has {len(synced_dataset)} episodes")

# Access synchronized data
for episode in synced_dataset[:5]:  # First 5 episodes
    for step in episode:
        joint_pos = step.joint_positions
        rgb_images = step.rgb_images
        language = step.language
        # Process your data
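
As a concrete sketch of per-step processing, here is a small aggregation over joint positions, run against stand-in step objects (the real steps come from the synchronized dataset, and their exact attribute layout may differ):

```python
from dataclasses import dataclass

# Stand-in for a synchronized step; real steps come from the
# dataset.synchronize(...) iterator and expose similar attributes.
@dataclass
class Step:
    joint_positions: dict

def mean_joint_positions(steps):
    """Average each joint's position over a sequence of steps."""
    totals, counts = {}, {}
    for step in steps:
        for name, value in step.joint_positions.items():
            totals[name] = totals.get(name, 0.0) + value
            counts[name] = counts.get(name, 0) + 1
    return {name: totals[name] / counts[name] for name in totals}

episode = [Step({"joint1": 0.4, "joint2": -0.2}),
           Step({"joint1": 0.6, "joint2": -0.4})]
means = mean_joint_positions(episode)  # joint1 ≈ 0.5, joint2 ≈ -0.3
```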

Model Inference

Local Model Inference

# Load a trained model locally
policy = nc.policy(train_run_name="MyTrainingJob")

# Or load from file path
# policy = nc.policy(model_file="/path/to/model.nc.zip")

# Set specific checkpoint (optional, defaults to last epoch)
policy.set_checkpoint(epoch=-1)

# Predict actions
predicted_sync_points = policy.predict(timeout=5, robot_name="MyRobot")
joint_target_positions = [sp.joint_target_positions for sp in predicted_sync_points]
actions = [jtp.numpy() for jtp in joint_target_positions if jtp is not None]
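
Not every predicted sync point necessarily carries joint targets, which is why the None filter above matters. The same extraction as a reusable function, exercised here against stand-in objects (the real sync points come from policy.predict):

```python
class FakeArray:
    """Stand-in for the array-like object whose .numpy() the snippet calls."""
    def __init__(self, values):
        self.values = values
    def numpy(self):
        return self.values

class FakeSyncPoint:
    """Stand-in for a predicted sync point."""
    def __init__(self, joint_target_positions):
        self.joint_target_positions = joint_target_positions

def extract_actions(sync_points):
    """Drop sync points without joint targets, then convert to arrays
    (the same logic as the two list comprehensions above)."""
    return [sp.joint_target_positions.numpy()
            for sp in sync_points
            if sp.joint_target_positions is not None]

points = [FakeSyncPoint(FakeArray([0.6, -0.2])), FakeSyncPoint(None)]
actions = extract_actions(points)  # the None point is skipped
```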

Remote Model Inference

# Connect to a remote endpoint
try:
    policy = nc.policy_remote_server("MyEndpointName")
    predicted_sync_points = policy.predict(timeout=5, robot_name="MyRobot")
    # Process predictions...
except nc.EndpointError:
    print("Endpoint not available. Please start it at neuracore.app/dashboard/endpoints")

Local Server Deployment

# Connect to a local policy server
policy = nc.policy_local_server(train_run_name="MyTrainingJob")

Command Line Tools

Neuracore provides several command-line utilities:

Authentication

# Interactive login to save API key
nc-login

Use the --email and --password options if you wish to log in non-interactively.

Organization Management

# Select your current organization
nc-select-org

Use the --org-name option if you wish to select the org non-interactively.

Server Operations

# Launch local policy server for inference
nc-launch-server --job_id <job_id> --org_id <org_id> [--host <host>] [--port <port>]

# Example:
nc-launch-server --job_id my_job_123 --org_id my_org_456 --host 0.0.0.0 --port 8080

Parameters:

  • --job_id: Required. The job ID to run
  • --org_id: Required. Your organization ID
  • --host: Optional. Host address (default: 0.0.0.0)
  • --port: Optional. Port number (default: 8080)

Algorithm Validation

# Validate custom algorithms before upload
neuracore-validate /path/to/your/algorithm

Open Source Training

Neuracore includes a comprehensive training infrastructure with Hydra configuration management for local model development.

Training Structure

neuracore/
  ml/
    train.py              # Main training script
    config/               # Hydra configuration files
      config.yaml         # Main configuration
      algorithm/          # Algorithm-specific configs
        diffusion_policy.yaml
        act.yaml
        simple_vla.yaml
        cnnmlp.yaml
        ...
      training/           # Training configurations
      dataset/            # Dataset configurations
    algorithms/           # Built-in algorithms
    datasets/             # Dataset implementations
    trainers/             # Distributed training utilities
    utils/                # Training utilities

Training Examples

# Basic training with Diffusion Policy
python -m neuracore.ml.train algorithm=diffusion_policy dataset_name="my_dataset"

# Train ACT with custom hyperparameters
python -m neuracore.ml.train algorithm=act algorithm.lr=5e-4 algorithm.hidden_dim=1024 dataset_name="my_dataset"

# Auto-tune batch size
python -m neuracore.ml.train algorithm=diffusion_policy batch_size=auto dataset_name="my_dataset"

# Hyperparameter sweeps
python -m neuracore.ml.train --multirun algorithm=cnnmlp algorithm.lr=1e-4,5e-4,1e-3 algorithm.hidden_dim=256,512,1024 dataset_name="my_dataset"

# Multi-modal training with images and language
python -m neuracore.ml.train algorithm=simple_vla dataset_name="my_multimodal_dataset" input_robot_data_spec='["JOINT_POSITIONS","RGB_IMAGE","LANGUAGE"]'
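
For sizing sweeps: Hydra's default basic sweeper expands --multirun value lists into their Cartesian product, so the sweep above launches 3 × 3 = 9 training runs:

```python
from itertools import product

# Swept values from the --multirun command above
lrs = [1e-4, 5e-4, 1e-3]
hidden_dims = [256, 512, 1024]

runs = list(product(lrs, hidden_dims))
assert len(runs) == 9  # one run per (lr, hidden_dim) pair
```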

Configuration Management

# config/config.yaml
defaults:
  - algorithm: diffusion_policy
  - training: default
  - dataset: default

# Core parameters
epochs: 100
batch_size: "auto"
seed: 42

# Multi-modal data support
input_robot_data_spec:
  - "JOINT_POSITIONS"
  - "RGB_IMAGE"
  - "LANGUAGE"
output_robot_data_spec:
  - "JOINT_TARGET_POSITIONS"

Training Features

  • Distributed Training: Multi-GPU support with PyTorch DDP
  • Automatic Batch Size Tuning: Find optimal batch sizes automatically
  • Memory Monitoring: Prevent OOM errors with built-in monitoring
  • TensorBoard Integration: Comprehensive logging and visualization
  • Checkpoint Management: Automatic saving and resuming
  • Cloud Integration: Seamless integration with Neuracore SaaS platform
  • Multi-modal Support: Images, joint states, language, and custom data types

Custom Algorithm Development

Create custom algorithms by extending the NeuracoreModel class:

import torch
from neuracore.ml import NeuracoreModel, BatchedInferenceSamples, BatchedTrainingSamples, BatchedTrainingOutputs
from neuracore_types import DataType, ModelInitDescription, ModelPrediction

class MyCustomAlgorithm(NeuracoreModel):
    def __init__(self, model_init_description: ModelInitDescription, **kwargs):
        super().__init__(model_init_description)
        # Your model initialization here
        
    def forward(self, batch: BatchedInferenceSamples) -> ModelPrediction:
        # Your inference logic
        pass
        
    def training_step(self, batch: BatchedTrainingSamples) -> BatchedTrainingOutputs:
        # Your training logic
        pass
        
    def configure_optimizers(self) -> list[torch.optim.Optimizer]:
        # Return list of optimizers
        pass
        
    @staticmethod
    def get_supported_input_data_types() -> list[DataType]:
        return [DataType.JOINT_POSITIONS, DataType.RGB_IMAGES]
        
    @staticmethod
    def get_supported_output_data_types() -> list[DataType]:
        return [DataType.JOINT_TARGET_POSITIONS]
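
The two get_supported_* hooks let the framework match algorithms to datasets. A minimal sketch of the kind of compatibility check this enables (illustrative only, using plain strings rather than the real DataType enum):

```python
def check_data_type_compat(requested, supported):
    """Raise if any requested modality is outside the supported set."""
    missing = set(requested) - set(supported)
    if missing:
        raise ValueError(f"Unsupported data types: {sorted(missing)}")

# A dataset with joint positions only is fine for this algorithm:
check_data_type_compat(["JOINT_POSITIONS"], ["JOINT_POSITIONS", "RGB_IMAGES"])
```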

Algorithm Upload Options

  1. Open Source Contribution: Submit a PR to the Neuracore repository
  2. Private Upload: Upload directly at neuracore.app
    • Single Python file with your NeuracoreModel class
    • ZIP file containing your algorithm directory with requirements.txt

Environment Variables

Configure Neuracore's behavior with environment variables (names are case-insensitive; all but TMPDIR are prefixed with NEURACORE_):

  • NEURACORE_REMOTE_RECORDING_TRIGGER_ENABLED: allow remote recording triggers (true/false; default: true)
  • NEURACORE_PROVIDE_LIVE_DATA: enable live data streaming from this node (true/false; default: true)
  • NEURACORE_CONSUME_LIVE_DATA: enable live data consumption for inference (true/false; default: true)
  • NEURACORE_API_URL: base URL for the Neuracore platform (a URL string; default: https://api.neuracore.app/api)
  • NEURACORE_API_KEY: overrides the API key used to access Neuracore (nrc_XXXX; defaults to the key saved by nc-login)
  • NEURACORE_ORG_ID: overrides the organization to use (a valid UUID; defaults to the organization selected with nc-select-org)
  • TMPDIR: directory used for storing temporary files (a file path; defaults to an appropriate folder for your system)
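
A sketch of how such true/false variables are typically read; this is illustrative plain Python, not the client's actual parsing code:

```python
import os

def env_flag(name, default=True):
    """Read a true/false environment variable, case-insensitively."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() == "true"

os.environ["NEURACORE_PROVIDE_LIVE_DATA"] = "FALSE"
assert env_flag("NEURACORE_PROVIDE_LIVE_DATA") is False

os.environ.pop("NEURACORE_CONSUME_LIVE_DATA", None)  # ensure unset
assert env_flag("NEURACORE_CONSUME_LIVE_DATA") is True  # falls back to default
```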

Performance Considerations

Bandwidth Optimization

  • Use appropriate camera resolutions
  • Log only necessary joint states
  • Maintain consistent joint combinations (max 50 concurrent streams)
  • Consider hardware-accelerated H.264 encoding for video

Processing Optimization

  • Enable hardware acceleration for video encoding
  • Limit simultaneous dashboard viewers during recording
  • Distribute data collection across multiple machines when needed
  • Use nc.stop_live_data() when live monitoring isn't required

Documentation

Development Setup

git clone https://github.com/neuracoreai/neuracore
cd neuracore
pip install -e .[dev,ml]

Testing

export NEURACORE_API_URL=http://localhost:8000/api
pytest tests/

If testing on Mac, you may need to set:

export PYTORCH_ENABLE_MPS_FALLBACK=1

Contributing

We welcome contributions! Please see our contributing guidelines and submit pull requests for:

  • New algorithms and models
  • Performance improvements
  • Documentation enhancements
  • Bug fixes and feature requests

Project details


Release history

This version

7.8.0

Download files

Download the file for your platform.

Source Distribution

neuracore-7.8.0.tar.gz (184.8 kB)


Built Distribution


neuracore-7.8.0-py3-none-any.whl (235.4 kB)


File details

Details for the file neuracore-7.8.0.tar.gz.

File metadata

  • Download URL: neuracore-7.8.0.tar.gz
  • Upload date:
  • Size: 184.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for neuracore-7.8.0.tar.gz:

  • SHA256: 8797f216817bc7ef67483feff1d08eb8b2283e16932f78943829051ae3a4c2d4
  • MD5: e1a60836c26a5bb6794a46ae1a423b81
  • BLAKE2b-256: 955639743fedf6e1a020f05130faede616b1e3e440be3eaa1a639b1870106f88


File details

Details for the file neuracore-7.8.0-py3-none-any.whl.

File metadata

  • Download URL: neuracore-7.8.0-py3-none-any.whl
  • Upload date:
  • Size: 235.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for neuracore-7.8.0-py3-none-any.whl:

  • SHA256: 19372dd4a0fd7787b61c8604fc59b17b2c68a7245469e8fa0dab04655ea1d509
  • MD5: 424e28c0892b6641fde15ea1b26a1064
  • BLAKE2b-256: 76d3a447d6a0924d0805160f34214a154eee15e7d33e5aeb178e4900c6b1873f

