
Amesa Inference

A lightweight inference package for running Amesa agents using ONNX models without Ray or PyTorch dependencies.

Overview

composabl_inference provides a standalone inference engine for running trained Amesa agents. It uses ONNX Runtime for model inference, making it suitable for deployment scenarios where you want to avoid heavy dependencies like Ray and PyTorch.

Features

  • ONNX-based inference: Uses ONNX Runtime for efficient model inference
  • No Ray or PyTorch dependencies: Lightweight package suitable for production deployment
  • Network management: Supports both local and remote objects (skills, perceptors, controllers)
  • Compatible API: Similar interface to Trainer.package() for easy migration

Installation

pip install amesa-inference

Usage

Basic Inference

from composabl_core import Agent
from composabl_inference import InferenceEngine

# Create the inference engine (the license key is required for validation)
engine = InferenceEngine(license="your-license-key")

# Load a trained agent (the await calls assume an async context, e.g. asyncio.run)
agent = Agent.load("path/to/agent")
await engine.load_agent(agent)

# Package the agent for inference (similar to Trainer.package())
await engine.package()

# Run inference
observation = {...}  # Your observation from the simulator
action = engine.execute(observation)

With Remote Objects

The inference engine supports remote skills, perceptors, and controllers, just like the Trainer:

from composabl_inference import InferenceEngine

# Optional: provide custom config for NetworkMgr (e.g., for remote targets)
config = {
    "target": {
        "local": {
            "address": "localhost:1337",
        },
    },
}

engine = InferenceEngine(license="your-license-key", config=config)
await engine.load_agent("path/to/agent")
await engine.package()

# The skill processor will automatically handle remote objects
action = engine.execute(observation)

Cleanup

# Clean up resources
await engine.close()
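Taken together, the snippets above form an async lifecycle: load, package, execute, close. A minimal sketch of that call order, with cleanup guaranteed by try/finally (the Engine class below is a hypothetical stand-in for InferenceEngine, used only so the pattern is runnable without the package installed):

```python
import asyncio


class Engine:
    """Hypothetical stand-in for InferenceEngine; illustrates call order only."""

    async def load_agent(self, path):
        self.path = path

    async def package(self):
        self.ready = True

    def execute(self, observation):
        # A real engine would run the ONNX model here.
        return {"action": 0}

    async def close(self):
        self.ready = False


async def main():
    engine = Engine()
    try:
        await engine.load_agent("path/to/agent")
        await engine.package()
        action = engine.execute({"sensor": 1.0})
        print(action)
    finally:
        # Always release resources, even if inference raises.
        await engine.close()


asyncio.run(main())
```

Wrapping the calls in try/finally ensures `close()` runs even when `execute()` fails mid-episode.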

Architecture

Components

  1. InferenceEngine: Main entry point for inference operations
  2. NetworkMgr: Manages network connections (non-Ray version)
  3. ONNXInferenceEngine: Handles ONNX model loading and inference
  4. ONNXSkillProcessor: Processes skills using ONNX models instead of PyTorch
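ONNX models consume fixed-shape float32 tensors rather than Python dicts, so a skill processor along these lines has to flatten each observation in a stable key order before inference. A sketch of that kind of preprocessing (the helper name and keys are hypothetical, not part of the package's API):

```python
import numpy as np


def obs_to_tensor(observation, keys):
    # Flatten a dict observation into a (1, n) float32 batch,
    # using a fixed key order so the layout matches the model's input.
    return np.asarray([observation[k] for k in keys], dtype=np.float32)[None, :]


obs = {"temperature": 21.5, "pressure": 1.0}
x = obs_to_tensor(obs, keys=["temperature", "pressure"])
print(x.shape, x.dtype)  # (1, 2) float32
```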

Differences from Trainer

  • Uses ONNX Runtime instead of PyTorch for model inference
  • NetworkMgr is not a Ray actor (runs in the same process)
  • No Ray initialization required
  • Lighter weight, suitable for production deployment

Requirements

  • Python >= 3.10
  • composabl-core
  • composabl-api
  • onnxruntime
  • numpy

License

Proprietary and confidential - Copyright (C) Amesa, Inc

Download files

Download the file for your platform.

Source Distribution

amesa_inference-0.1.0.tar.gz (855.7 kB)

Uploaded: Source

Built Distribution


amesa_inference-0.1.0-cp311-cp311-macosx_15_0_arm64.whl (395.9 kB)

Uploaded: CPython 3.11, macOS 15.0+ ARM64

File details

Details for the file amesa_inference-0.1.0.tar.gz.

File metadata

  • Download URL: amesa_inference-0.1.0.tar.gz
  • Upload date:
  • Size: 855.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.8

File hashes

Hashes for amesa_inference-0.1.0.tar.gz

  • SHA256: 8db11b0bba2446f702cd5ec8469bb555dbf1fe5032069567aa48463bb6611f8d
  • MD5: 2eea697871d4596edf549d2b0010abab
  • BLAKE2b-256: 1175b9f9f26fa4eb79b24f0902cde7af43a26fd2df6193ebe8d0277da4feee6f


File details

Details for the file amesa_inference-0.1.0-cp311-cp311-macosx_15_0_arm64.whl.


File hashes

Hashes for amesa_inference-0.1.0-cp311-cp311-macosx_15_0_arm64.whl

  • SHA256: 5a1b54f8bd947dab4f61c6d2821fa5a6676e44eeae4b5454a42123d95389bda9
  • MD5: 513b0ba948799889089a3317e943d8a2
  • BLAKE2b-256: 90aeeff15ee30f591b7e64d66b3ecf016f218530c163cf6cc970e6e9855a6ef4

