Agent inference package using ONNX models without Ray or PyTorch dependencies

Project description

Amesa Inference

A lightweight inference package for running Amesa agents using ONNX models without Ray or PyTorch dependencies.

Overview

composabl_inference provides a standalone inference engine for running trained Amesa agents. It uses ONNX Runtime for model inference, making it suitable for deployment scenarios where you want to avoid heavy dependencies like Ray and PyTorch.

Features

  • ONNX-based inference: Uses ONNX Runtime for efficient model inference
  • No Ray or PyTorch dependencies: Lightweight package suitable for production deployment
  • Network management: Supports both local and remote objects (skills, perceptors, controllers)
  • Compatible API: Similar interface to Trainer.package() for easy migration

Installation

pip install amesa-inference

Usage

Basic Inference

from composabl_inference import InferenceEngine
from composabl_core import Agent

# Create the inference engine (only a license key is needed, for validation)
engine = InferenceEngine(license="your-license-key")

# Load agent
agent = Agent.load("path/to/agent")
await engine.load_agent(agent)

# Package agent for inference (similar to Trainer.package())
await engine.package()

# Run inference
observation = {...}  # Your observation from the simulator
action = engine.execute(observation)

With Remote Objects

The inference engine supports remote skills, perceptors, and controllers, just like the Trainer:

from composabl_inference import InferenceEngine

# Optional: provide custom config for NetworkMgr (e.g., for remote targets)
config = {
    "target": {
        "local": {
            "address": "localhost:1337",
        },
    },
}

engine = InferenceEngine(license="your-license-key", config=config)
await engine.load_agent("path/to/agent")
await engine.package()

# The skill processor will automatically handle remote objects
action = engine.execute(observation)

Cleanup

# Clean up resources
await engine.close()
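The snippets above use await, so they must run inside an event loop. A minimal harness shows the intended call order (load, package, execute, close); the engine is replaced here by a stand-in class so the sketch stays self-contained, whereas in real use you would import InferenceEngine from composabl_inference as shown earlier:

```python
import asyncio

# Stand-in engine so this lifecycle sketch runs without the package installed;
# substitute composabl_inference.InferenceEngine in real use.
class StubEngine:
    async def load_agent(self, agent):
        self.agent = agent

    async def package(self):
        self.packaged = True

    def execute(self, observation):
        # A real engine would return the agent's action for this observation.
        return {"action": 0}

    async def close(self):
        self.packaged = False

async def main():
    engine = StubEngine()
    await engine.load_agent("path/to/agent")  # 1. load the agent
    await engine.package()                    # 2. package for inference
    action = engine.execute({"sensor": 1.0})  # 3. run inference
    await engine.close()                      # 4. release resources
    return action

print(asyncio.run(main()))  # {'action': 0}
```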

Architecture

Components

  1. InferenceEngine: Main entry point for inference operations
  2. NetworkMgr: Manages network connections (non-Ray version)
  3. ONNXInferenceEngine: Handles ONNX model loading and inference
  4. ONNXSkillProcessor: Processes skills using ONNX models instead of PyTorch

Differences from Trainer

  • Uses ONNX Runtime instead of PyTorch for model inference
  • NetworkMgr is not a Ray actor (runs in the same process)
  • No Ray initialization required
  • Lighter weight, suitable for production deployment

Requirements

  • Python >= 3.10
  • composabl-core
  • composabl-api
  • onnxruntime
  • numpy

License

Proprietary and confidential - Copyright (C) Amesa, Inc.

Project details


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_11_0_arm64.whl (398.3 kB)

Uploaded: CPython 3.10, macOS 11.0+, ARM64

amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_10_9_x86_64.whl (404.1 kB)

Uploaded: CPython 3.10, macOS 10.9+, x86-64

amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_10_9_universal2.whl (799.1 kB)

Uploaded: CPython 3.10, macOS 10.9+, universal2 (ARM64, x86-64)

File details

Details for the file amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 8c7f12b0bcaa55a9089b73cdb43a2e3239d94f4fd23bd4140cb7031dc7181537
MD5 b659d28b6368f32833c6b3a0471d31b2
BLAKE2b-256 9d17c7406021fbc5d2bccdb8e900e7add4ddead656b6ae3fe129766737e8f4a3

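To check a downloaded wheel against the digests above, you can compute its SHA256 locally with the standard library (the file path below is a placeholder):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Stream the file in chunks and return its hex SHA256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published digest (placeholder file path):
expected = "8c7f12b0bcaa55a9089b73cdb43a2e3239d94f4fd23bd4140cb7031dc7181537"
# assert sha256_of_file("amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_11_0_arm64.whl") == expected
```

Streaming in fixed-size chunks keeps memory use constant regardless of wheel size.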

Provenance

The following attestation bundles were made for amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_11_0_arm64.whl:

Publisher: build-and-publish-package-dev.yaml on Composabl/sdk.composabl.ai

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_10_9_x86_64.whl.

File metadata

File hashes

Hashes for amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_10_9_x86_64.whl
Algorithm Hash digest
SHA256 9e56a2b4e5ac259549ed6793113cb4c1ab5669ff0c816450b382926368e9e2de
MD5 e073b6ed9656a80aa3a5ce89855c9249
BLAKE2b-256 fb5977863870647882c14672ccfd004119c43ecf5ac7fff9f5160a5e4d98271a


Provenance

The following attestation bundles were made for amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_10_9_x86_64.whl:

Publisher: build-and-publish-package-dev.yaml on Composabl/sdk.composabl.ai


File details

Details for the file amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_10_9_universal2.whl.

File metadata

File hashes

Hashes for amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_10_9_universal2.whl
Algorithm Hash digest
SHA256 acfcb6f8e2169ac5e22c774d68bad2ea41137167a3a13d1bb24c3a4069921ee8
MD5 b23dbcab49aa15d8195fc58615a6d1b0
BLAKE2b-256 848cd340d504b9f7e641a0d484a43ca9188e600eda520b6e7e42059d3c57bb8e


Provenance

The following attestation bundles were made for amesa_inference_dev-0.20.7.dev2-cp310-cp310-macosx_10_9_universal2.whl:

Publisher: build-and-publish-package-dev.yaml on Composabl/sdk.composabl.ai

