
OmniBioAI Tool Execution Service (TES)

omnibioai-tool-exec is a standalone Tool Execution Service (TES) that enables secure, validated, and reproducible execution of bioinformatics tools across heterogeneous compute environments (local, HPC, cloud, or remote servers).

It is designed for agentic AI systems (such as OmniBioAI) in which an LLM interprets user intent but never executes tools directly. Instead, all execution is delegated to TES via a strict API contract.


Why TES exists

LLMs are good at understanding intent, but bad at safely running tools.

TES solves this by acting as a control plane between AI agents and real compute:

  • Validates tool inputs and resource requests
  • Matches tools to compatible execution environments
  • Executes tools in a controlled adapter layer
  • Tracks run lifecycle (submitted → running → completed/failed)
  • Returns structured, machine-readable results

This prevents:

  • Arbitrary command execution
  • Hallucinated tool outputs
  • Tight coupling between UI, AI, and infrastructure

Core Concepts

Tool

A tool is a declarative definition of:

  • Inputs schema (JSON Schema)
  • Outputs schema
  • Requirements (e.g. BLAST database, reference genome)

Example:

tool_id: blastn
inputs_schema:
  required: [sequence, database]
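A fuller definition might look like the sketch below. The outputs_schema and requirements sections follow the bullet list above, but the exact key names and nesting are assumptions of this sketch, not the shipped schema:

```yaml
- tool_id: blastn
  display_name: BLASTN
  inputs_schema:            # JSON Schema describing run inputs
    type: object
    required: [sequence, database]
    properties:
      sequence:
        type: string        # FASTA-formatted query
      database:
        type: string        # must match a database some server advertises
  outputs_schema:
    type: object
    properties:
      hits:
        type: array
  requirements:
    - kind: blast_database  # resolved against server capabilities at routing time
```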

Server

A server represents an execution environment:

  • Local machine
  • HPC cluster
  • Kubernetes
  • Remote API-backed service

Each server advertises:

  • Which tools it supports
  • Available resources
  • Storage constraints
  • Runtime policies

Adapter

An adapter translates a validated run request into a real execution:

  • Local process
  • Container
  • Job scheduler
  • HTTP call

Adapters are pluggable and isolated from the API layer.
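As a sketch of what "pluggable and isolated" means in practice, the following models the adapter boundary in Python. The class and method names here are illustrative assumptions, not the actual omnibioai-tool-exec interfaces:

```python
from dataclasses import dataclass, field
from typing import Any, Dict

# Illustrative sketch only: RunRequest, Adapter, and LocalDemoAdapter are
# assumed names, not the real omnibioai-tool-exec classes.

@dataclass
class RunRequest:
    tool_id: str
    inputs: Dict[str, Any]
    resources: Dict[str, Any] = field(default_factory=dict)

class Adapter:
    """Translates a validated run request into a real execution."""

    def execute(self, request: RunRequest) -> Dict[str, Any]:
        raise NotImplementedError

class LocalDemoAdapter(Adapter):
    """Mimics the bundled demo adapter: canned results, no real compute."""

    def execute(self, request: RunRequest) -> Dict[str, Any]:
        return {"state": "COMPLETED", "tool_id": request.tool_id,
                "results": {"hits": []}}

# The API layer depends only on the Adapter interface, so a scheduler- or
# container-backed adapter can replace LocalDemoAdapter without API changes.
outcome = LocalDemoAdapter().execute(
    RunRequest("blastn", {"sequence": ">q\nACGTACGT", "database": "ecoli_demo"}))
```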


Architecture Overview

User / Chat UI
      |
      v
LLM / Agent (intent only)
      |
      v
omnibioai-tool-exec (TES)
  - validate
  - route
  - execute
      |
      v
Execution Environment
(local / HPC / cloud)

Key rule: LLMs never execute tools. TES does.


API Overview

TES exposes a REST API (FastAPI-based, with a generated OpenAPI schema).

Discovery

  • GET /api/tools – list available tools
  • GET /api/servers – list available servers and capabilities

Execution

  • POST /api/runs/validate – validate tool + inputs + resources
  • POST /api/runs/submit – submit a run
  • GET /api/runs/{run_id} – get run status
  • GET /api/runs/{run_id}/results – fetch results when ready

Example: BLAST run lifecycle

1. Validate

POST /api/runs/validate
{
  "tool_id": "blastn",
  "inputs": {
    "sequence": ">q\nACGTACGT",
    "database": "ecoli_demo"
  },
  "resources": { "cpu": 2, "ram_gb": 2 }
}

2. Submit

POST /api/runs/submit
→ { "run_id": "run_abc123" }

3. Poll

GET /api/runs/run_abc123
→ state: RUNNING | COMPLETED | FAILED

4. Fetch results

GET /api/runs/run_abc123/results
→ structured JSON results
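The four steps above can be sketched as a small client. The endpoint paths are taken from this README; the injected `http(method, path, body)` transport (so the loop can run against a real HTTP library or a test stub) is an assumption of this sketch:

```python
from typing import Any, Callable, Dict, Optional

# Sketch of the validate -> submit -> poll -> fetch lifecycle. Endpoint
# paths match the README; the injected transport is an assumption.
Http = Callable[[str, str, Optional[Dict[str, Any]]], Dict[str, Any]]

def run_blastn(http: Http, inputs: Dict[str, Any],
               max_polls: int = 20) -> Dict[str, Any]:
    payload = {"tool_id": "blastn", "inputs": inputs,
               "resources": {"cpu": 2, "ram_gb": 2}}
    http("POST", "/api/runs/validate", payload)   # reject bad input early
    run_id = http("POST", "/api/runs/submit", payload)["run_id"]
    for _ in range(max_polls):                    # real code would sleep between polls
        state = http("GET", f"/api/runs/{run_id}", None)["state"]
        if state in ("COMPLETED", "FAILED"):
            break
    return http("GET", f"/api/runs/{run_id}/results", None)

# Exercising the flow against a stub transport standing in for a live TES:
def stub(method: str, path: str, body: Optional[Dict[str, Any]]) -> Dict[str, Any]:
    if path.endswith("/validate"):
        return {"valid": True}
    if path.endswith("/submit"):
        return {"run_id": "run_abc123"}
    if path.endswith("/results"):
        return {"hits": []}
    return {"state": "COMPLETED"}

results = run_blastn(stub, {"sequence": ">q\nACGTACGT", "database": "ecoli_demo"})
```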

Demo Adapter (Current)

The default configuration includes a local demo adapter that returns mock BLAST results.

This is intentional:

  • It validates the entire execution pipeline
  • It allows frontend and agent development without heavy dependencies
  • It can be replaced with a real backend without changing the API

Different inputs currently return the same demo output. Replace the adapter to enable real BLAST execution.


Running the Service

Requirements

  • Python ≥ 3.11
  • pip or pipx

Install (editable)

pip install -e .

Run

omnibioai-tes serve \
  --host 127.0.0.1 \
  --port 8080 \
  --tools configs/tools.example.yaml \
  --servers configs/servers.example.yaml

API Docs

Open:

http://127.0.0.1:8080/docs

Configuration

Tools

Defined via YAML:

- tool_id: blastn
  display_name: BLASTN
  inputs_schema:
    required: [sequence, database]

Servers

Defined via YAML:

- server_id: local_demo
  adapter_type: local
  capabilities:
    tools:
      - tool_id: blastn
        features:
          databases: [ecoli_demo]

Integration with OmniBioAI

TES is designed to be used server-to-server.

Typical flow:

  1. User chats with OmniBioAI
  2. LLM identifies a tool intent (e.g. blast:)
  3. Django backend calls TES
  4. TES executes and returns results
  5. Results are rendered in chat UI

No browser-to-TES calls are required.
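The backend-side call in step 3 can be as small as the stdlib sketch below. The host, port, and helper name are assumptions; the request is built but deliberately not sent here:

```python
import json
import urllib.request

# Hypothetical helper a Django view might use to delegate execution to TES
# server-to-server. The URL and payload shape follow the README; the
# function name and host are assumptions of this sketch.
TES_URL = "http://127.0.0.1:8080"

def build_submit_request(tool_id, inputs, resources):
    """Build (but do not send) the POST /api/runs/submit request."""
    body = json.dumps({"tool_id": tool_id, "inputs": inputs,
                       "resources": resources}).encode()
    return urllib.request.Request(
        f"{TES_URL}/api/runs/submit", data=body,
        headers={"Content-Type": "application/json"}, method="POST")

req = build_submit_request(
    "blastn",
    {"sequence": ">q\nACGTACGT", "database": "ecoli_demo"},
    {"cpu": 2, "ram_gb": 2})
# In a view: resp = urllib.request.urlopen(req); run_id = json.load(resp)["run_id"]
```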


What TES is not

  • ❌ A workflow engine (Nextflow/WDL belong elsewhere)
  • ❌ A UI
  • ❌ An LLM executor
  • ❌ A database

TES is a secure execution boundary.


Roadmap

  • Real BLAST+ adapters (local / container / HPC)
  • OMIM / annotation tools
  • Async callbacks & long-running job tracking
  • Auth / multi-tenant policies
  • Artifact storage backends (S3, GCS, POSIX)

License

Apache 2.0


