
OmniBioAI Tool Execution Service (TES)

omnibioai-tool-exec is a standalone Tool Execution Service (TES) that enables secure, validated, and reproducible execution of bioinformatics tools across heterogeneous compute environments (local, HPC, cloud, or remote servers).

It is designed to be used by agentic AI systems (such as OmniBioAI) where an LLM interprets user intent, but never executes tools directly. Instead, all execution is delegated to TES via a strict API contract.


Why TES exists

LLMs are good at understanding intent, but bad at safely running tools.

TES solves this by acting as a control plane between AI agents and real compute:

  • Validates tool inputs and resource requests
  • Matches tools to compatible execution environments
  • Executes tools in a controlled adapter layer
  • Tracks run lifecycle (submitted → running → completed/failed)
  • Returns structured, machine-readable results
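
The tracked lifecycle can be sketched as a tiny state machine. This is illustrative only: the README shows RUNNING, COMPLETED, and FAILED states and a submitted → running → completed/failed flow, but the exact state names and transition set in TES are assumptions here.

```python
from enum import Enum

class RunState(str, Enum):
    SUBMITTED = "SUBMITTED"
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"

# Legal transitions for the submitted -> running -> completed/failed lifecycle.
TRANSITIONS = {
    RunState.SUBMITTED: {RunState.RUNNING, RunState.FAILED},
    RunState.RUNNING: {RunState.COMPLETED, RunState.FAILED},
    RunState.COMPLETED: set(),  # terminal
    RunState.FAILED: set(),     # terminal
}

def advance(current: RunState, target: RunState) -> RunState:
    """Move a run to `target`, rejecting illegal jumps (e.g. SUBMITTED -> COMPLETED)."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    return target
```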

This prevents:

  • Arbitrary command execution
  • Hallucinated tool outputs
  • Tight coupling between UI, AI, and infrastructure

Core Concepts

Tool

A tool is a declarative definition of:

  • Inputs schema (JSON Schema)
  • Outputs schema
  • Requirements (e.g. BLAST database, reference genome)

Example:

tool_id: blastn
inputs_schema:
  required: [sequence, database]
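
As a sketch of what input validation against this schema fragment looks like: the real TES validator presumably evaluates full JSON Schema, but the minimal check below enforces only the `required` list shown above (the helper name `missing_inputs` is illustrative, not part of the TES API).

```python
# Required-field check against the blastn inputs_schema fragment above.
BLASTN_SCHEMA = {"required": ["sequence", "database"]}

def missing_inputs(schema: dict, inputs: dict) -> list[str]:
    """Return the names of required inputs absent from `inputs`."""
    return [name for name in schema.get("required", []) if name not in inputs]
```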

Server

A server represents an execution environment:

  • Local machine
  • HPC cluster
  • Kubernetes
  • Remote API-backed service

Each server advertises:

  • Which tools it supports
  • Available resources
  • Storage constraints
  • Runtime policies

Adapter

An adapter translates a validated run request into a real execution:

  • Local process
  • Container
  • Job scheduler
  • HTTP call

Adapters are pluggable and isolated from the API layer.
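
A pluggable adapter layer like this is commonly expressed as an abstract interface. The sketch below is an assumption for illustration; the README does not show the actual TES adapter contract, so the `submit` method name and the request/result dict shapes are hypothetical.

```python
from abc import ABC, abstractmethod

class Adapter(ABC):
    """Illustrative adapter contract: turn a validated run request into an execution."""

    @abstractmethod
    def submit(self, run_request: dict) -> dict:
        """Execute the request and return a structured, machine-readable result."""

class LocalDemoAdapter(Adapter):
    """Mirrors the demo adapter's behaviour: every input yields the same mock output."""

    def submit(self, run_request: dict) -> dict:
        return {"tool_id": run_request["tool_id"], "hits": [], "demo": True}
```

Because the API layer only sees the `Adapter` interface, swapping the demo adapter for a container or scheduler backend leaves the REST contract untouched.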


Architecture Overview

User / Chat UI
      |
      v
LLM / Agent (intent only)
      |
      v
omnibioai-tool-exec (TES)
  - validate
  - route
  - execute
      |
      v
Execution Environment
(local / HPC / cloud)

Key rule: LLMs never execute tools. TES does.


API Overview

TES exposes a REST API built with FastAPI, with an auto-generated OpenAPI schema.

Discovery

  • GET /api/tools – list available tools
  • GET /api/servers – list available servers and capabilities

Execution

  • POST /api/runs/validate – validate tool + inputs + resources
  • POST /api/runs/submit – submit a run
  • GET /api/runs/{run_id} – get run status
  • GET /api/runs/{run_id}/results – fetch results when ready

Example: BLAST run lifecycle

1. Validate

POST /api/runs/validate
{
  "tool_id": "blastn",
  "inputs": {
    "sequence": ">q\nACGTACGT",
    "database": "ecoli_demo"
  },
  "resources": { "cpu": 2, "ram_gb": 2 }
}

2. Submit

POST /api/runs/submit
→ { "run_id": "run_abc123" }

3. Poll

GET /api/runs/run_abc123
→ state: RUNNING | COMPLETED | FAILED

4. Fetch results

GET /api/runs/run_abc123/results
→ structured JSON results
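
The four steps above can be driven from a small stdlib-only client. This is a sketch under assumptions: the `run_id` and `state` field names are taken from the examples above, and the submit payload is assumed to match the validate payload.

```python
import json
import time
import urllib.request

def _post(url: str, payload: dict) -> dict:
    """POST a JSON payload and decode the JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_for_run(get_status, run_id: str, poll_s: float = 1.0, max_polls: int = 60) -> str:
    """Poll `get_status(run_id)` until a terminal state; return the final state."""
    for _ in range(max_polls):
        state = get_status(run_id)
        if state in ("COMPLETED", "FAILED"):
            return state
        time.sleep(poll_s)
    raise TimeoutError(f"run {run_id} did not finish after {max_polls} polls")

def run_blastn(base: str = "http://127.0.0.1:8080") -> dict:
    """Validate, submit, poll, and fetch results for the example BLAST run."""
    payload = {
        "tool_id": "blastn",
        "inputs": {"sequence": ">q\nACGTACGT", "database": "ecoli_demo"},
        "resources": {"cpu": 2, "ram_gb": 2},
    }
    _post(f"{base}/api/runs/validate", payload)                    # 1. validate
    run_id = _post(f"{base}/api/runs/submit", payload)["run_id"]   # 2. submit

    def get_status(rid: str) -> str:                               # 3. poll
        with urllib.request.urlopen(f"{base}/api/runs/{rid}") as resp:
            return json.load(resp)["state"]

    wait_for_run(get_status, run_id)
    with urllib.request.urlopen(f"{base}/api/runs/{run_id}/results") as resp:
        return json.load(resp)                                     # 4. results
```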

Demo Adapter (Current)

The default configuration includes a local demo adapter that returns mock BLAST results.

This is intentional:

  • It validates the entire execution pipeline
  • It allows frontend and agent development without heavy dependencies
  • It can be replaced with a real backend without changing the API

Different inputs currently return the same demo output. Replace the adapter to enable real BLAST execution.


Running the Service

Requirements

  • Python ≥ 3.11
  • pip or pipx

Install (editable)

pip install -e .

Run

omnibioai-tes serve \
  --host 127.0.0.1 \
  --port 8080 \
  --tools configs/tools.example.yaml \
  --servers configs/servers.example.yaml

API Docs

Open:

http://127.0.0.1:8080/docs

Configuration

Tools

Defined via YAML:

- tool_id: blastn
  display_name: BLASTN
  inputs_schema:
    required: [sequence, database]

Servers

Defined via YAML:

- server_id: local_demo
  adapter_type: local
  capabilities:
    tools:
      - tool_id: blastn
        features:
          databases: [ecoli_demo]

Integration with OmniBioAI

TES is designed to be used server-to-server.

Typical flow:

  1. User chats with OmniBioAI
  2. LLM identifies a tool intent (blast:)
  3. Django backend calls TES
  4. TES executes and returns results
  5. Results are rendered in chat UI

No browser-to-TES calls are required.


What TES is not

  • ❌ A workflow engine (Nextflow/WDL belong elsewhere)
  • ❌ A UI
  • ❌ An LLM executor
  • ❌ A database

TES is a secure execution boundary.


Roadmap

  • Real BLAST+ adapters (local / container / HPC)
  • OMIM / annotation tools
  • Async callbacks & long-running job tracking
  • Auth / multi-tenant policies
  • Artifact storage backends (S3, GCS, POSIX)

License

Apache 2.0 (or your chosen license)
