OmniBioAI Tool Execution Service (TES)

omnibioai-tool-exec is a standalone Tool Execution Service (TES) that enables secure, validated, and reproducible execution of bioinformatics tools across heterogeneous compute environments (local, HPC, cloud, or remote servers).

It is designed to be used by agentic AI systems (such as OmniBioAI) where an LLM interprets user intent, but never executes tools directly. Instead, all execution is delegated to TES via a strict API contract.


Why TES exists

LLMs are good at understanding intent, but bad at safely running tools.

TES solves this by acting as a control plane between AI agents and real compute:

  • Validates tool inputs and resource requests
  • Matches tools to compatible execution environments
  • Executes tools in a controlled adapter layer
  • Tracks run lifecycle (submitted → running → completed/failed)
  • Returns structured, machine-readable results

This prevents:

  • Arbitrary command execution
  • Hallucinated tool outputs
  • Tight coupling between UI, AI, and infrastructure
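
The run-lifecycle tracking above can be sketched as a tiny state machine. The state names follow the submitted → running → completed/failed wording; TES's internal representation may differ, so this is illustrative only:

```python
from enum import Enum

# Illustrative run-lifecycle state machine (state names follow the
# submitted -> running -> completed/failed description; not necessarily
# the exact internal TES representation).

class RunState(Enum):
    SUBMITTED = "SUBMITTED"
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"

# Legal transitions: COMPLETED and FAILED are terminal.
ALLOWED = {
    RunState.SUBMITTED: {RunState.RUNNING, RunState.FAILED},
    RunState.RUNNING: {RunState.COMPLETED, RunState.FAILED},
}

def can_transition(current: RunState, target: RunState) -> bool:
    """Return True if the lifecycle permits current -> target."""
    return target in ALLOWED.get(current, set())
```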

Core Concepts

Tool

A tool is a declarative definition of:

  • Inputs schema (JSON Schema)
  • Outputs schema
  • Requirements (e.g. BLAST database, reference genome)

Example:

tool_id: blastn
inputs_schema:
  required: [sequence, database]
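
Validation is driven by this schema. TES validates against full JSON Schema; the simplified sketch below checks only the `required` list, which is enough to show the contract (function name is illustrative, not the actual TES code):

```python
# Minimal sketch of schema-driven input validation. Real TES uses full
# JSON Schema; this version only enforces `required` fields.

def validate_inputs(inputs_schema: dict, inputs: dict) -> list[str]:
    """Return validation errors; an empty list means the inputs pass."""
    return [f"missing required input: {field}"
            for field in inputs_schema.get("required", [])
            if field not in inputs]

blastn_schema = {"required": ["sequence", "database"]}
errors = validate_inputs(blastn_schema, {"sequence": ">q\nACGTACGT"})
# errors == ['missing required input: database']
```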

Server

A server represents an execution environment:

  • Local machine
  • HPC cluster
  • Kubernetes
  • Remote API-backed service

Each server advertises:

  • Which tools it supports
  • Available resources
  • Storage constraints
  • Runtime policies

Adapter

An adapter translates a validated run request into a real execution:

  • Local process
  • Container
  • Job scheduler
  • HTTP call

Adapters are pluggable and isolated from the API layer.
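
The adapter contract can be sketched as an abstract interface. Class and method names here are hypothetical, not the actual TES classes; the point is that each adapter turns a validated run request into a concrete execution behind a uniform interface:

```python
from abc import ABC, abstractmethod

# Hypothetical adapter interface (names illustrative, not TES's own).
# Each adapter translates a validated run request into a real execution
# and returns a structured result payload.

class Adapter(ABC):
    @abstractmethod
    def execute(self, run_request: dict) -> dict:
        """Run the tool described by run_request; return structured results."""

class LocalProcessAdapter(Adapter):
    """Would spawn a local process; stubbed here for illustration."""
    def execute(self, run_request: dict) -> dict:
        return {"tool_id": run_request["tool_id"],
                "state": "COMPLETED",
                "results": {}}

result = LocalProcessAdapter().execute({"tool_id": "blastn"})
```

Because the API layer only sees the `Adapter` interface, swapping a local stub for a container or scheduler backend does not change any endpoint.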


Architecture Overview

User / Chat UI
      |
      v
LLM / Agent (intent only)
      |
      v
omnibioai-tool-exec (TES)
  - validate
  - route
  - execute
      |
      v
Execution Environment
(local / HPC / cloud)

Key rule: LLMs never execute tools. TES does.


API Overview

TES exposes a REST API (OpenAPI / FastAPI-based).

Discovery

  • GET /api/tools – list available tools
  • GET /api/servers – list available servers and capabilities

Execution

  • POST /api/runs/validate – validate tool + inputs + resources
  • POST /api/runs/submit – submit a run
  • GET /api/runs/{run_id} – get run status
  • GET /api/runs/{run_id}/results – fetch results when ready

Example: BLAST run lifecycle

1. Validate

POST /api/runs/validate
{
  "tool_id": "blastn",
  "inputs": {
    "sequence": ">q\nACGTACGT",
    "database": "ecoli_demo"
  },
  "resources": { "cpu": 2, "ram_gb": 2 }
}

2. Submit

POST /api/runs/submit
→ { "run_id": "run_abc123" }

3. Poll

GET /api/runs/run_abc123
→ state: RUNNING | COMPLETED | FAILED

4. Fetch results

GET /api/runs/run_abc123/results
→ structured JSON results

Demo Adapter (Current)

The default configuration includes a local demo adapter that returns mock BLAST results.

This is intentional:

  • It validates the entire execution pipeline
  • It allows frontend and agent development without heavy dependencies
  • It can be replaced with a real backend without changing the API

Different inputs currently return the same demo output. Replace the adapter to enable real BLAST execution.


Running the Service

Requirements

  • Python ≥ 3.11
  • pip or pipx

Install (editable)

pip install -e .

Run

omnibioai-tes serve \
  --host 127.0.0.1 \
  --port 8080 \
  --tools configs/tools.example.yaml \
  --servers configs/servers.example.yaml

API Docs

Open:

http://127.0.0.1:8080/docs

Configuration

Tools

Defined via YAML:

- tool_id: blastn
  display_name: BLASTN
  inputs_schema:
    required: [sequence, database]
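
Since inputs_schema is JSON Schema, a fuller tool definition might look like the following. The keys beyond `required` (`type`, `properties`, `description`) are standard JSON Schema but their acceptance here is an assumption; check the OpenAPI docs of a running instance:

```yaml
- tool_id: blastn
  display_name: BLASTN
  inputs_schema:
    type: object
    required: [sequence, database]
    properties:
      sequence:
        type: string
        description: FASTA-formatted query sequence
      database:
        type: string
        description: Name of an available BLAST database
```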

Servers

Defined via YAML:

- server_id: local_demo
  adapter_type: local
  capabilities:
    tools:
      - tool_id: blastn
        features:
          databases: [ecoli_demo]

Integration with OmniBioAI

TES is designed to be used server-to-server.

Typical flow:

  1. User chats with OmniBioAI
  2. LLM identifies a tool intent (blast:)
  3. Django backend calls TES
  4. TES executes and returns results
  5. Results are rendered in chat UI

No browser-to-TES calls are required.
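
The backend-to-TES call in step 3 is a plain HTTP POST. A hedged sketch using only the standard library (the function name and base URL are illustrative; the endpoint path matches the API above). The request is built but not sent, since sending needs a running TES:

```python
import json
import urllib.request

# Hypothetical sketch of the server-to-server call a backend (e.g. the
# Django side of OmniBioAI) would make to TES. Names and base URL are
# illustrative; the endpoint path matches POST /api/runs/submit.
TES_BASE = "http://127.0.0.1:8080"

def build_submit_request(tool_id: str, inputs: dict) -> urllib.request.Request:
    """Build (but do not send) the POST /api/runs/submit request."""
    payload = json.dumps({"tool_id": tool_id, "inputs": inputs}).encode()
    return urllib.request.Request(
        f"{TES_BASE}/api/runs/submit",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_submit_request("blastn",
                           {"sequence": ">q\nACGTACGT", "database": "ecoli_demo"})
# Send with urllib.request.urlopen(req) once a TES instance is running.
```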


What TES is not

  • ❌ A workflow engine (Nextflow/WDL belong elsewhere)
  • ❌ A UI
  • ❌ An LLM executor
  • ❌ A database

TES is a secure execution boundary.


Roadmap

  • Real BLAST+ adapters (local / container / HPC)
  • OMIM / annotation tools
  • Async callbacks & long-running job tracking
  • Auth / multi-tenant policies
  • Artifact storage backends (S3, GCS, POSIX)

License

Apache 2.0 (or your chosen license)
