
LbAgents

LangGraph-based agent workflows for LHCb Analysis Productions, plugged into LbAPI.

This package contains pure agent logic — graph definitions, prompt templates, state schemas, tools, and a Pydantic workflow registry. It has no infrastructure dependencies (no database, no HTTP server, no Celery). MCP tools are discovered at runtime and passed to graph factories by the host application (LbAPI).

Architecture

graph LR
    subgraph LbAgents ["LbAgents (this package)"]
        Registry[Workflow Registry]
        Factories[Graph Factories]
        Schemas[State Schemas]
        Prompts[Prompt Templates]
        Tools[GitLab Tools]
    end

    subgraph LbAPI ["LbAPI (host)"]
        Router["/agent/* router"]
        Celery[Celery tasks]
        MCP[MCP tool discovery]
        LLM[KServe LLM clients]
        PG[AsyncPostgresSaver]
        LF[Langfuse observability]
    end

    Router -->|imports| Registry
    Celery -->|builds graph from| Factories
    MCP -->|passes tools to| Factories
    LLM -->|passes LLMs to| Factories
    PG -->|compiles with| Factories

Key design decisions

  • Factory pattern: Graph factories return an uncompiled StateGraph. The caller (LbAPI) compiles it with a checkpointer.
  • Tools as parameters: Factories accept a tools list — no hardcoded tool definitions. LbAPI discovers MCP tools at runtime and combines them with GitLab tools from this package.
  • Self-registration: Importing LbAgents populates WORKFLOW_REGISTRY with all available workflows. Adding a new graph to this package makes it available in LbAPI with zero LbAPI code changes.
  • Write-tool interrupts: Tools that perform write operations (posting reviews, creating issues) are gated by human-in-the-loop interrupts. The pending tool call is shown to the reviewer before execution.
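
The self-registration decision can be illustrated with a short sketch. This is a hedged, pure-Python stand-in, not the actual LbAgents implementation: the `WorkflowSpec` fields and the `register_workflow` signature shown here are assumptions.

```python
# Hedged sketch of the self-registration pattern; WorkflowSpec's fields
# and register_workflow's signature are assumptions, not the real API.
from dataclasses import dataclass
from typing import Callable

WORKFLOW_REGISTRY: dict[str, "WorkflowSpec"] = {}

@dataclass
class WorkflowSpec:
    name: str
    description: str
    graph_factory: Callable  # returns an *uncompiled* graph

def register_workflow(spec: WorkflowSpec) -> WorkflowSpec:
    """Called at module import time, so importing LbAgents fills the registry."""
    WORKFLOW_REGISTRY[spec.name] = spec
    return spec

# Each graph module registers itself when imported:
register_workflow(WorkflowSpec(
    name="ap_debug_ci_failure",
    description="Single-agent CI failure diagnosis and MR review",
    graph_factory=lambda **deps: None,  # placeholder factory
))
```

Because registration is a side effect of import, the host only has to import the package and iterate the registry.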

Workflows

| Name | Description | Trigger | Delivery | Human Review |
|---|---|---|---|---|
| ap_debug_ci_failure | Single-agent CI failure diagnosis and MR review | Internal | GitLab MR | Before posting review |
| ap_debug_ci_failure_multiagent | Multi-agent CI failure diagnosis with parallel investigation | Internal | GitLab MR | Before posting review |
| ap_debug_stuck_productions | Diagnose stuck productions (MaxReset, stalled files) | Internal | GitLab Issue | Before write actions |
| mattermost_debug_user_request | Answer user support questions via Mattermost | Webhook | Mattermost | Always (admin verifies) |
| nightly_failure_summary | Summarize nightly build test failures | Manual | (none) | No |

ap_debug_ci_failure

Single ReAct loop with one LLM (orchestrator). The agent investigates using pipeline MCP tools, reads the MR diff, and posts a review. Write operations require human approval.

graph TD
    A[agent] -->|tool calls| T[tools]
    T --> A
    A -->|no tool calls,\npost_mr_review not called| N[nudge]
    N --> A
    A -->|write tool call| HR[human_review\n— interrupt —]
    HR -->|approved| T
    HR -->|rejected + feedback| A
    HR -->|rejected| END
    A -->|no tool calls| END
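
The conditional edges in the diagram can be sketched as a single routing predicate. This is a hedged pure-Python sketch; the state keys (`tool_calls`, `review_posted`, `nudged`) are assumed names, not the real state schema.

```python
# Hedged sketch of the conditional edges above; state keys are assumptions.
WRITE_TOOLS = {"post_mr_review"}

def route_after_agent(state: dict) -> str:
    """Pick the next node after an agent step."""
    tool_calls = state.get("tool_calls", [])
    if not tool_calls:
        # No tool calls and post_mr_review never ran: nudge the agent once.
        if not state.get("review_posted") and not state.get("nudged"):
            return "nudge"
        return "END"
    if any(call["name"] in WRITE_TOOLS for call in tool_calls):
        return "human_review"  # interrupt before executing the write tool
    return "tools"
```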

ap_debug_ci_failure_multiagent

Multi-agent pipeline: triage identifies failures, parallel investigators diagnose each cluster, a synthesizer drafts the review, an optional code reviewer adds inline comments, and the review is posted after human approval.

graph TD
    TA[triage_agent] -->|tool calls| TT[triage_tools]
    TT --> TA
    TA -->|done| CJ[collect_job_info]
    CJ --> CL[cluster_jobs]

    CL -->|"Send() per cluster\n(parallel)"| IJ[investigate_job]
    CL -->|no clusters| SA

    IJ -->|optional| RF[review_findings\n— interrupt —]
    IJ --> SA[synthesize_agent]
    RF --> SA

    SA -->|tool calls| ST[synthesize_tools]
    ST --> SA
    SA -->|done| SRB[save_review_body]

    SRB -->|"MR + code review tools"| RC["review_code\n(Send)"]
    SRB -->|no code review| PPR
    SRB -->|no MR| END

    RC --> MCR[merge_code_review]
    MCR --> PPR[prepare_post_review]

    PPR --> HR[human_review\n— interrupt —]
    HR -->|approved| EPR[execute_post_review]
    HR -->|rejected + feedback| SA
    HR -->|rejected| END
    EPR --> END

    style HR fill:#f9f,stroke:#333
    style RF fill:#f9f,stroke:#333

ap_debug_stuck_productions

Single ReAct agent investigating a stuck production sample. Uses production MCP tools (sample details, MaxReset files, grid job logs). Write actions (adding known failures, creating GitLab issues) require human approval.

graph TD
    A[agent] -->|tool calls| T[tools]
    T --> A
    A -->|write tool call| HR[human_review\n— interrupt —]
    HR -->|approved| T
    HR -->|rejected + feedback| A
    HR -->|rejected| END
    A -->|no tool calls| END

mattermost_debug_user_request

Support bot triggered by an admin in a Mattermost thread. Investigates using all available tools, drafts a response, then always pauses for admin verification before posting.

graph TD
    A[agent] -->|tool calls| T[tools]
    T --> A
    A -->|done| HV[human_verify\n— interrupt —]
    HV --> END

nightly_failure_summary

Simple ReAct agent that queries nightly build test results and produces a structured summary. Read-only — no interrupts.

graph TD
    A[agent] -->|tool calls| T[tools]
    T --> A
    A -->|done| END

Tools

GitLab tools (LbAgents.tools.gitlab)

Factory function make_gitlab_review_tools(gitlab_token, mr_iid) returns LangChain tools scoped to a specific merge request:

  • get_mr_info() — MR metadata (title, author, branches, labels)
  • get_mr_diff() — File-level diffs
  • get_mr_file_content(file_path) — Full file from source branch
  • post_mr_review(body, inline_comments) — Post review with inline comments
  • get_mr_commits() — Commit list for the MR
  • get_commit_diff(sha) — Diff for a specific commit
  • get_target_branch_file(file_path) — File from the target branch
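
The value of the factory is that the token and MR IID are captured in closures, so credentials never appear in tool arguments visible to the LLM. A minimal sketch of that pattern, with plain callables standing in for LangChain tools and placeholder return values:

```python
# Sketch of the MR-scoped closure pattern (plain callables stand in for
# LangChain tools; return values here are placeholders, not real API calls).
def make_gitlab_review_tools(gitlab_token: str, mr_iid: int) -> list:
    def get_mr_info() -> dict:
        # Real tool: fetch the MR from the GitLab API using gitlab_token.
        return {"mr_iid": mr_iid, "authenticated": bool(gitlab_token)}

    def get_mr_diff() -> str:
        # Real tool: fetch file-level diffs for the same MR.
        return f"diff for MR !{mr_iid}"

    # The LLM only sees tool names and docstrings; the token stays in scope.
    return [get_mr_info, get_mr_diff]
```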

Nightly tools (LbAgents.tools.nightly)

Factory function make_nightly_tools(bucket_url) returns tools for querying nightly build results:

  • get_nightly_status_summary(slot, build_id) — Overall build status
  • list_nightly_platforms(slot, build_id) — Available platforms
  • list_nightly_projects(slot, build_id, platform) — Projects for a platform
  • get_failed_tests(slot, build_id, platform, project) — Failed test list
  • get_test_output(slot, build_id, platform, project, test) — Test output
  • get_test_measurement(slot, build_id, platform, project, test) — Performance data

Usage

from LbAgents import WORKFLOW_REGISTRY
from LbAgents.tools.gitlab import make_gitlab_review_tools

# All workflows are registered at import time
for name, spec in WORKFLOW_REGISTRY.items():
    print(f"{name}: {spec.description}")

# Build a graph with MCP tools + GitLab tools
from LbAgents.graphs import build_ap_debug_ci_failure_multiagent_graph

gitlab_tools = make_gitlab_review_tools(gitlab_token="...", mr_iid=42)
all_tools = mcp_tools + gitlab_tools  # mcp_tools are discovered at runtime by the host

# orchestrator, coder, and my_checkpointer are provided by the host (LbAPI)
graph = build_ap_debug_ci_failure_multiagent_graph(
    orchestrator_llm=orchestrator,
    tools=all_tools,
    write_tool_names=["post_mr_review"],
    coder_llm=coder,  # optional, for code trace + review
)
compiled = graph.compile(checkpointer=my_checkpointer)

Adding a new workflow

  1. Create a new module in src/LbAgents/graphs/ with a build_*_graph() factory function.
  2. Add a state schema in src/LbAgents/schemas/state.py if needed.
  3. Add a prompt template in src/LbAgents/prompts/templates.py.
  4. Register the workflow in src/LbAgents/graphs/__init__.py via register_workflow().
  5. Add tests in tests/test_graphs.py.

The workflow will automatically appear in LbAPI's GET /agent/workflows endpoint.
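
The skeleton of steps 1 and 4 might look like the following. This is a hedged sketch: the registry helpers and factory signature are re-implemented inline as stand-ins for the real LbAgents APIs, and the "graph" is stubbed as a dict rather than a LangGraph StateGraph.

```python
# Hypothetical new graph module (e.g. src/LbAgents/graphs/my_workflow.py);
# registry helpers are re-sketched so the example is self-contained.
WORKFLOW_REGISTRY: dict = {}

def register_workflow(name: str, description: str, factory) -> None:
    WORKFLOW_REGISTRY[name] = {"description": description, "factory": factory}

def build_my_workflow_graph(llm=None, tools=(), write_tool_names=()):
    """Step 1: a factory returning an *uncompiled* graph (stubbed as a dict)."""
    return {"tools": list(tools), "write_tools": list(write_tool_names)}

# Step 4: registration on import makes the workflow visible to LbAPI.
register_workflow(
    "my_workflow",
    "Example workflow (hypothetical)",
    build_my_workflow_graph,
)
```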

Development

pixi install
pixi run pytest        # Run tests
pixi run ruff check    # Lint
pixi run pre-commit

License

GPL-3.0 — see COPYING.
