
Open-core orchestrator logic for the Siglume API Store agent runtime — tool-manual quality scoring + LLM provider tool adapters + installed-tool prefilter.

Project description

siglume-agent-core

Open-core orchestrator logic for the Siglume API Store agent runtime.

This is the public, AGPL-licensed core of the algorithms the Siglume marketplace uses to:

  • Score the quality of a publisher's tool manual (tool_manual_validator)
  • Build LLM provider tool definitions in Anthropic / OpenAI tool-use format (provider_adapters)

It is the same code that runs in production — extracted from the private monorepo so publishers, contributors, and self-hosters can read, audit, and improve it.

Status: Phase 1 of a staged extraction. This release exposes the Tier A modules (manual quality scoring + provider adapters) plus the first Tier B module, the installed-tool prefilter. Selection scoring (installed_tool_resolver), the simulator (dev_simulator), and analytics derivations (seller_analytics) follow in subsequent releases. See ARCHITECTURE.md for the roadmap.


Why this exists

From the publisher side, the Siglume marketplace agent has historically been a black box. When a publisher asked "why didn't my API get picked?" or "why is my manual graded B?", the only way to answer was through platform-side reports.

This repository is the direct answer: read the source, run the same scorer locally, contribute improvements as PRs.

The platform itself remains a hosted service (publishers, buyers, payments, identity, deployment infrastructure all stay private). Only the decision logic — how the agent picks tools, how manuals are scored, how provider tool calls are formatted — is open.

Install

pip install siglume-agent-core                # core only
pip install "siglume-agent-core[anthropic]"   # + Anthropic adapter
pip install "siglume-agent-core[openai]"      # + OpenAI adapter
pip install "siglume-agent-core[dev]"         # + test/lint deps

(The extras are quoted so the brackets survive shells like zsh that treat them as glob patterns.)

What's in this release (v0.2, Tier A + Tier B Phase 1)

siglume_agent_core.tool_manual_validator

The exact same validator Siglume runs to grade publisher-submitted tool manuals (A / B / C / D / F). Use it locally to predict your manual's grade before submission:

from siglume_agent_core.tool_manual_validator import validate_tool_manual, score_manual_quality

manual = {...}  # your tool manual dict

result = validate_tool_manual(manual)
if not result.ok:
    for err in result.errors:
        print(err.code, err.message, err.field)

quality = score_manual_quality(manual)
print(f"Grade {quality.grade} ({quality.overall_score}/100)")
# Platform accepts grade A and B at publish time; C/D/F are rejected.
if quality.grade in ("A", "B"):
    print("Likely publishable — submit when ready.")
else:
    print("Improve before submitting:")
    for s in quality.improvement_suggestions[:3]:
        print(f"  - {s}")

This module is byte-for-byte identical to the server-side scorer, verified by a CI parity test against the Siglume monorepo: if your local grade is B, the server grade is B.
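For intuition only, here is a toy sketch of the keyword/section-presence style of scoring that the Contributing section describes the current graders as using. The field names, weights, and grade cut-offs below are hypothetical illustrations, not Siglume's real rules:

```python
# Hypothetical grade bands: (minimum score, letter grade), highest first.
GRADE_BANDS = [(90, "A"), (75, "B"), (60, "C"), (40, "D"), (0, "F")]

def toy_score_manual(manual: dict) -> tuple[int, str]:
    """Toy quality score: reward the presence of key manual sections.
    Field names and weights are illustrative, not Siglume's actual heuristics."""
    score = 0
    score += 30 if manual.get("description") else 0
    score += 30 if manual.get("examples") else 0
    score += 20 if manual.get("error_handling") else 0
    score += 20 if manual.get("parameters") else 0
    for floor, grade in GRADE_BANDS:
        if score >= floor:
            return score, grade
    return score, "F"
```

A real grader would inspect the content of each section, not just its presence; this only shows the score-to-letter-band shape of the output.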

siglume_agent_core.installed_tool_prefilter

TF-IDF + cosine similarity scorer that picks the top-N most relevant tools when an agent has many tools bound, so the chat system prompt stays within the input token budget. Pure Python, no embedding service. Same code the platform runs in production:

from siglume_agent_core.installed_tool_prefilter import select_top_tools_for_prompt
from siglume_agent_core.types import ResolvedToolDefinition

# tools is whatever your code resolved from a binding registry.
top = select_top_tools_for_prompt(tools, user_message="translate this to japanese", max_tools=50)
# `top` is the subset of `tools` most relevant to the message, returned in original order.
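The prefilter's approach can be sketched in plain Python. This is an illustrative re-implementation of the TF-IDF + cosine idea, not the module's actual code; it assumes tool dicts with `name` and `description` fields:

```python
import math
from collections import Counter

def _tf_idf_vectors(docs: list[list[str]]) -> list[dict[str, float]]:
    """Compute a smoothed TF-IDF vector for each tokenized document."""
    n = len(docs)
    df = Counter(tok for doc in docs for tok in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({
            tok: (count / len(doc)) * math.log((1 + n) / (1 + df[tok]))
            for tok, count in tf.items()
        })
    return vectors

def _cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_top_tools(tools: list[dict], user_message: str, max_tools: int) -> list[dict]:
    """Rank tools by TF-IDF cosine similarity to the user message,
    keep the top max_tools, and preserve the original tool order."""
    docs = [f"{t['name']} {t['description']}".lower().split() for t in tools]
    docs.append(user_message.lower().split())
    vectors = _tf_idf_vectors(docs)
    query = vectors.pop()  # last vector is the user message
    ranked = sorted(range(len(tools)),
                    key=lambda i: _cosine(vectors[i], query), reverse=True)
    keep = set(ranked[:max_tools])
    return [t for i, t in enumerate(tools) if i in keep]
```

The key design point the README calls out is preserved here: selection is by relevance, but the returned list keeps the tools' original order, so downstream prompt construction stays stable.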

siglume_agent_core.provider_adapters

Provider-specific adapters that convert an internal tool definition + message thread into the format Anthropic's or OpenAI's tool-use API expects, and parse the response back into a uniform shape.

The provider SDKs are optional extras — install only the ones you use:

pip install siglume-agent-core[anthropic]   # + Anthropic SDK
pip install siglume-agent-core[openai]      # + OpenAI SDK

Without the matching extra, importing the adapter raises a clear ImportError telling you which extra to install. Then:

from siglume_agent_core.provider_adapters.anthropic_tools import AnthropicToolAdapter
from siglume_agent_core.provider_adapters.types import ToolMessage

adapter = AnthropicToolAdapter()
turn = adapter.run_turn(
    model="claude-haiku-4-5-20251001",
    messages=[ToolMessage(role="user", content="...")],
    tools=[...],
    max_output_tokens=2048,
    tool_choice="auto",  # "auto" | "any" | "none"
)
print(turn.tool_calls)  # what the LLM picked

tool_choice="none" means no tool use this turn — the adapter elides the tools array entirely, matching the contract you'd expect from OpenAI. This is the same adapter the platform uses, so you can prototype tool-use applications against either provider with consistent behavior.
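The two wire formats the adapters target differ mainly in shape. As a minimal sketch (assuming an internal tool definition dict with `name`, `description`, and a JSON-schema `parameters` field, which is not necessarily the library's actual internal type), the conversion looks roughly like:

```python
def to_anthropic_tool(tool: dict) -> dict:
    """Anthropic Messages API tool entry: name / description / input_schema."""
    return {
        "name": tool["name"],
        "description": tool["description"],
        "input_schema": tool["parameters"],
    }

def to_openai_tool(tool: dict) -> dict:
    """OpenAI Chat Completions tool entry: a function object under a type wrapper."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": tool["parameters"],
        },
    }

# Example internal definition (hypothetical shape).
internal = {
    "name": "translate_text",
    "description": "Translate text between languages.",
    "parameters": {
        "type": "object",
        "properties": {"text": {"type": "string"}, "target": {"type": "string"}},
        "required": ["text", "target"],
    },
}
```

Both providers accept the same JSON-schema argument description; only the surrounding envelope differs, which is what makes a uniform adapter layer practical.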

What's not in this repo

The following stays in the private platform monorepo because exposing them creates security or business risk:

  • Authentication / OAuth credential leasing (connected_account_broker)
  • Payment processing & wallet signing
  • Production database schema & data
  • Per-buyer KYC / AML decisioning
  • Marketplace pricing & fee logic
  • The execution gateway (capability_gateway) — security/policy boundary

See ARCHITECTURE.md for what's planned to come next vs. what stays private.

License

AGPL-3.0-only.

If you self-host the orchestrator, the AGPL terms apply: if you run a modified version of this code as a network service, you must make your changes available under AGPL-3.0 to the users of that service. Commercial licensing for proprietary deployment is available — contact siglume@energy-connect.co.jp.

Contributing

We accept PRs. See CONTRIBUTING.md. The most useful contribution paths today:

  • Improve tool_manual_validator heuristics — many graders are currently keyword-rule based; ML-driven or more nuanced scoring is welcome
  • Add edge-case tests — anything you've seen the platform mishandle
  • Add new provider adapters — Gemini, Mistral, local models

Tracking issue for the broader publisher-dev-tools initiative: siglume-api-sdk#195.



Download files

Download the file for your platform.

Source Distribution

siglume_agent_core-0.2.3.tar.gz (45.1 kB)

Uploaded Source

Built Distribution


siglume_agent_core-0.2.3-py3-none-any.whl (40.3 kB)

Uploaded Python 3

File details

Details for the file siglume_agent_core-0.2.3.tar.gz.

File metadata

  • Download URL: siglume_agent_core-0.2.3.tar.gz
  • Size: 45.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for siglume_agent_core-0.2.3.tar.gz
  • SHA256: 6d2c30b7271f17a8ab7f284a13e3d1d2bde410241454e8163642a240ee1fc040
  • MD5: 4b184de85b6063b14ea1a123cc279169
  • BLAKE2b-256: 32549016dba7d7a133f972b5574a948e3546618bb690e1960a764b17ee3610fa


Provenance

The following attestation bundles were made for siglume_agent_core-0.2.3.tar.gz:

Publisher: release.yml on taihei-05/siglume-agent-core

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file siglume_agent_core-0.2.3-py3-none-any.whl.

File hashes

Hashes for siglume_agent_core-0.2.3-py3-none-any.whl
  • SHA256: 02fc7117305a900bf4f4c7480d94902a43cab1c23a009b479878ac0031c2f559
  • MD5: 324f9893c381fdd8f296ad9ad08d1af7
  • BLAKE2b-256: e9513284cf8fd4f46204a232bdce74af3873f50494c5a69ef457138634d00d2d
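The published digests can be used with pip's hash-checking mode to pin exactly these artifacts. A requirements fragment using the sdist and wheel SHA256 values listed above:

```
# requirements.txt (install with: pip install --require-hashes -r requirements.txt)
siglume-agent-core==0.2.3 \
    --hash=sha256:6d2c30b7271f17a8ab7f284a13e3d1d2bde410241454e8163642a240ee1fc040 \
    --hash=sha256:02fc7117305a900bf4f4c7480d94902a43cab1c23a009b479878ac0031c2f559
```

In hash-checking mode pip refuses any distribution whose digest does not match one of the listed hashes.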


Provenance

The following attestation bundles were made for siglume_agent_core-0.2.3-py3-none-any.whl:

Publisher: release.yml on taihei-05/siglume-agent-core

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
