
siglume-agent-core

Open-core orchestrator logic for the Siglume API Store agent runtime.

This is the public, AGPL-licensed core of the algorithms the Siglume marketplace uses to:

  • Score the quality of a publisher's tool manual (tool_manual_validator)
  • Build LLM provider tool definitions in Anthropic / OpenAI tool-use format (provider_adapters)
  • Prefilter installed tools by relevance to the user message (installed_tool_prefilter)

It is the same code that runs in production — extracted from the private monorepo so publishers, contributors, and self-hosters can read, audit, and improve it.

Status: Phase 1 of a staged extraction. This release exposes the Tier A modules (manual quality scoring + provider adapters) plus the first Tier B module (installed-tool prefilter). Selection scoring (installed_tool_resolver), the simulator (dev_simulator), and analytics derivations (seller_analytics) will follow in subsequent releases. See ARCHITECTURE.md for the roadmap.


Why this exists

From the publisher's side, the Siglume marketplace agent has long been a black box. When a publisher asked "why didn't my API get picked?" or "why is my manual graded B?", the only way to answer was through platform-side reports.

This repository is the direct answer: read the source, run the same scorer locally, contribute improvements as PRs.

The platform itself remains a hosted service (publishers, buyers, payments, identity, deployment infrastructure all stay private). Only the decision logic — how the agent picks tools, how manuals are scored, how provider tool calls are formatted — is open.

Install

pip install siglume-agent-core                # core only
pip install "siglume-agent-core[anthropic]"   # + Anthropic adapter
pip install "siglume-agent-core[openai]"      # + OpenAI adapter
pip install "siglume-agent-core[dev]"         # + test/lint deps

What's in this release (v0.2, Tier A + Tier B Phase 1)

siglume_agent_core.tool_manual_validator

The exact same validator Siglume runs to grade publisher-submitted tool manuals (A / B / C / D / F). Use it locally to predict your manual's grade before submission:

from siglume_agent_core.tool_manual_validator import validate_tool_manual, score_manual_quality

manual = {...}  # your tool manual dict

result = validate_tool_manual(manual)
if not result.ok:
    for err in result.errors:
        print(err.code, err.message, err.field)

quality = score_manual_quality(manual)
print(f"Grade {quality.grade} ({quality.overall_score}/100)")
# Platform accepts grade A and B at publish time; C/D/F are rejected.
if quality.grade in ("A", "B"):
    print("Likely publishable — submit when ready.")
else:
    print("Improve before submitting:")
    for s in quality.improvement_suggestions[:3]:
        print(f"  - {s}")

The scorer is the same code that runs server-side; a CI parity test against the Siglume monorepo verifies that the outputs match. If your local grade is B, the server grade is B.
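As a rough mental model, the letter grade is a banding of the 0-100 overall score. The cutoffs in this sketch are illustrative assumptions, not the platform's actual thresholds (those live in score_manual_quality):

```python
# Sketch of score-to-grade banding. The cutoffs below are assumed
# for illustration; the authoritative mapping is score_manual_quality.
def grade_from_score(score: int) -> str:
    bands = [(90, "A"), (75, "B"), (60, "C"), (45, "D")]
    for cutoff, letter in bands:
        if score >= cutoff:
            return letter
    return "F"
```

Under these assumed cutoffs, a manual scoring 82/100 would land at grade B, inside the publishable A/B range.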

siglume_agent_core.installed_tool_prefilter

TF-IDF + cosine similarity scorer that picks the top-N most-relevant tools when an agent has many tools bound, so the chat system prompt stays within the input token budget. Pure Python, no embedding service. It is the same code the platform runs in production:

from siglume_agent_core.installed_tool_prefilter import select_top_tools_for_prompt
from siglume_agent_core.types import ResolvedToolDefinition

# `tools` is a list of ResolvedToolDefinition objects, resolved from your binding registry.
top = select_top_tools_for_prompt(tools, user_message="translate this to japanese", max_tools=50)
# `top` is a subset of `tools`, ranked by JTBD (jobs-to-be-done) relevance to the user message, with the original order preserved.
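Conceptually, the prefilter does something like the sketch below: build TF-IDF vectors over each tool's description, score them by cosine similarity against the user message, and keep the top-N while preserving original order. The tokenization, weighting, and names here are simplifying assumptions, not the module's exact implementation:

```python
import math
from collections import Counter

def _tfidf(doc: list[str], idf: dict[str, float]) -> dict[str, float]:
    # Term frequency weighted by inverse document frequency.
    tf = Counter(doc)
    return {t: tf[t] * idf[t] for t in tf}

def _cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_tool_indices(descriptions: list[str], user_message: str, max_tools: int) -> list[int]:
    """Indices of the top-N descriptions by TF-IDF cosine similarity
    to the user message, returned in their original order."""
    docs = [d.lower().split() for d in descriptions]
    query = user_message.lower().split()
    n = len(docs) + 1  # query counts as a document for IDF
    df = Counter(t for doc in docs + [query] for t in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    qv = _tfidf(query, idf)
    scores = [(i, _cosine(_tfidf(doc, idf), qv)) for i, doc in enumerate(docs)]
    keep = {i for i, _ in sorted(scores, key=lambda s: s[1], reverse=True)[:max_tools]}
    return [i for i, _ in scores if i in keep]  # original order preserved
```

The real module returns the tool objects themselves rather than indices, but the ranking-then-reorder shape is the point of the sketch.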

siglume_agent_core.provider_adapters

Provider-specific adapters that convert an internal tool definition + message thread into the format Anthropic's or OpenAI's tool-use API expects, and parse the response back into a uniform shape.

The provider SDKs are optional extras — install only the ones you use:

pip install "siglume-agent-core[anthropic]"   # + Anthropic SDK
pip install "siglume-agent-core[openai]"      # + OpenAI SDK

Without the matching extra, importing the adapter raises a clear ImportError telling you which extra to install. Then:

from siglume_agent_core.provider_adapters.anthropic_tools import AnthropicToolAdapter
from siglume_agent_core.provider_adapters.types import ToolMessage

adapter = AnthropicToolAdapter()
turn = adapter.run_turn(
    model="claude-haiku-4-5-20251001",
    messages=[ToolMessage(role="user", content="...")],
    tools=[...],
    max_output_tokens=2048,
    tool_choice="auto",  # "auto" | "any" | "none"
)
print(turn.tool_calls)  # what the LLM picked

tool_choice="none" means no tool use this turn: the adapter omits the tools array entirely, matching the behavior of the equivalent OpenAI setting. Because these are the same adapters the platform uses, you can prototype tool-use applications against either provider with consistent behavior.

What's not in this repo

The following stays in the private platform monorepo because exposing them creates security or business risk:

  • Authentication / OAuth credential leasing (connected_account_broker)
  • Payment processing & wallet signing
  • Production database schema & data
  • Per-buyer KYC / AML decisioning
  • Marketplace pricing & fee logic
  • The execution gateway (capability_gateway) — security/policy boundary

See ARCHITECTURE.md for what's planned to come next vs. what stays private.

License

AGPL-3.0-only.

If you self-host the orchestrator, the AGPL terms apply: if you run a modified version of this code as a network service, you must make your modified source available under AGPL-3.0 to the users of that service. Commercial licensing for proprietary deployment is available; contact siglume@energy-connect.co.jp.

Contributing

We accept PRs. See CONTRIBUTING.md. The most useful contribution paths today:

  • Improve tool_manual_validator heuristics — many graders are currently keyword-rule based; ML-driven or more nuanced scoring is welcome
  • Add edge-case tests — anything you've seen the platform mishandle
  • Add new provider adapters — Gemini, Mistral, local models
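For a sense of what "keyword-rule based" means in practice, a grader in that style looks roughly like the sketch below. Field names, keywords, and weights are assumptions made up for illustration; see the tool_manual_validator source for the real rules:

```python
# Illustrative keyword-rule heuristic in the style of the current
# graders. Field names, keywords, and weights are assumptions.
def score_description_heuristic(manual: dict) -> int:
    desc = manual.get("description", "").lower()
    score = 0
    if len(desc) >= 80:
        score += 40  # long enough to explain the tool's job
    if any(k in desc for k in ("returns", "response", "output")):
        score += 30  # documents what comes back
    if "example" in desc:
        score += 30  # includes a worked example
    return score     # contribution on a 0-100 scale
```

An ML-driven replacement would keep the same signature (manual in, sub-score out) so it can slot into the existing aggregation.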

Tracking issue for the broader publisher-dev-tools initiative: siglume-api-sdk#195.

Download files

Source distribution

siglume_agent_core-0.2.2.tar.gz (42.0 kB)

Built distribution

siglume_agent_core-0.2.2-py3-none-any.whl (38.8 kB)

File details

Details for the file siglume_agent_core-0.2.2.tar.gz.

File metadata

  • Size: 42.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing: Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

  • SHA256: c9ec100894c449c962f71d414d763a85035faae6ac4c6ff47eafec4bd4702d45
  • MD5: fcb41efff87c091cb030d8fbbc551f1e
  • BLAKE2b-256: 15e4d5f64546b6e457355b3aeb20441ae16a304afce2b6f0bd824b5a42636815

Provenance

The following attestation bundles were made for siglume_agent_core-0.2.2.tar.gz:

Publisher: release.yml on taihei-05/siglume-agent-core


File details

Details for the file siglume_agent_core-0.2.2-py3-none-any.whl.

File hashes

  • SHA256: a85071a91d1eaf1fd3d569a5f402effacf046013a31000e7f1166c5d3cc88576
  • MD5: 1cf2946d94aa076b36160856a98d11f4
  • BLAKE2b-256: 9f68452a2d877367da6e1b49fb97d304691a08b44971ab7cbe1ee54a49470dbe

Provenance

The following attestation bundles were made for siglume_agent_core-0.2.2-py3-none-any.whl:

Publisher: release.yml on taihei-05/siglume-agent-core

