
Aptitude Resolver

Aptitude is a deterministic, package-manager-style resolver for AI skills.

The system is intentionally split in two:

  • Aptitude Server owns registry data, metadata, immutable artifacts, and discovery indexes
  • Aptitude owns intent interpretation, candidate selection, dependency resolution, governance, lock generation, and execution planning

Current CLI

Primary commands:

  • aptitude install "<query>"
  • aptitude policy show
  • aptitude sync --lock aptitude.lock.json
  • aptitude manifest
  • aptitude-mcp

Internal preview command:

  • aptitude resolve "<query>"

Running aptitude with no arguments launches the install-first wizard. install and sync stay as the promoted task commands, policy show exposes the effective local client policy and config layers, and manifest exposes the complete command and flag surface. resolve still exists for preview, debugging, and CI, but it is hidden from normal CLI help.

aptitude-mcp starts the local stdio MCP server for agent hosts.

How To Install

Install the resolver and its development dependencies with uv:

uv sync --extra dev

This creates the local environment from pyproject.toml and makes the published CLI available through uv run or an activated environment.

Packaging And Publishing

This project builds and publishes as a normal Python package. uv is the build and publish tool, and the release registry is PyPI. There is no separate special "uv registry" format.

The packaging metadata lives in pyproject.toml:

  • [project] defines the package name, version, dependencies, and console entry point
  • [project.scripts] exposes aptitude-resolver and aptitude, both mapped to aptitude_resolver.interfaces.cli.main:main, plus aptitude-mcp mapped to aptitude_resolver.interfaces.mcp.main:main
  • [build-system] tells uv to build the package with uv_build
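
Based on the entries described above, the relevant pyproject.toml fragments look roughly like this (a sketch, not the actual file; the version and dependency list are illustrative):

```toml
[project]
name = "aptitude-resolver"
version = "0.0.12"            # illustrative; bumped with `uv version`
dependencies = [
  "pydantic",                 # illustrative; see the real pyproject.toml
]

[project.scripts]
aptitude-resolver = "aptitude_resolver.interfaces.cli.main:main"
aptitude = "aptitude_resolver.interfaces.cli.main:main"
aptitude-mcp = "aptitude_resolver.interfaces.mcp.main:main"

[build-system]
requires = ["uv_build"]
build-backend = "uv_build"
```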

Build the package artifacts locally:

make build

make build runs uv build --no-sources and creates:

dist/*.whl
dist/*.tar.gz

The wheel is the main installable artifact. It contains the aptitude_resolver package, its dependency metadata, and both console scripts.

For a local manual publish with a PyPI API token:

export PYPI_API_TOKEN=your-pypi-token
make build-publish

make build-publish:

  • requires PYPI_API_TOKEN
  • builds fresh artifacts into .build-publish-dist/
  • publishes with uv publish
  • defaults to the production PyPI upload endpoint

To rehearse the local flow against TestPyPI instead of production PyPI:

export PYPI_API_TOKEN=your-testpypi-token
make build-publish REPOSITORY=testpypi

For the normal release path, publish to PyPI through GitHub Actions trusted publishing:

uv version --bump patch
git tag v$(uv version --short)
git push origin v$(uv version --short)

The release workflow lives at .github/workflows/publish.yml and:

  • triggers on tags matching v*
  • builds the wheel and sdist with uv build --no-sources
  • publishes with pypa/gh-action-pypi-publish
  • authenticates to PyPI with GitHub OIDC trusted publishing
  • does not use PyPI API tokens or repository secrets for the CI release path

The publish job uses the GitHub Environment pypi. That is not required by PyPI itself, but it is recommended because it gives releases a dedicated protection boundary in GitHub.
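
Given the bullets above, the workflow is roughly shaped like this. This is a sketch, not the actual publish.yml; the step layout and action versions are assumptions, while the trigger, build command, environment, and OIDC permission follow the description above:

```yaml
name: publish
on:
  push:
    tags: ["v*"]

jobs:
  publish:
    runs-on: ubuntu-latest
    environment: pypi          # the protection boundary described above
    permissions:
      id-token: write          # required for OIDC trusted publishing
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      - run: uv build --no-sources
      - uses: pypa/gh-action-pypi-publish@release/v1
```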

Install and run after publishing:

uv tool install aptitude-resolver
aptitude --help

For one-off execution without a persistent install:

uvx aptitude-resolver --help

Use this mental model:

  • make build builds the distributable artifacts
  • make build-publish performs a local token-based publish to PyPI or TestPyPI
  • pushing a v* tag triggers the trusted publishing workflow
  • uv tool install aptitude-resolver installs the published package
  • uvx aptitude-resolver ... runs the published package ephemerally
  • aptitude ... is the command end users run after installation

How To Use

For repo-local development, typical usage starts with one of these commands:

PYTHONPATH=src .venv/bin/python -m aptitude_resolver
PYTHONPATH=src .venv/bin/python -m aptitude_resolver --help
PYTHONPATH=src .venv/bin/python -m aptitude_resolver install "Postman Primary Skill"
PYTHONPATH=src .venv/bin/python -m aptitude_resolver policy show
PYTHONPATH=src .venv/bin/python -m aptitude_resolver sync --lock aptitude.lock.json
PYTHONPATH=src .venv/bin/python -m aptitude_resolver manifest
uv run aptitude-mcp

The no-args entrypoint launches the install-first wizard. Use install for fresh planning from a query, policy show to inspect the effective local client policy and config layers, sync --lock for replaying an existing lockfile, and manifest for the full capability map. For development, python -m aptitude_resolver is the canonical module entrypoint.

For published usage, prefer the installed CLI:

aptitude --help
aptitude install "Postman Primary Skill"
aptitude policy show
aptitude sync --lock aptitude.lock.json
aptitude manifest

For one-off published usage without installation:

uvx aptitude-resolver
uvx aptitude-resolver install "Postman Primary Skill"
uvx aptitude-resolver policy show
uvx aptitude-resolver sync

MCP Server

Aptitude ships a local MCP server for agents and MCP-compatible apps. It uses stdio by default and exposes tools for search, inspect, resolve, policy inspection, install, and sync.

For Claude Desktop-style local configuration, point the MCP client at the package entrypoint:

{
  "mcpServers": {
    "aptitude": {
      "command": "uv",
      "args": [
        "--directory",
        "C:\\Dev\\apptitude-client\\aptitude-client",
        "run",
        "aptitude-mcp"
      ]
    }
  }
}

For coding-agent clients that accept command/args MCP definitions, use the same command:

command: uv
args: --directory C:\Dev\apptitude-client\aptitude-client run aptitude-mcp

Inspect the server locally:

npx -y @modelcontextprotocol/inspector uv --directory C:\Dev\apptitude-client\aptitude-client run aptitude-mcp

Mutating MCP tools are explicit: aptitude_install_skill and aptitude_sync_lock require target paths and are annotated as destructive. Read-only tools are available for planning and review before materialization.

What Works Today

  • discovery-backed query resolution from human-readable input
  • resolver-owned candidate version selection
  • deterministic recursive dependency graph resolution
  • candidate-policy filtering and graph governance before lock generation
  • system, user, and workspace policy loading from aptitude.toml
  • hard policy CLI overrides for fresh planning
  • aptitude policy show for effective policy and config-layer inspection
  • rich lockfile generation, serialization, parsing, and replay
  • lock-driven execution plan generation
  • local materialization from either a fresh plan or an existing lockfile
  • archive-based skill installs from verified tar.zst artifacts
  • separate execution tuning for artifact downloads and local archive extraction
  • sync --lock as the lock-replay equivalent of uv sync
  • registry caching and bounded transient retry
  • additive telemetry for planning and materialization stages
  • deterministic lockfiles for identical logical inputs
  • trace output for discovery, selection, resolver, lock, and execution steps
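
Deterministic lockfiles for identical logical inputs generally come down to canonical serialization. A minimal sketch of the idea, assuming nothing about Aptitude's actual lock schema (the field names below are hypothetical):

```python
import hashlib
import json

def canonical_lock_bytes(lock: dict) -> bytes:
    """Serialize a lock mapping so identical logical inputs yield identical bytes."""
    # Sorted keys and fixed separators remove dict-ordering and whitespace variance.
    return json.dumps(lock, sort_keys=True, separators=(",", ":")).encode("utf-8")

def lock_digest(lock: dict) -> str:
    """Stable content hash of the canonical form."""
    return hashlib.sha256(canonical_lock_bytes(lock)).hexdigest()

# Two logically identical locks built in different key orders hash identically.
a = {"skills": {"postman": {"version": "1.2.0"}}, "schema": 1}
b = {"schema": 1, "skills": {"postman": {"version": "1.2.0"}}}
assert lock_digest(a) == lock_digest(b)
```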

What Is Still Incomplete

  • remote or centrally managed policy services are not implemented
  • broader organization-specific rules are not implemented yet
  • winner-vs-runner-up explanation still derives from parallel explanation logic instead of directly from reranker output
  • plugins/ extensibility is not implemented yet
  • SDK interface is not implemented yet

Selection, Governance, And Integrity Direction

The canonical architecture now defines these required semantics:

  • server provides immutable metadata such as lifecycle, trust, token, size, and checksum facts
  • resolver owns policy and candidate selection
  • governance is split into:
    • candidate-policy filtering before final ranking and final root selection
    • full graph governance after resolution and before lock generation
  • ranking compares only policy-compliant candidates
  • phase 1 checksum verification uses server-published sha256 checksum metadata and fails fast on mismatch
  • materialization verifies downloaded compressed artifact bytes before archive extraction

Current code now implements Governance Phase 1, profile-aware ranking, and explainability snapshots. The canonical source of truth for remaining evolution lives under docs/README.md.

Materialization And Execution Config

Install and sync commands are unchanged, but the payload format is now archive-based. Aptitude downloads tar.zst skill artifacts, verifies the checksum from the lock metadata, extracts safe archive members into a staging directory, and promotes the target only after all locked skills succeed.
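
The verify-then-extract-then-promote sequence can be sketched as follows. This is illustrative, stdlib-only code: it skips the zstd decompression step and the helper names are hypothetical, but the ordering (checksum before extraction, staging before promotion) mirrors the description above.

```python
import hashlib
import os
import shutil
import tarfile
from pathlib import Path

def verify_sha256(artifact: bytes, expected_hex: str) -> None:
    """Fail fast on checksum mismatch, before any extraction happens."""
    actual = hashlib.sha256(artifact).hexdigest()
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch: expected {expected_hex}, got {actual}")

def is_safe_member(member: tarfile.TarInfo, dest: Path) -> bool:
    """Reject non-regular files and path-traversal members (e.g. '../evil.txt')."""
    target = (dest / member.name).resolve()
    return member.isreg() and target.is_relative_to(dest.resolve())

def extract_to_staging(tar_path: Path, staging: Path) -> None:
    """Extract only safe members into the staging directory."""
    with tarfile.open(tar_path) as tar:  # the real flow decompresses tar.zst first
        safe = [m for m in tar.getmembers() if is_safe_member(m, staging)]
        tar.extractall(staging, members=safe)

def promote(staging: Path, target: Path) -> None:
    """Replace the target only after every locked skill extracted successfully."""
    if target.exists():
        shutil.rmtree(target)
    os.replace(staging, target)
```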

Workspace aptitude.toml can tune materialization concurrency:

[execution]
concurrent_downloads = 8
concurrent_installs = 4

Defaults:

  • concurrent_downloads = 8
  • concurrent_installs = min(os.cpu_count() or 1, 4)

Environment overrides:

APTITUDE_CONCURRENT_DOWNLOADS=8
APTITUDE_CONCURRENT_INSTALLS=4

There are no CLI flags for these settings; they are operational config, not per-install selection options.

Current User Flows

Fresh planning and install:

install query
-> discovery
-> resolver
-> governance
-> lockfile
-> execution plan
-> materialization

Lock replay:

sync --lock aptitude.lock.json
-> lockfile parse
-> lock replay
-> execution plan
-> materialization

Example Commands

Install from a query:

aptitude install "Postman Primary Skill"

Install as JSON for automation:

aptitude install "Postman Primary Skill" --json

Inspect the complete CLI surface:

aptitude manifest

Sync from an existing lockfile:

aptitude sync --lock aptitude.lock.json

Preview the resolved graph, lock, and execution plan without materializing:

uv run python -m aptitude_resolver resolve "Postman Primary Skill"

Current Package Map

src/aptitude_resolver/
  application/
    dto/
    queries/
    use_cases/
  cache/
  discovery/
    intent/
    query_builder/
    reranking/
  domain/
    errors/
    models/
    policy/
    tracing/
  execution/
  governance/
  interfaces/
    cli/
    mcp/
  lockfile/
  registry/
  resolution/
    conflict/
    graph/
    normalizer/
    solver/
    validation/
  shared/
    config/
    logging/
  telemetry/

Current Registry Contract Used By The Resolver

The resolver currently talks to the live registry through registry/ using these runtime paths:

  • POST /discovery
  • GET /skills/{slug}/versions
  • GET /skills/{slug}/versions/{version}
  • GET /resolution/{slug}/{version}
  • GET /skills/{slug}/versions/{version}/content

The client keeps legacy fallbacks for older server deployments:

  • GET /skills/{slug}
  • GET /skills/{slug}/{version}
  • GET /skills/{slug}/{version}/content

The /content endpoint name is preserved for compatibility, but install and sync now treat that response as binary tar.zst artifact bytes rather than markdown text.

The resolver treats the server as a source of immutable facts and candidate generation only. Final ranking, version choice, solving, policy enforcement, lock generation, and execution planning remain resolver-owned.
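
The endpoint shapes above can be summarized in a small path-builder. This is a sketch of the URL layout only; the helper names and fallback ordering are illustrative, not the real registry/ client:

```python
def version_detail_paths(slug: str, version: str) -> list[str]:
    """Primary path first, then the legacy fallback for older server deployments."""
    return [
        f"/skills/{slug}/versions/{version}",  # current contract
        f"/skills/{slug}/{version}",           # legacy fallback
    ]

def content_paths(slug: str, version: str) -> list[str]:
    """The /content response is binary tar.zst artifact bytes, not markdown."""
    return [
        f"/skills/{slug}/versions/{version}/content",
        f"/skills/{slug}/{version}/content",
    ]
```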

Development

Requirements:

  • Python >=3.10

Run the CLI:

PYTHONPATH=src .venv/bin/python -m aptitude_resolver --help
PYTHONPATH=src .venv/bin/python -m aptitude_resolver install "Postman Primary Skill"
PYTHONPATH=src .venv/bin/python -m aptitude_resolver sync --lock aptitude.lock.json

Developer workflow:

make help
make format
make format-check
make lint
make typecheck
make test
make test-cov
make check

Source Of Truth Docs

Start with the docs index at docs/README.md; it is the canonical source of truth for the remaining architecture evolution. Before any non-trivial implementation or refactor, read the relevant architecture docs.

The docs/reference/openapi/ directory is kept as raw server reference material, not as the sole source of truth for runtime behavior.
