Agent Artifacts: Transactional Memory and Skill Artifacts for AI Agents

For full documentation, examples, and MCP client guides, see the GitHub repo. Agent Artifacts is an open-source layer that adds memory correctness, procedural skills, and auditability to AI agents. It makes agents more reliable by turning "memory" into structured artifacts that can be validated, versioned, and replayed.

Most agent systems can generate fluent responses. Agent Artifacts helps them remember safely, reuse skills, and explain decisions.


What You Get

  • Transactional memory so hallucinations don't become permanent facts.
  • Skill artifacts (prompt + workflow) that are versioned, tagged, and reusable.
  • Decision traces that make agent behavior auditable and debuggable.
  • Bounded prompt overhead with global injection caps.
  • MCP + adapter integrations so teams don't need to switch stacks.

Agent Artifacts is a plug-in layer, not a full agent framework. Use only what you need: memory, skills, or traces can be adopted independently.


Quickstart (2 Minutes)

Install and import a prompt skill:

pip install agent-artifacts
agent-artifacts import examples/prompt_skills/api-tester.md --name api_tester --version 0.1.0
agent-artifacts list
agent-artifacts run api_tester@0.1.0 --inputs "{\"base_url\": \"https://api.example.com\", \"endpoints\": [\"/health\"]}"

More: Quickstart guide and examples index.


Core Capabilities (Why It Matters)

1) Transactional Memory

  • Stage -> validate -> commit (or rollback) memory writes.
  • Prevents "false facts" from sticking in production.
  • Details: validation policy and memory types.
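
The staged-write flow above can be sketched in plain Python. This is a minimal, library-agnostic illustration of the stage -> validate -> commit (or rollback) pattern, not the Agent Artifacts API; the MemoryTx name and validator signature are invented for this sketch:

```python
# Library-agnostic sketch of stage -> validate -> commit (or rollback).
# MemoryTx and its methods are illustrative, not the Agent Artifacts API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MemoryTx:
    committed: dict = field(default_factory=dict)   # durable memory
    staged: dict = field(default_factory=dict)      # pending writes

    def stage(self, key: str, value: str) -> None:
        self.staged[key] = value

    def commit(self, validate: Callable[[str, str], bool]) -> list[str]:
        rejected = [k for k, v in self.staged.items() if not validate(k, v)]
        if rejected:
            self.staged.clear()          # rollback: nothing sticks
            return rejected
        self.committed.update(self.staged)
        self.staged.clear()
        return []

tx = MemoryTx()
tx.stage("capital_of_france", "Paris")
tx.stage("2+2", "5")
# The validator rejects the bad fact, so the whole transaction rolls back
# and the hallucination never reaches committed memory.
rejected = tx.commit(lambda key, value: value != "5")
```

The point of the transaction boundary is that a single bad write poisons nothing: either every staged fact passes validation, or none of them land.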

2) Skill Artifacts (Procedural Memory)

  • Prompt + workflow skills stored as versioned, tagged, reusable artifacts.
  • Run by reference (e.g. api_tester@0.1.0) from the CLI, the Python API, or as LLM tools.

3) Decision Traces

  • Structured logs for "why did the agent do that?"
  • Supports replay and regression debugging.
  • Details: memory redaction (privacy) and trace CLI examples below.
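
A decision trace is just a structured record per decision. The sketch below mirrors the trace CLI flags (--decision, --skill-ref, --reason, --confidence, --result) as fields, but the exact schema is an assumption for illustration:

```python
# Illustrative decision-trace record; field names mirror the trace CLI flags
# but this is a sketch, not the exact Agent Artifacts trace schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class DecisionTrace:
    decision: str
    skill_ref: str
    reason: str
    confidence: float
    result: str

trace = DecisionTrace(
    decision="execute_skill",
    skill_ref="deploy_fastapi@1.0.0",
    reason="deploy requested",
    confidence=0.9,
    result="success",
)

# One JSON object per decision makes traces easy to query, diff, and replay.
line = json.dumps(asdict(trace))
```

Because each record is self-describing, "why did the agent do that?" becomes a query over structured fields instead of a scroll through raw transcripts.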

4) Bounded Context Overhead

  • Injection is capped by default (AdapterPipeline(max_injected_tokens=1000)).
  • Keeps prompts predictable vs dumping full histories.
  • Details: context budgeting.
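
The idea behind the cap (as in AdapterPipeline(max_injected_tokens=1000)) can be shown with a small, library-agnostic budgeter. Token counting here is a crude whitespace approximation purely for illustration:

```python
# Library-agnostic sketch of a global injection cap. Whitespace splitting
# stands in for real tokenization; this is not the AdapterPipeline internals.
def inject_with_budget(snippets: list[str], max_tokens: int) -> list[str]:
    chosen, used = [], 0
    for snippet in snippets:            # assume snippets are pre-ranked by relevance
        cost = len(snippet.split())
        if used + cost > max_tokens:
            break                       # hard cap: stop rather than overflow the prompt
        chosen.append(snippet)
        used += cost
    return chosen

memories = [
    "user prefers dark mode",
    "deploy target is us-east-1",
    "long history " * 50,               # a dumped transcript that would blow the budget
]
kept = inject_with_budget(memories, max_tokens=10)
```

The cap trades completeness for predictability: prompt size stays bounded no matter how much history accumulates.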

When Agent Artifacts Shines

  • You maintain a library of agent prompts and want versioning + metadata.
  • You run repeatable workflows (deploys, QA checks, data extraction).
  • You need auditability for production agent behavior.
  • You want bounded context overhead instead of raw history dumps.

Benefits & Use-Cases

See benefits and use-cases for persona-based examples (solo devs, vibe coders, production teams) and modular adoption guidance.


Integrations

An MCP server (stdio and HTTP/SSE transports) and provider tool adapters (OpenAI, Anthropic, and Gemini formats), so teams don't need to switch stacks. See the MCP server and skill tool integration sections below.


Docs (Start Here)

Documentation index: Docs (start here)


CLI Quickstart

The import/list/run commands from the Quickstart above are the CLI basics. Full CLI + injection examples: CLI reference.

Skill tool integration (callable by LLMs)

Expose skills as tool/function definitions and execute tool calls:

from agent_artifacts.skills import (
    SkillToolConfig,
    SkillToolRegistry,
    SkillQueryConfig,
    execute_tool_call,
)

registry = SkillToolRegistry.from_storage(
    storage,
    query=SkillQueryConfig(tags=["stable"]),
    config=SkillToolConfig(name_strategy="name_version"),
)

tool_defs = registry.definitions()  # pass to your LLM runtime as tool/function specs

# ...when the model calls a tool:
result = execute_tool_call(storage, tool_name, tool_arguments, registry=registry)
print(result.to_dict())

Provider-specific tool adapters and SDK call examples live in tool adapters.

Tools quickstart:

  • Build tool definitions with SkillToolRegistry.from_storage(...)
  • Convert them for your provider via to_openai_tools / to_anthropic_tools / to_gemini_tools
  • Execute tool calls with execute_tool_call(...)
  • Auto-run from model responses with auto_execute_with_model(...) (see tool adapters)
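
Put together, the dispatch loop those bullets describe looks roughly like this. The response shape, the tool name, and the run_skill stub are assumptions for illustration; in Agent Artifacts the executor would be execute_tool_call(...) and tool names depend on the configured name_strategy:

```python
# Sketch of routing model tool calls to skill execution. run_skill is a stub
# standing in for execute_tool_call(storage, name, arguments, registry=registry);
# the tool-call dict shape and the "name_version"-style tool name are assumptions.
def run_skill(name: str, arguments: dict) -> dict:
    return {"skill": name, "inputs": arguments, "status": "ok"}

def dispatch(tool_calls: list[dict]) -> list[dict]:
    results = []
    for call in tool_calls:             # e.g. parsed from a provider response
        results.append(run_skill(call["name"], call["arguments"]))
    return results

results = dispatch([
    {"name": "api_tester__0_1_0",
     "arguments": {"base_url": "https://api.example.com"}},
])
```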

Runnable demo (no external SDKs required):

python examples/skills/tool_adapters_demo.py

MCP server (stdio)

Expose skills + memory + traces via Model Context Protocol:

agent-artifacts-mcp --backend sqlite --db ~/.agent-artifacts/agent-artifacts.db

HTTP/SSE transport (optional):

agent-artifacts-mcp-http --host 127.0.0.1 --port 8001 --backend sqlite --db ~/.agent-artifacts/agent-artifacts.db

MCP quickstart (60 seconds)

Copy/paste MCP client config (Cursor, Claude Desktop, etc.):

{
  "mcpServers": {
    "agent-artifacts": {
      "command": "agent-artifacts-mcp",
      "args": ["--backend", "sqlite", "--db", "/path/to/agent-artifacts.db"]
    }
  }
}

60-second smoke test (HTTP): see MCP HTTP demo.

See MCP server docs for tool inventory and request/response examples. Client setup: MCP clients. Cursor quickstart config: Cursor config template and Cursor guide. Windsurf and Claude guides: Windsurf guide and Claude guide. Compatibility matrix and example app: MCP compatibility and MCP examples.

Prompt skills surfaced via MCP include argument metadata. If your skill inputs use JSON Schema fields like description (or title), MCP clients can render richer prompt UIs:

inputs:
  text:
    type: string
    description: Text to summarize.

Decision traces + audit journal

agent-artifacts trace log --decision execute_skill --skill-ref deploy_fastapi@1.0.0 --reason "deploy requested" --confidence 0.9 --result success --tx-id <tx_id>
agent-artifacts trace query --decision execute_skill --limit 50
agent-artifacts trace query --skill-ref deploy_fastapi@1.0.0 --result success --created-after 2026-01-01T00:00:00Z --correlation-id corr-123
agent-artifacts journal query --tx-id <tx_id> --limit 50 --show-payload
agent-artifacts journal query --tx-id <tx_id> --limit 10 --format json
agent-artifacts replay --tx-id <tx_id> --limit 50 --show-payload

CLI run retries/timeouts:

agent-artifacts run deploy_fastapi@1.0.0 \
  --inputs "{\"repo_path\": \".\", \"retries\": 1}" \
  --max-attempts 3 --retry-on timeout,failure --backoff-ms 250,500 \
  --total-timeout-s 60 --step-timeout-s 10 \
  --idempotency-key deploy-2026-01-27-001 \
  --trace-inputs-preview --trace-output-preview --trace-preview-max-chars 120
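
A rough sketch of the retry semantics those flags express: up to --max-attempts tries with a --backoff-ms schedule between them (the last backoff value repeating if attempts outnumber values). This illustrates the pattern, not the library's internal implementation:

```python
# Illustrative retry-with-backoff loop matching --max-attempts 3 --backoff-ms 250,500.
# The sleep hook is stubbed so the example runs instantly.
def run_with_retries(step, max_attempts: int, backoff_ms: list[int],
                     sleep=lambda seconds: None):
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise                    # out of attempts: surface the failure
            # Repeat the last backoff value when retries outnumber values.
            delay = backoff_ms[min(attempt - 1, len(backoff_ms) - 1)]
            sleep(delay / 1000)

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise TimeoutError("transient failure")
    return "success"

result = run_with_retries(flaky, max_attempts=3, backoff_ms=[250, 500])
```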

Config file (optional)

Configuration can be stored in ~/.agent-artifacts/agent-artifacts.yaml (or AGENT_ARTIFACTS_CONFIG) with precedence: CLI args > env vars > config file > defaults.

Starter template: config template.

storage:
  backend: sqlite
  db: ~/.agent-artifacts/agent-artifacts.db
  backend_config: {}
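
The stated precedence (CLI args > env vars > config file > defaults) can be illustrated with a tiny resolver; the helper and the sample values below are hypothetical, not the Agent Artifacts implementation:

```python
# Illustrative first-hit-wins resolution across config sources,
# ordered CLI args > env vars > config file > defaults.
def resolve(key: str, cli: dict, env: dict, config: dict, defaults: dict):
    for source in (cli, env, config, defaults):   # highest precedence first
        if source.get(key) is not None:
            return source[key]
    return None

defaults    = {"backend": "sqlite"}
config_file = {"backend": "sqlite", "db": "~/.agent-artifacts/agent-artifacts.db"}
env         = {"db": "/tmp/override.db"}          # e.g. set via an env var
cli         = {}                                  # no flags passed

backend = resolve("backend", cli, env, config_file, defaults)  # config/defaults win
db = resolve("db", cli, env, config_file, defaults)            # env beats config file
```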

Inspect the resolved configuration and sources:

agent-artifacts config show --format json

Storage + Python API

Storage service, Postgres backend setup, and programmatic API examples live in: storage service docs and Python API.


Examples

Runnable examples (prompt skills, tool adapter demos, MCP demos) live under examples/; see the examples index.


Contributing

Contributions are welcome. See contributing guide.

Open items we would love help with:

  • LeTTA / memU / ReMe adapters
  • Adapter compatibility notes + deprecation policy
  • Adapter conformance tests in CI
  • Memory pollution benchmark + trace replay regression tests

FAQ

Q: Is this just another RAG memory system?

No. Agent Artifacts focuses on:

  • transactional memory correctness
  • procedural skills as artifacts
  • decision trace auditability

Q: Why "Agent Artifacts"?

Because it describes the core idea: reusable agent behavior stored as versioned artifacts.


License

MIT. See LICENSE.

