
Agenda Intelligence MD


A drop-in markdown cognition layer that turns news scanning into decision-ready analysis.

Bottom line: agents using this protocol move from “monitor developments” to “watch these 3 indicators; if X happens, decision Y becomes urgent.”


Who it is for

  • Policy analysts & think‑tanks – structured monitoring of regulatory and geopolitical signals.
  • Sanctions & compliance teams – evidence‑backed briefs that satisfy audit requirements.
  • Market‑risk analysts – concise impact snapshots for faster investment decisions.
  • Founders & operators – external chaos turned into clear strategic inputs.
  • AI agents – any LLM‑based system that needs to analyze public agenda without drifting into generic summaries.

Quick install

From PyPI

pip install agenda-intelligence-md

From GitHub Release (v0.5.2)

pip install https://github.com/vassiliylakhonin/agenda-intelligence-md/releases/download/v0.5.2/agenda_intelligence_md-0.5.2-py3-none-any.whl

Editable install from source

git clone https://github.com/vassiliylakhonin/agenda-intelligence-md
cd agenda-intelligence-md
pip install -e .

CLI happy path (aha moment in 5–10 minutes)

The start command is the primary onboarding entry‑point. It prints a trimmed source plan, a brief template, and next commands.

# 1. Onboard with a source category
agenda-intelligence start technology-ai

# Output: trimmed source plan + brief template + next commands

# 2. Validate a bundled example brief
agenda-intelligence validate-brief examples/agenda-brief.json

# 3. Score a before/after markdown example (quality check)
agenda-intelligence score examples/before-after/eu-ai-act.md

Scorer status: the score command is currently a before/after evaluation harness. It expects markdown files containing the headings ## Before: generic agent output and ## After: with Agenda-Intelligence.md, such as examples/before-after/*.md; it does not yet score arbitrary brief.json files.
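The expected file layout can be illustrated with a small sketch. The heading names come from the convention described above; the parsing logic here is only an illustration, not the scorer's actual implementation, and the brief text is invented:

```python
# Illustrative sketch: split a before/after example into its two sections.
# Heading names follow the scorer's documented convention; the real `score`
# command may parse files differently.
EXAMPLE = """\
## Before: generic agent output
Tensions rose this week; observers are monitoring developments.

## After: with Agenda-Intelligence.md
Bottom line: watch the enforcement draft; if adopted, re-price EU exposure.
"""

def split_sections(text: str) -> dict[str, str]:
    """Map each level-2 heading to the text beneath it."""
    sections: dict[str, str] = {}
    current = None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return sections

parts = split_sections(EXAMPLE)
print(sorted(parts))
```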

Demo output (what you see after start):

=== Trimmed source plan ===
{
  "must_check": ["tech-release", "policy-update", "market-data"],
  "watch_indicators": ["regulation draft", "enforcement action"]
}

=== Brief template (fill in) ===
{
  "bottom_line": "<summary>",
  "signal_classification": "<signal>",
  "what_changed": "<what changed>",
  "main_uncertainty": "<main uncertainty>",
  "watch_next": ["<indicator 1>", "<indicator 2>"]
}
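A filled template is just a JSON object with these five fields. As a minimal local sanity check (illustrative only: the field names come from the template above, the brief content is invented, and real validation is agenda-intelligence validate-brief against the bundled schemas), one can verify that no placeholder values remain:

```python
import json

# Field names taken from the brief template shown above.
TEMPLATE_FIELDS = ["bottom_line", "signal_classification", "what_changed",
                   "main_uncertainty", "watch_next"]

def check_brief(brief: dict) -> list[str]:
    """Return a list of problems: missing fields or unfilled <placeholder> values."""
    problems = []
    for field in TEMPLATE_FIELDS:
        if field not in brief:
            problems.append(f"missing field: {field}")
            continue
        value = brief[field]
        values = value if isinstance(value, list) else [value]
        if any(isinstance(v, str) and v.startswith("<") for v in values):
            problems.append(f"unfilled placeholder in: {field}")
    return problems

# Invented example brief; one watch_next entry is still a template placeholder.
brief = json.loads("""{
  "bottom_line": "EU AI Act enforcement draft moves to committee vote",
  "signal_classification": "regulatory-escalation",
  "what_changed": "Committee scheduled a binding vote",
  "main_uncertainty": "Whether exemptions survive the vote",
  "watch_next": ["committee vote date", "<indicator 2>"]
}""")
print(check_brief(brief))
```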

Protocol · Source Policy · Schemas

| Layer | Purpose | Status |
|---|---|---|
| Markdown protocol (Agenda-Intelligence.md) | Core reasoning workflow (signal classification, watch‑next, etc.) | ✅ Stable |
| Source Acquisition Layer | Tells the agent which source types to check before making claims (sanctions, regulation, elections, conflict, etc.) | ✅ Stable |
| JSON schemas | Validate briefs, evidence packs, memory cards | ✅ Stable |
| AnalysisBank | Memory layer that stores reusable reasoning patterns from good/bad outputs | ✅ Stable |
| Regional & Sector lenses | Central Asia & Caspian, Middle East, EU; sanctions, export controls | ✅ Stable |

Stable today vs Experimental

✅ Stable today

  • Markdown protocol (Agenda-Intelligence.md) – core reasoning workflow.
  • JSON schemas – validation for briefs, evidence packs, memory cards.
  • CLI validation – validate-brief, validate-evidence, validate-manifest.
  • Source plans – source-plan, list-source-packs, source-types.
  • Guided start – start command prints trimmed plan + brief template.
  • Evaluation toolkit – evals/rubric.md, evals/llm_judge_prompt.txt, evals/human_checklist.md, evals/cases/*.json.

🧪 Experimental / Planned

  • MCP integration – the primary adoption channel, but still a sketch (docs/integrations/mcp.md). Full implementation is on the roadmap.
  • Fetch command – stub in CLI, full evidence‑pack retrieval not implemented.
  • Scorer / eval – score command relies on eval_before_after.py (only in editable installs).
  • Generate‑brief – not yet exposed; use start + manual template fill.

Note: The project is young. Stable parts are ready for production use; experimental bits are usable for testing but may change.


Examples

  • Source‑backed briefs (with evidence mode & source plans):
    examples/source-backed/eu-ai-act.md, sanctions-routing.md, red-sea-shipping.md
  • Classic examples: examples/hormuz_strait_brief.md, eu-brief.md, central-asia-caspian-brief.md
  • Before/after: examples/before-after/ – shows the delta when the protocol is applied.

Documentation

| Resource | Link |
|---|---|
| Quickstart | docs/quickstart.md |
| End‑to‑end tutorial | docs/tutorial.md |
| Evaluation assets | evals/ – rubric, LLM judge prompt, human checklist, sample cases |
| Use‑cases | docs/use-cases/ – policy monitoring, sanctions compliance, market risk, founder context |
| Integrations | docs/integrations/ – Claude Code, OpenAI Codex, Cursor, MCP |
| Roadmap | ROADMAP.md |
| Changelog | CHANGELOG.md |

Repository structure

agenda-intelligence-md/
├─ src/agenda_intelligence/   # Python package
├─ schemas/                   # JSON schemas
├─ examples/                  # sample briefs, evidence packs, source‑backed examples
├─ analysis-bank/             # memory cards (failures & successes)
├─ evals/                     # evaluation rubric, LLM judge, human checklist, cases
├─ docs/                      # guides, tutorials, use‑cases, integrations
├─ skills/agenda-intelligence/ # OpenClaw skill wrapper
└─ tests/                     # pytest suite

Roadmap (high‑level)

  • v0.6 – Full MCP server exposing all tools via HTTP/WebSocket.
  • v0.7 – Complete get_protocol, list_lenses, get_lens, source_plan, score_output in MCP.
  • v0.8 – Automated CI quality gate using the evaluation toolkit.
  • v1.0 – Stable API, broader adoption‑channel support, production‑grade MCP.

Contributing

Pull requests are welcome! Please:

  1. Open an issue to discuss changes.
  2. Create a feature branch (feat/..., fix/...).
  3. Run pytest and ensure all tests pass.
  4. Update docs if behavior changes.

License

MIT – free for any use.


Why this exists

Most agent‑written news analysis is a polished recap that doesn’t change any decision. This project gives agents a stricter workflow so their output actually helps someone decide, hedge, or act.
