
Local-first documentation management for AI-assisted development


Project Ontos


Portable context for the agentic era.

Never explain twice. Own your context.

Source available at github.com/ohjonathan/Project-Ontos.


The Problem

Context dies in three ways:

  1. AI Amnesia. You explain your architecture to Claude. Then again to ChatGPT. Then again to Cursor. Each starts from zero.

  2. Prototype Graveyards. You build fast in Streamlit, make dozens of product decisions, then rewrite in Next.js. The code is new. The decisions? Lost in old chat logs.

  3. Tribal Knowledge. Your project's "why" lives in Slack threads, abandoned docs, and your head. New collaborators (human or AI) rediscover everything from scratch.

The common thread: context isn't portable. And even when it exists, you don't own it—it's locked in proprietary platforms, unexportable, unversioned, gone when you switch providers.


The Solution

Ontos creates a portable knowledge graph that lives in your repo as markdown files with YAML frontmatter. No cloud service, no vendor lock-in.

Readable, not retrievable. Your context is a glass box—inspectable by humans, followable by AIs. Explicit structure instead of semantic search. You know exactly what the AI sees.

How it works:

  1. Run ontos scaffold to auto-tag your docs with YAML headers (or add them manually if you prefer)
  2. Run ontos map to generate your project's context map
  3. Any AI agent reads the map, loads what's relevant, sees the full decision history

Each tagged document carries YAML frontmatter like:

```yaml
---
id: pricing_strategy
type: strategy
depends_on: [target_audience, mission]
---
```
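
To make the "glass box" claim concrete, here is a minimal sketch (not the Ontos internals) of how a tool or agent might read that frontmatter. It hand-parses only the simple `key: value` / `key: [a, b]` subset shown above; a real implementation would use a YAML library.

```python
# Sketch: extract frontmatter from a '---'-delimited markdown document.
# Handles only the flat "key: value" and "key: [a, b]" forms shown above.
import re

def parse_frontmatter(text):
    """Return the frontmatter as a dict, or {} if none is present."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return {}
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        value = value.strip()
        if value.startswith("[") and value.endswith("]"):
            # Inline list, e.g. depends_on: [target_audience, mission]
            meta[key.strip()] = [v.strip() for v in value[1:-1].split(",") if v.strip()]
        else:
            meta[key.strip()] = value
    return meta

doc = """---
id: pricing_strategy
type: strategy
depends_on: [target_audience, mission]
---

# Pricing Strategy
"""
meta = parse_frontmatter(doc)
print(meta["id"])          # pricing_strategy
print(meta["depends_on"])  # ['target_audience', 'mission']
```

Because the structure is explicit, the `depends_on` edges can be followed deterministically instead of being recovered by semantic search.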

The hierarchy (rule of thumb: "If this doc changes, what else breaks?"):

| Layer | What It Captures | Survives Migration? |
| --- | --- | --- |
| kernel | Why you exist, core values | ✅ Always |
| strategy | Goals, audience, approach | ✅ Always |
| product | Features, user flows, requirements | ✅ Always |
| atom | Implementation details | ⚠️ Often rewritten |

Your prototype atoms get rewritten. Your product decisions don't. Interface specs and data models often survive—implementation code rarely does.


Philosophy

Intent over automation. You decide what matters. Tagging a session, connecting decisions to documents—this friction is the feature. Curation beats capture.

You own your context. Markdown in your repo, not locked in someone else's platform. No database, no account, no API key. It travels with git clone.

Shared memory over personal memory. What you remember is useless to your teammate or the AI that just opened a fresh session. Ontos encodes knowledge at the repo level. Everyone who clones gets the same brain.

Decisions outlive code. Ontos separates Space (what IS true) from Time (what HAPPENED). Your implementation atoms get rewritten on migration. Your strategy, your product decisions, your session logs—those survive.


The Premise

  • LLMs get better; your tooling should too. Ontos doesn't fight the model—it gives the model better input. As agents improve, structured context becomes more valuable, not less.
  • Platforms won't solve portability for you. Vendor lock-in is a feature, not a bug, for model providers. If you want context that moves freely, you have to own it yourself.
  • "Vibe coding" becomes "context engineering." The bottleneck isn't generating code anymore. It's giving the AI enough context to generate the right code.

Who Ontos Is For

  • Small teams (1-5 devs) switching between AI tools who are tired of re-explaining their project to Claude, then Cursor, then ChatGPT
  • Projects that outlive their prototypes—when you rewrite from Streamlit to Next.js, your decisions should survive the migration
  • Developers who want context to transfer across session resets, tool switches, and team changes
  • Anyone betting on AI-assisted development who needs reliable, portable project memory

The litmus test: Can a new person (or AI) become productive in under 10 minutes?


Use Cases

Multi-AI Workflows

Switch between Claude Code, Cursor, ChatGPT, and Gemini without re-explaining your project. Ontos can generate AGENTS.md and .cursorrules so your context activates automatically when supported.

Prototype → Production

Built a demo in Streamlit? When you rewrite in FastAPI or Next.js, your atoms are disposable but your strategy survives. Three weeks of product decisions don't vanish with the old code.

Project Handoffs

Pass a project to another developer or agency. Because you own your context, everything travels with git clone—session logs, context map, decision history. No export wizard, no platform migration, no 2-hour call.

Native IDE Integration (MCP)

Run ontos serve to start an MCP server that exposes your knowledge graph directly to AI agents. Claude Desktop, Cursor, and other MCP-compatible IDEs connect natively — no CLI parsing, no context map re-reads, just structured tool calls with live cache invalidation.

```json
{
  "mcpServers": {
    "ontos": {
      "command": "ontos",
      "args": ["serve"],
      "cwd": "/path/to/your/project"
    }
  }
}
```

Documentation Health

CI validation catches broken links, circular dependencies, and architectural violations before they become tribal knowledge buried in someone's head.
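
Circular-dependency detection is the kind of check such a validation step performs. As an illustration (the graph shape and algorithm here are assumptions, not Ontos's own validation code), a depth-first search over `depends_on` edges finds a cycle if one exists:

```python
# Sketch: find a circular depends_on chain in a doc graph.
# graph maps document id -> list of ids it depends on.
def find_cycle(graph):
    """Return one cycle as a list of ids (first id repeated at the end), or None."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = {node: WHITE for node in graph}
    stack = []

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for dep in graph.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                # dep is already on the current path: a cycle.
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE and dep in graph:
                cycle = visit(dep)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for node in graph:
        if color[node] == WHITE:
            cycle = visit(node)
            if cycle:
                return cycle
    return None

docs = {
    "pricing_strategy": ["target_audience", "mission"],
    "target_audience": ["mission"],
    "mission": [],
}
print(find_cycle(docs))  # None: the graph is a DAG
docs["mission"] = ["pricing_strategy"]  # introduce a loop
print(find_cycle(docs) is not None)  # True
```

Running a check like this in CI turns "someone eventually notices the docs contradict each other" into a failing build.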

Re-Architecture & Decision Extraction

Rewriting an app in a new stack? Export your entire knowledge graph as structured JSON and feed it to an LLM:

```shell
ontos export data --json > project_export.json
```

The export includes every document's content, dependencies, type hierarchy, and graph edges. Decisions live in the document bodies (## Key Decisions, ## Alternatives Considered), not in separate metadata fields—so the full reasoning context travels with the export.
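
As an example of pre-processing the export before handing it to an LLM, you could pull out just the decision sections. Note the field names below (`documents`, `id`, `content`) are assumptions for illustration, not the documented export schema:

```python
# Sketch: collect the "## Key Decisions" section from each exported document.
# The export structure used here is hypothetical.
import json
import re

def decision_sections(export, heading="## Key Decisions"):
    """Map document id -> body of its Key Decisions section, where present."""
    out = {}
    for doc in export["documents"]:
        match = re.search(
            rf"{re.escape(heading)}\n(.*?)(?=\n## |\Z)",
            doc["content"],
            re.DOTALL,
        )
        if match:
            out[doc["id"]] = match.group(1).strip()
    return out

export = json.loads("""{"documents": [
  {"id": "pricing_strategy",
   "content": "# Pricing\\n## Key Decisions\\n- Flat tier\\n## Notes\\nTBD"}
]}""")
print(decision_sections(export))  # {'pricing_strategy': '- Flat tier'}
```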

Give the JSON to any LLM with a prompt like:

"Extract all key decisions, alternatives rejected, and their rationale from these documents. Group by component. Flag any decisions that would need to be revisited for a migration from [current stack] to [target stack]."

Your atoms get rewritten. Your decisions don't have to.


Quick Start

Requirements: Python 3.9+, inside a git repository

Install (recommended):

```shell
# pipx installs in an isolated environment and adds to PATH automatically
pipx install ontos
```

> [!TIP]
> Don't have pipx? Install it with `brew install pipx` (macOS) or `pip install pipx`. See pipx docs.

Alternative install:

```shell
pip install ontos
```

> [!TIP]
> For MCP server mode (native AI IDE integration): `pipx install 'ontos[mcp]'` or `pip install 'ontos[mcp]'`. Requires Python 3.10+. See the Migration Guide v3→v4.

> [!NOTE]
> "command not found: ontos"? Your Python scripts directory may not be on PATH.
>
> - Quick fix: use `python -m ontos` instead (e.g., `python -m ontos map`)
> - Permanent fix: add Python's bin directory to your PATH (the `pip install` output shows the location)


Initialize:

```shell
cd your-project
ontos init
```

This creates:

  • .ontos.toml configuration file
  • docs/ directory with full type hierarchy (kernel/, strategy/, product/, atom/, logs/, reference/, archive/)
  • Ontos_Context_Map.md document graph
  • Git hooks (optional)
  • AGENTS.md for AI agent activation (optional)

Scaffold existing docs: If you have existing markdown files, init will prompt to add Ontos metadata:

```shell
ontos init --scaffold    # Auto-scaffold docs/ without prompting
ontos init --no-scaffold # Skip scaffold prompt entirely
```

Activate: Tell any AI agent that supports Ontos activation:

"Ontos" (or "Activate Ontos")

If configured, the agent reads AGENTS.md, regenerates the context map, loads relevant files, and confirms what context it has.


Workflow

Agent Prompts

Use these phrases with an AI agent that supports Ontos activation. They are not shell commands.

| Phrase | What It Does |
| --- | --- |
| "Ontos" | Activate context: agent reads the map, loads relevant files |
| "Archive Ontos" | End session: save decisions as a log for next time |
| "Maintain Ontos" | Health check: scan for new files, fix broken links, regenerate map |
```shell
# CLI equivalents
ontos scaffold     # Auto-tag docs with YAML frontmatter
ontos map          # Generate/update context map
ontos log          # Create a session log
ontos doctor       # Check graph health
ontos maintain     # Run weekly maintenance (9 tasks)
ontos link-check   # Scan for broken references
ontos rename       # Safe ID rename across graph
ontos promote      # Promote docs to Level 2
ontos agents       # Regenerate AGENTS.md and .cursorrules
ontos serve        # Start MCP server for IDE integration
```

Compact context maps for token-constrained agents:

```shell
ontos map --compact           # Minimal one-line-per-doc output
ontos map --compact rich      # With summaries
ontos map --compact tiered    # Prose summary + type-ranked compact
```

Update: pipx upgrade ontos or pip install --upgrade ontos


Best Practices

  • Start from the top. Define kernel and strategy before creating atoms. The hierarchy exists for a reason.
  • Curate, don't hoard. Not every session needs a log. Archive the ones with decisions that matter.
  • Review scaffold output. Auto-tagging proposes; you decide. The human judgment is the point.
  • Run ontos doctor periodically. Catch broken links and dependency issues before they compound.
  • Scan for secrets before release. Use gitleaks detect and trufflehog git file://. --no-update (see .trufflehog-exclude-paths.txt).

What Ontos Is NOT

  • Not a RAG system. We use structural graph traversal, not semantic search. Concepts are curated tags, not vector embeddings. Deterministic beats probabilistic for critical decisions.
  • Not zero-effort. You decide what matters (curation). The tooling handles the paperwork (tagging, validation, map generation).
  • Not a cloud service. Markdown files in your repo. No API keys, no accounts.
  • Not magic. The graph and map are deterministic—same input, same output. What the AI does with that context is still AI.

If you want automatic context capture, use a vector database. If you want reliable, portable, inspectable context, use Ontos.


FAQ

Why does Ontos start at version 3?

Versions 1 and 2 were internal. I built Ontos as a personal tool to manage context across AI sessions and tech stack migrations. After using it for months and seeing others struggle with the same problems—re-explaining projects to each new AI, losing decisions when prototypes get rewritten—I packaged the workflow as a Python library.

Version 3 is when Ontos became public. The earlier versions live on in the design decisions and battle-tested workflows, just not in a public release.


Roadmap

| Version | Status | Highlights |
| --- | --- | --- |
| v4.0.0 | ✅ Current | MCP server mode: 8 tools for native AI IDE integration |
| v4.1 | Next | Portfolio index, cross-project tools, HTTP/SSE transport |

- v3.0 transformed Ontos from repo-injected scripts into a pip-installable package.
- v3.1 made all CLI commands native Python.
- v3.2 added re-architecture support, environment detection, and activation resilience.
- v3.3 shipped 62 audit-derived hardening fixes plus link-check, rename, unified JSON envelopes, and a canonical document loader.
- v3.3.1 reduced link-check false positives by 89% and added promote_check to the maintenance pipeline.
- v3.4 added `--compact tiered` context maps for token-constrained agents.
- v4.0 adds an MCP server mode with 8 read-only tools, enabling native integration with AI IDEs like Claude Desktop and Cursor without CLI overhead.


Documentation

Note: Documentation links below point to the latest source on GitHub and may reflect features not yet released.


Feedback

Issues and feature requests welcome via GitHub Issues.


License

Apache-2.0. See LICENSE.

Why Apache-2.0?

Ontos exists because context should be portable and owned by you. A restrictive license would contradict that philosophy.

You can use, modify, distribute, and build commercial products on Ontos. The requirements: include the license text, keep copyright notices, and note significant changes if you redistribute. Apache-2.0 also includes a patent grant from contributors (though not from unrelated third parties).

Contributing

Issues and PRs welcome. If you're planning something substantial, open an issue first so we can align on direction.
