Hyperresearch

Agent-driven research knowledge base. Install it, and your AI coding agent can collect, search, and synthesize web research into a persistent, searchable wiki — across sessions.

pip install hyperresearch
hyperresearch install    # init vault + hook your agent

That's it. Your agent now checks the research base before searching the web, saves useful findings automatically, and builds a knowledge graph over time.

How it works

  1. Agent finds something useful (via its own web search, browsing, or your input)
  2. Agent saves it: hyperresearch fetch "https://..." --tag ml -j or hyperresearch note new "Title" --body-file content.md -j
  3. Next time it needs info, the PreToolUse hook reminds it: "check hyperresearch first"
  4. Agent searches the KB: hyperresearch search "attention mechanisms" -j
  5. Knowledge compounds across sessions — no redundant fetches, no lost context
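Step 5's dedupe logic can be sketched in a few lines of Python. The should_fetch helper and its trailing-slash normalization are hypothetical; the real mechanism is hyperresearch sources check <url> -j.

```python
# Sketch of step 5: skip URLs the knowledge base already holds.
# should_fetch and the normalization are invented for illustration;
# the real check is `hyperresearch sources check <url> -j`.
def should_fetch(url: str, fetched: set[str]) -> bool:
    key = url.rstrip("/")
    return key not in fetched

seen = {"https://example.com/attention"}
print(should_fetch("https://example.com/attention/", seen))  # False
print(should_fetch("https://example.com/new", seen))         # True
```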
Vault layout

your-repo/
  .hyperresearch/        # Hidden: config, SQLite index, hook script
  research/
    notes/               # Markdown notes (source of truth)
    index/               # Auto-generated wiki pages
  CLAUDE.md              # Agent docs (auto-injected)

Commands

# Setup
hyperresearch install                        # Init + hooks (Claude Code, Cursor, Codex, Gemini)
hyperresearch install --platform all         # Hook all supported platforms

# Collect
hyperresearch fetch <url> --tag t -j         # Save a URL as a note
hyperresearch research "topic" --max 5 -j    # Search → fetch → link → synthesize (needs crawl4ai)

# Search & read
hyperresearch search "query" -j              # Full-text search
hyperresearch note show <id> -j              # Read a note
hyperresearch note list --tag ml -j          # List notes by tag

# Manage
hyperresearch sources list -j                # What URLs have been fetched
hyperresearch sources check <url> -j         # Has this URL been fetched?
hyperresearch repair -j                      # Fix links, promote notes, rebuild indexes
hyperresearch status -j                      # Vault health overview

Every command returns {"ok": true, "data": {...}} with -j.
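A wrapper script or agent can rely on that envelope shape. A minimal sketch — only the ok/data keys come from the docs above; the sample payload and the "error" key are invented:

```python
import json

def unwrap(raw: str) -> dict:
    """Parse a hyperresearch -j envelope; raise if ok is false."""
    env = json.loads(raw)
    if not env.get("ok"):
        # "error" key is an assumption; the real failure shape may differ
        raise RuntimeError(str(env.get("error", "command failed")))
    return env["data"]

sample = '{"ok": true, "data": {"hits": [{"id": "n1", "title": "Attention"}]}}'
print(unwrap(sample)["hits"][0]["id"])  # n1
```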

Agent integration

hyperresearch install does three things:

  1. Creates the vault (.hyperresearch/ + research/)
  2. Injects usage docs into CLAUDE.md (or AGENTS.md, GEMINI.md, copilot-instructions.md)
  3. Installs PreToolUse hooks that fire before web searches:
Platform      Hook                              Trigger
Claude Code   .claude/settings.json             Before Glob, Grep, WebSearch, WebFetch
Codex         .codex/hooks.json                 Before Bash
Cursor        .cursor/rules/hyperresearch.mdc   Always-apply rule
Gemini CLI    .gemini/settings.json             Before tool calls

The hook doesn't block — it reminds the agent to check the research base first.

Web providers

By default, agents use their own web tools (WebSearch, WebFetch) and pipe content into hyperresearch. For JS-rendered pages, blocked sites, or authenticated content, install crawl4ai (local headless Chromium):

pip install hyperresearch[crawl4ai]
crawl4ai-setup                        # Install browser (one-time)

Configure in .hyperresearch/config.toml:

[web]
provider = "crawl4ai"    # or "builtin" (stdlib urllib, no JS)
profile = ""             # Browser profile name for authenticated crawling (optional)
magic = false            # Anti-bot stealth mode (recommended for social media)

Authenticated crawling

Access login-gated content (LinkedIn, Twitter, paywalled sites) by creating a login profile:

hyperresearch setup       # Choose option 1: a browser opens, log into your sites, done
# Or manually:
crwl profiles             # Create a profile, log in, press q when done

Then point the vault at the profile in .hyperresearch/config.toml:

[web]
profile = "research"      # Your profile name

MCP server

For Claude Desktop, Cursor inline, or any MCP-compatible agent:

pip install hyperresearch[mcp]
{
  "mcpServers": {
    "hyperresearch": {"command": "hyperresearch", "args": ["mcp"]}
  }
}

10 tools: search_notes, read_note, read_many, list_notes, get_backlinks, get_hubs, vault_status, lint_vault, check_source, list_sources.

Philosophy

  • The agent IS the LLM — hyperresearch is a dumb tool that stores, indexes, and searches. It never calls an LLM.
  • Files are truth — markdown notes survive the tool dying. SQLite is a rebuildable cache.
  • Agents already have web access — hyperresearch is where they store what they find, not how they find it.
  • Check before you fetch — the hook system prevents redundant web searches across sessions.
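The "files are truth, SQLite is a rebuildable cache" split can be sketched with the stdlib sqlite3 module and an FTS5 virtual table. The schema here is invented for illustration; the real index layout is not documented above:

```python
import sqlite3

# Rebuildable full-text index over note bodies (invented schema).
# Dropping and re-creating this table loses nothing: the markdown
# notes on disk remain the source of truth.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE notes USING fts5(id, body)")
db.executemany(
    "INSERT INTO notes VALUES (?, ?)",
    [("n1", "attention mechanisms in transformers"),
     ("n2", "headless chromium crawling with crawl4ai")],
)
rows = db.execute(
    "SELECT id FROM notes WHERE notes MATCH ?", ("attention",)
).fetchall()
print(rows)  # [('n1',)]
```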

Requirements

  • Python 3.11+
  • Works on Windows, macOS, Linux

License

MIT
