Scibudy

Scibudy is a Codex-native scientific research expansion assistant for scholarly search, library management, full-text ingestion, and local semantic analysis.

Scibudy combines:

  • a local MCP server for Codex
  • a shell-first CLI
  • a browser management UI
  • a layered install system for CPU-first and GPU-extended deployments

Summary:

Scibudy is a research assistant for Codex that provides scholarly search, literature library management, full-text analysis, and high-quality local semantic retrieval. It can be used as an MCP tool, as a standalone CLI, or through a local management interface.

Status

  • License: Apache-2.0
  • Release posture: stable v0.x
  • Primary platforms: Linux and macOS
  • Full local GPU path: Linux + NVIDIA first

Installation

Before you install

For most new users, the only real prerequisites are:

  • Node.js 18+
  • Python 3.10+
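
Both prerequisites can be verified up front; a minimal shell check (not part of Scibudy itself, just a convenience sketch):

```shell
# Sketch: report whether each required tool is installed, and its version.
# Prints one status line per tool; never aborts, so it is safe to source.
check_tool() {
  # $1 = command name, $2 = human-readable minimum version
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found ($("$1" --version 2>&1 | head -n 1))"
  else
    echo "$1: not found (need $2)"
  fi
}
check_tool node "18+"
check_tool python3 "3.10+"
```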


Unified installer

npx scibudy-install --profile base

Profiles:

  • base: search, library management, UI, Codex config
  • analysis: base + analysis-oriented runtime conventions
  • gpu-local: local GPU model environment and warm flow
  • full: full bootstrap for a Linux GPU workstation
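
Profile choice can be automated; a sketch that picks between the two common profiles based on detected hardware (the nvidia-smi heuristic is an assumption, and the command is printed rather than run):

```shell
# Sketch: select an install profile from available hardware.
# Assumption: a working nvidia-smi implies the gpu-local path is worthwhile.
if command -v nvidia-smi >/dev/null 2>&1; then
  PROFILE=gpu-local
else
  PROFILE=base
fi
echo "npx scibudy-install --profile $PROFILE"
```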

Source install

git clone git@github.com:ONEMULE/scibudy.git
cd scibudy
python3 -m venv .venv
. .venv/bin/activate
python -m pip install -e ".[dev]"
scibudy bootstrap --profile base --install-codex

Runtime commands

Primary command aliases:

  • scibudy
  • scibudy-mcp
  • compatibility aliases: research-cli, research-mcp

Examples:

scibudy search "simulation-based calibration" --mode general
scibudy collect "simulation-based calibration" --target-dir ~/Desktop/sbc-library
scibudy analysis-settings
scibudy ingest-library <library_id>
scibudy search-evidence <library_id> calibration
scibudy profiles
scibudy workflow "calibration methods in simulation-based inference" --limit 50 --topic "calibration in simulation-based inference"
scibudy workflow "calibration methods in simulation-based inference" --dry-run
scibudy workflow "calibration methods in simulation-based inference" --quality-mode fast
scibudy security-audit
scibudy doctor --install-readiness
scibudy synthesize-library <library_id> "causal inference robustness" --profile general
scibudy synthesize-library <library_id> "calibration in simulation-based inference" --profile sbi_calibration
scibudy ui --open
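
The lower-level commands above compose into a manual research loop; a sketch that only prints each step (the library id is a placeholder — substitute the one `scibudy collect` reports):

```shell
# Sketch: the manual research loop, printed rather than executed.
# <library_id> is a placeholder, not a real id.
TOPIC="simulation-based calibration"
LIB="<library_id>"
printf '%s\n' \
  "scibudy search \"$TOPIC\" --mode general" \
  "scibudy collect \"$TOPIC\" --target-dir ~/Desktop/sbc-library" \
  "scibudy ingest-library $LIB" \
  "scibudy search-evidence $LIB calibration"
```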

For Codex and other agents, prefer the high-level workflow entrypoint when you want the whole research loop:

Use research/research_workflow with query="calibration methods in simulation-based inference", mode="general", limit=50, synthesize=true.

Use lower-level tools such as search_literature, collect_library, ingest_library, and build_research_synthesis when you need manual control over each step.

Use dry_run=true when an agent should preview writes and planned steps before executing. Use quality_mode=fast for low-cost exploration, standard for the normal workflow, and deep when missing full text or unsupported claims require stricter follow-up.
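
Expressed as an MCP tool call, the workflow entrypoint above might look like this. The argument names come from the text; the `name`/`arguments` envelope follows the standard MCP tools/call shape, and the exact framing your client emits may differ:

```json
{
  "name": "research/research_workflow",
  "arguments": {
    "query": "calibration methods in simulation-based inference",
    "mode": "general",
    "limit": 50,
    "synthesize": true,
    "dry_run": true,
    "quality_mode": "standard"
  }
}
```

Setting dry_run to true here previews writes and planned steps; rerun with it false (or omitted) to execute.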

For safer agent automation, run scibudy security-audit and scibudy doctor --install-readiness before delegating long-running research workflows.

Domain profiles

Domain profiles do not limit Scibudy's search scope or providers. Search remains general and multi-source by default.

Profiles only tune full-text synthesis: section weighting, evidence markers, unsupported-claim detection, and risk flags.

  • general: default all-domain synthesis profile.
  • auto: chooses a synthesis profile from the topic while preserving general search.
  • sbi_calibration: an example preset for simulation-based inference calibration workflows.


Local model stack

The highest-quality local retrieval path currently uses:

  • Qwen/Qwen3-Embedding-4B
  • Qwen/Qwen3-Reranker-4B

Recommended workflow:

scibudy install-local-models
scibudy warm-local-models --background
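
Before warming, it can help to know whether the weights are already present; a sketch assuming Scibudy pulls them via huggingface_hub, whose default cache layout is `$HF_HOME/hub/models--<org>--<name>`:

```shell
# Sketch: check the default Hugging Face cache for the two Qwen models.
# Assumption: weights are fetched through huggingface_hub's standard cache.
CACHE="${HF_HOME:-$HOME/.cache/huggingface}/hub"
for MODEL in Qwen3-Embedding-4B Qwen3-Reranker-4B; do
  if [ -d "$CACHE/models--Qwen--$MODEL" ]; then
    echo "$MODEL: cached"
  else
    echo "$MODEL: not cached yet"
  fi
done
```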


Repository layout

research_mcp/   Python runtime, MCP server, CLI, analysis engine
web/            UI source and built assets
bin/            npm/bootstrap entrypoints
docs/           Bilingual project documentation
examples/       Copyable usage examples
scripts/        Release and smoke-check helpers
.github/        CI, templates, automation

Open-source project standards

This repository is intentionally organized like a professional open-source library:

  • documented install profiles
  • release manifest and bootstrap state
  • contributor and support policies
  • issue/PR templates
  • CI and packaging checks
  • bilingual documentation for core user workflows

Development

Core local checks:

make test
make build-ui
make package-check
make release-check

