
AI-powered autonomous novel generation system — install with 'pip install novelscribe', run with 'scribe'

Project description

NovelScribe


An AI-powered autonomous novel generation agent. Give it a concept, pick a genre, and it writes a full novel — outline, characters, world-building, chapters, and all.

Language / 语言 / 語言: English | 简体中文 | 繁體中文


Install

pipx install novelscribe

That's it. The scribe command is now available.

Why pipx? On macOS and many Linux distros, pip install is blocked by the system Python (externally-managed-environment error). pipx installs into an isolated virtualenv, avoiding this entirely. Install pipx first if you don't have it: brew install pipx && pipx ensurepath (macOS) or sudo apt install pipx && pipx ensurepath (Linux).

Other install methods
pip install --user novelscribe    # user-level (may warn on macOS)

Or the one-liner script (auto-detects pipx/pip):

curl -sSL https://raw.githubusercontent.com/huangjien/wst/main/novel_harness/install.sh | bash

30-Second Demo

# Create a new novel project
scribe new my-novel

# Run fully autonomous generation (outline → characters → world → chapters)
scribe autonomous start my-novel

# Check progress or ask questions via the AI assistant
scribe help

What It Does

NovelScribe chains 9 specialized AI agents into an autonomous pipeline:

discuss → genre → characters → world-building → outline → chapter-planning → writing → continuity → editing

Each agent is a Markdown file with a system prompt and model config. The orchestrator runs them step-by-step through YAML-defined workflows, with verification loops that auto-fix issues until quality passes.
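The verification-loop pattern can be sketched in a few lines of Python. This is an illustrative sketch only, not NovelScribe's actual API; `run_step`, `verify`, and `fix` are hypothetical names standing in for an agent invocation, a quality check, and an auto-fix pass:

```python
# Sketch of a verification loop: run a step, check its output, and apply
# fixes until the check passes or attempts run out. Illustrative names only.
def run_with_verification(run_step, verify, fix, max_attempts=3):
    output = run_step()
    for _ in range(max_attempts):
        issues = verify(output)
        if not issues:            # quality check passed
            return output
        output = fix(output, issues)
    raise RuntimeError(f"quality check failed after {max_attempts} fix attempts")

# Toy example: the "agent" emits a draft missing a final period; "fix" adds it.
draft = run_with_verification(
    run_step=lambda: {"chapter": 1, "text": "Once upon a time"},
    verify=lambda d: [] if d["text"].endswith(".") else ["missing final period"],
    fix=lambda d, issues: {**d, "text": d["text"] + "."},
)
```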

Key Features

  • Autonomous generation — one command produces a complete novel
  • 9 specialized agents — writer, editor, character designer, world-builder, continuity checker, etc.
  • Verification loops — auto-detects and fixes plot holes, inconsistencies, and quality issues
  • Self-improving engine (v3.0) — closed-loop learning: generation feedback → consistency guardian → graph-aware generation → adaptive template selection → internet knowledge harvesting
  • Interactive revision (v3.1) — post-phase quality control with RevisionLoop: LLM-powered revision rounds on creative phases, git-tracked history, scribe autonomous start --interactive-revision
  • Hook lifecycle framework (v2.2) — YAML-declared hooks across workflow/runtime with required vs optional semantics, strict guardrails for builtin.http / builtin.shell, and fail-closed external plugin loading
  • Resilience + orchestration upgrades (v2.3) — step retry/timeout, hook when conditions, runtime transient retry, DAG parallel execution with rollback, exponential+jitter backoff, checkpoint pruning, and trigger API
  • Reliability hardening (v2.4) — durable trigger store (journal+snapshot), HMAC webhook security, async dispatch with retry/DLQ semantics, strict-mode runtime gates, event-driven checkpoint decisions, docs consistency guard
  • Governance & schema maturity (v2.5) — secrets management with ${secrets.*} resolution, reusable step templates, workflow-level timeout, output schema validation, versioned artifact management, human-in-loop default, language-consistent output (en/zh-CN/zh-TW)
  • Quality gates — Bronze / Silver / Gold / Platinum tiers with metrics and benchmarks
  • Parallel execution — runs independent agents concurrently for 2-4x speedup
  • LLM streaming — real-time token streaming on all 8 providers
  • Circuit breakers — automatic failover between LLM providers on persistent failures
  • Schema versioning — forward-compatible agent/workflow definitions with migration support
  • Run logging — structured JSONL logs with scribe logs run <project> command
  • Scales to 3M words — hierarchical summaries, memory optimization, chapter pagination, phase-level caching
  • Multi-language — English, 简体中文, 繁體中文 (with fallback)
  • 8 LLM providers — OpenAI, Claude, Gemini, Ollama, LM Studio, MiniMax, ZhipuAI (z.ai), Moonshot (Kimi)
  • AI Chat assistant — scribe help <question> for instant answers about novel writing and scribe usage (interactive: scribe help)
  • Model switching — scribe model list/switch/current to quickly list, switch, and inspect AI providers and models
  • ONNX-accelerated search — semantic KB retrieval with ONNX Runtime for 3-5x faster model loading

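The "exponential+jitter backoff" mentioned above is a standard retry strategy; a minimal sketch (not NovelScribe's internal implementation) of the full-jitter variant:

```python
import random

def backoff_delays(base=1.0, cap=30.0, attempts=5, rng=random.random):
    """Exponential backoff with full jitter: delay_n ~ U(0, min(cap, base * 2**n))."""
    return [rng() * min(cap, base * (2 ** n)) for n in range(attempts)]

# With rng fixed at 1.0 this yields the deterministic upper bounds per attempt.
delays = backoff_delays(rng=lambda: 1.0)
```

Jitter spreads concurrent retries apart so a burst of failing LLM calls does not hammer the provider in lockstep.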
Usage

Create a Novel

scribe new my-novel --provider openai --model gpt-4

Run Autonomous Generation

scribe autonomous start my-novel

Run Individual Phases

# Generate the outline
scribe run workflow outline --project my-novel

# Run a single agent
scribe run agent writer_agent --project my-novel

# Preview without executing
scribe run workflow writing --project my-novel --dry-run

Explore Agents & Workflows

scribe agents list
scribe agents show writer_agent

scribe workflows list
scribe workflows show outline

Templates

scribe templates list                          # List all templates + KB stats
scribe templates list character                # List KB character templates
scribe templates list character "male, brave"  # Search KB character templates
scribe templates show heros_journey --category plot
scribe templates kb search "hero journey"      # Search template knowledge base
scribe templates kb stats                      # Show KB statistics
scribe templates create plot -n "My Structure" -d "Description" -b "0:Hook:Description"

Semantic search: Install with uv sync --extra search for embedding-based KB retrieval. ONNX Runtime is included for 3-5x faster model loading.
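Embedding-based retrieval of this kind typically ranks entries by cosine similarity between the query vector and stored template vectors. A minimal sketch, assuming toy three-dimensional vectors in place of real ONNX embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy knowledge base: template name -> embedding vector (illustrative values).
kb = {
    "heros_journey": [0.9, 0.1, 0.0],
    "three_act": [0.1, 0.9, 0.1],
}
query = [0.85, 0.15, 0.0]  # embedding of e.g. "hero journey"
best = max(kb, key=lambda name: cosine(query, kb[name]))
```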


Knowledge Harvesting

scribe kb harvest "three-act structure"               # Harvest knowledge from the internet
scribe kb harvest "character archetypes" --kind character_blueprint
scribe kb harvest "world building tips" --lang en --max-sources 5

Setup & Configuration

scribe setup interactive                      # Interactive setup wizard
scribe setup apikey openai sk-...             # Configure API key
scribe setup apikey ollama                    # Configure local provider (no key needed)
scribe setup apikey openai sk-... --base-url  # Custom base URL (proxy support)
scribe setup language zh-CN                   # Set default language
scribe setup install-skill                    # Install IDE chatbot skill
scribe setup verify                           # Verify installation

Model Management

scribe model list                             # List all providers & models
scribe model switch anthropic                 # Switch provider (uses default model)
scribe model switch openai gpt-4o-mini        # Switch to specific model
scribe model current                          # Show active provider & model

Checkpoint Management (Human-in-the-Loop)

scribe checkpoint list my-novel               # List checkpoints
scribe checkpoint show my-novel               # Show current proposal
scribe checkpoint approve my-novel            # Approve checkpoint
scribe checkpoint modify my-novel             # Modify and approve
scribe checkpoint reject my-novel             # Reject checkpoint
scribe checkpoint history my-novel            # Show decision history
scribe checkpoint compare my-novel            # Compare all alternatives

Proactive Suggestions

scribe proactive enable                       # Enable proactive suggestions
scribe proactive disable                      # Disable proactive suggestions
scribe proactive status                       # Show configuration
scribe proactive list                         # List available suggestions
scribe proactive history                      # Show acceptance history
scribe proactive review                       # Review pending suggestions
scribe proactive promote <id>                 # Promote suggestion to global KB

Project Management

scribe project list                           # List all novel projects

Version & Updates

scribe version info              # current version
scribe version check             # check PyPI for updates
scribe version update            # self-update to latest
scribe version update --yes      # skip confirmation

AI Chat Assistant

Ask questions about scribe usage or novel writing craft — no quotes needed:

scribe help how to create a new character?
scribe help what agents are available?
scribe help --provider anthropic how to build a plot?

# Interactive multi-turn conversation
scribe help

The AI reads all agent definitions and project documentation to give contextual answers. Supports English, 简体中文, and 繁體中文.

Run Logs

scribe logs run <project>        # view structured run logs

Character Archetypes

scribe character archetypes list              # list all built-in archetypes
scribe character archetypes show hero         # view archetype details
scribe character convert hero "John Doe"      # convert archetype to character
scribe character convert rebel "Jane" --project my-novel  # add to project

Export

scribe export manuscript my-novel --format pdf     # export as PDF
scribe export manuscript my-novel --format epub    # export as EPUB
scribe export manuscript my-novel --format docx    # export as DOCX
scribe export manuscript my-novel --format md      # export as Markdown
scribe export manuscript my-novel --format pdf --output-dir ./out/  # custom output directory

Review

scribe review start my-novel                  # run review pipeline
scribe review report my-novel                 # generate review report
scribe review formats                         # list report export formats

Quality Assurance

scribe qa analyze my-novel                    # quality metrics analysis
scribe qa validate my-novel                   # run validation rules
scribe qa benchmark my-novel                  # benchmark against standards
scribe qa gate-check my-novel --chapter 1 --level gold  # quality gate check
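A quality gate maps a score to the highest tier whose threshold it meets. The thresholds below are hypothetical placeholders, not NovelScribe's actual gate criteria:

```python
# Tiers checked highest-first; thresholds are illustrative, not the real ones.
TIERS = [("platinum", 0.95), ("gold", 0.85), ("silver", 0.70), ("bronze", 0.50)]

def gate(score):
    """Return the highest quality tier a score qualifies for, or 'fail'."""
    for name, threshold in TIERS:
        if score >= threshold:
            return name
    return "fail"
```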

Parallel Execution

scribe parallel plan my-novel                 # plan parallel execution
scribe parallel execute my-novel              # execute with concurrency
scribe parallel benchmark                     # benchmark parallel speedup
scribe parallel stats                         # show execution statistics

NovelScribe checks for updates on startup (once per 24h) and shows a notification when a new version is available.
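The once-per-24h throttle amounts to comparing the stored timestamp of the last check against the current time. A minimal sketch of the idea (not the actual implementation):

```python
import time

CHECK_INTERVAL = 24 * 60 * 60  # 24 hours, in seconds

def should_check_for_updates(last_check_ts, now=None):
    """Check if no prior check is recorded or the last one is >= 24h old."""
    now = time.time() if now is None else now
    return last_check_ts is None or (now - last_check_ts) >= CHECK_INTERVAL
```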

Multi-Language Support

  • English — en
  • Simplified Chinese — zh-CN
  • Traditional Chinese — zh-TW

Set per-project in META.yaml:

project_name: my_novel
language: zh-CN
genre: fantasy
target_word_count: 80000

Or per-command:

scribe --lang zh-TW agents list

Language resolution priority: --lang flag > META.yaml > user config > SCRIBE_LANGUAGE env > system locale > English fallback.
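That priority chain is a first-non-empty-source-wins lookup. A minimal sketch of the documented order (function and parameter names are illustrative):

```python
def resolve_language(cli_flag=None, meta=None, user_config=None, env=None, locale=None):
    """First non-empty source wins: flag > META.yaml > user config > env > locale."""
    for source in (cli_flag, meta, user_config, env, locale):
        if source:
            return source
    return "en"  # final fallback
```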

Configuration

Set your LLM provider:

# Interactive setup
scribe setup interactive

# Or set directly
scribe config set language zh-CN
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...

Supported providers:

  • OpenAI — OPENAI_API_KEY — best quality
  • Anthropic — ANTHROPIC_API_KEY — long context
  • Google Gemini — GOOGLE_API_KEY — free tier available
  • Ollama — local, no API key needed
  • LM Studio — local, no API key needed
  • MiniMax — MINIMAX_API_KEY — OpenAI-compatible
  • ZhipuAI (z.ai) — ZHIPUAI_API_KEY — GLM-4 models
  • Moonshot (Kimi) — MOONSHOT_API_KEY — 128K context
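Detecting which providers are usable reduces to checking the env vars above, with local providers (which need no key) always considered available. A sketch under that assumption; the mapping mirrors the table, not NovelScribe's internal config code:

```python
import os

# Env var names follow the provider table above (illustrative mapping).
PROVIDER_ENV = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GOOGLE_API_KEY",
    "minimax": "MINIMAX_API_KEY",
    "zhipuai": "ZHIPUAI_API_KEY",
    "moonshot": "MOONSHOT_API_KEY",
}
LOCAL_PROVIDERS = {"ollama", "lmstudio"}  # key-less, assumed reachable locally

def configured_providers(environ=os.environ):
    """Providers with an API key set, plus key-less local providers."""
    keyed = {p for p, var in PROVIDER_ENV.items() if environ.get(var)}
    return keyed | LOCAL_PROVIDERS
```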

Architecture

novel_harness/
├── cli/                   # CLI package (entry point 'scribe')
│   ├── _entry.py          # Click command registration
│   ├── _commands/         # Command modules (agents, workflows, run, autonomous, etc.)
│   ├── _helpers/          # Shared helpers (error handling, output, perf)
│   ├── _memory/           # Memory KB commands
│   └── _templates/        # Template commands (create, kb)
├── services/              # Business logic layer (v2.9+)
│   ├── _base.py           # BaseService with project validation
│   ├── project_service.py # Project CRUD
│   ├── workflow_service.py# Workflow loading & execution
│   ├── autonomous_service.py # Autonomous pipeline control
│   ├── quality_service.py # Quality gate & analysis
│   ├── draft_service.py   # Draft management
│   ├── checkpoint_service.py # Checkpoint operations
│   ├── export_service.py  # Multi-format export
│   ├── memory_service.py  # Memory operations
│   ├── summary_service.py # Chapter summaries
│   └── settings_service.py# Setup & configuration
├── core/                  # Core engine
│   ├── orchestrator.py    # Central coordinator
│   ├── llm_client.py      # Multi-provider LLM abstraction (8 providers, streaming)
│   ├── circuit_breaker.py # Per-provider circuit breakers with auto-fallback
│   ├── optimizing_llm_client.py # LLM call wrapper with cache + cost tracking
│   ├── workflow_executor.py # YAML-defined workflow execution with hooks
│   ├── autonomous_executor.py # End-to-end pipeline facade
│   ├── autonomous_phase_runner.py # Phase execution with CheckpointFlow + RevisionLoop
│   ├── autonomous_pipeline_builder.py # Dependency assembly builder
│   ├── revision_loop.py   # Post-phase interactive revision (v3.1)
│   ├── runtime/           # Runtime kernel
│   │   ├── kernel.py      # RuntimeKernel facade (public API)
│   │   ├── _agent_runtime.py # AgentRuntime turn loop (private)
│   │   ├── models.py      # RuntimeTask, RuntimeState, RuntimeOutcome
│   │   ├── prompt_assembler.py # Fixed-order prompt assembly
│   │   ├── memory_bridge.py   # Memory → context injection bridge
│   │   ├── tool_protocol.py   # Tool registry & executor
│   │   ├── guardrails.py      # Input/output/tool validation
│   │   ├── turn_checkpoint.py # Turn-level checkpoint store
│   │   └── verification_hooks.py # Verification feedback suite
│   ├── hooks/             # Shared hook event bus + built-in handlers + plugin manager
│   ├── memory/            # Memory system
│   │   ├── conversation_memory.py  # Session journal (JSONL)
│   │   ├── knowledge_graph.py      # Entity relationship graph + consistency checks
│   │   ├── generation_learning.py   # Adaptive learning
│   │   ├── consistency_checker.py   # Graph consistency validation
│   │   ├── novel_wiki.py          # Obsidian-compatible markdown wiki
│   │   └── entity_extractor.py    # Structured-first entity extraction
│   ├── memory_bus.py      # Memory system facade (unified index/retrieve API)
│   ├── context_bus.py     # Context injection with optional memory integration
│   ├── knowledge_harvester.py     # Internet knowledge harvesting (DuckDuckGo)
│   ├── kb_context_advisor.py      # Adaptive template selection advisor
│   ├── template_outcome_tracker.py # Template outcome tracking & weight adjustment
│   ├── quality_gate.py    # Quality gate evaluation (Bronze/Silver/Gold/Platinum)
│   ├── quality_report.py  # Quality report generation
│   ├── template_kb.py     # Global template knowledge base with semantic search
│   ├── trigger_store.py   # Append-only journal + snapshot trigger persistence
│   ├── trigger_manager.py # Trigger registration, webhook security, async dispatch
│   └── ...
├── agents/                # Agent definitions (Markdown, schema_version: 1)
├── workflows/             # Workflow definitions (YAML, schema_version: 1)
├── api/                   # FastAPI server with WebSocket streaming (optional)
└── tests/                 # Test suite (8259 tests, 100+ files, 98% coverage)

LLM Providers

NovelScribe supports any OpenAI-compatible API. Configure via environment variables or the scribe setup interactive wizard:

# .env file
OPENAI_API_KEY=sk-...
LLM_PROVIDER=openai
LLM_MODEL=gpt-4-turbo-preview

Memory System

NovelScribe includes a multi-layered memory system that tracks knowledge graph entities, conversation sessions, and learned patterns:

# View memory system status for a project
scribe memory kb status my-novel

# List knowledge wiki pages
scribe memory kb wiki-list my-novel

# View a wiki page
scribe memory kb wiki-view my-novel character

# Run knowledge base consistency check
scribe memory kb consistency-check my-novel

# Visualize entity relationships as ASCII graph
scribe memory kb graph-visualize my-novel

# View session history
scribe memory kb sessions my-novel

# View learning summaries
scribe memory kb learning my-novel

Components:

  • Knowledge Graph — entity relationships (characters, locations, events) with consistency checking
  • Knowledge Wiki — structured wiki pages with auto-generated content
  • Conversation Memory — append-only JSONL session journal for generation sessions
  • Generation Learning — adaptive learning from generation patterns
  • Consistency Guardian — proactive detection of timeline conflicts, character inconsistencies, unresolved foreshadowing
  • Template Outcome Tracker — tracks template effectiveness, adjusts selection weights over time
  • Knowledge Harvester — internet knowledge harvesting via DuckDuckGo search, auto-creates templates
  • ONNX-Accelerated Search — semantic embedding index with ONNX Runtime backend
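A graph consistency check of the kind described can be sketched as scanning (subject, relation, object) triples for contradictory relations on the same entity pair. This is a toy illustration; entity names and the contradiction table are invented for the example:

```python
# Relation pairs considered mutually contradictory (illustrative).
CONTRADICTORY = {("ally_of", "enemy_of")}

def find_conflicts(triples):
    """Group relations by (subject, object) pair and flag contradictory ones."""
    by_pair = {}
    for subj, rel, obj in triples:
        by_pair.setdefault((subj, obj), set()).add(rel)
    conflicts = []
    for (subj, obj), rels in by_pair.items():
        for a, b in CONTRADICTORY:
            if a in rels and b in rels:
                conflicts.append((subj, obj, a, b))
    return conflicts

conflicts = find_conflicts([
    ("Ava", "ally_of", "Bren"),
    ("Ava", "enemy_of", "Bren"),   # contradicts the first triple
    ("Ava", "lives_in", "Port Mira"),
])
```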

API Server (Optional)

For web UI integration or programmatic access:

# If installed with pipx
~/.local/pipx/venvs/novelscribe/bin/uvicorn api.app:app --host 0.0.0.0 --port 8000

# If running from source
cd wst/novel_harness
uv run uvicorn api.app:app --host 0.0.0.0 --port 8000

Development

git clone https://github.com/huangjien/wst.git
cd wst
uv sync --all-extras

# Run quality checks
make check

# Individual
make format      # ruff format
make lint        # ruff check --fix
make type-check  # mypy
make test        # pytest with coverage
make build       # uv build → dist/

License

MIT
