# langgraph_system_generator

LangGraph Notebook Foundry scaffolding. Prompt -> full agentic system: LangGraph System Generator, also called LNF, turns a natural-language request into runnable LangGraph notebook artifacts, exports, and structured QA feedback.
## Features

- CLI, API, and web UI: Generate from `lnf`, FastAPI, or the browser UI.
- Offline-friendly stub mode: Produce deterministic scaffold artifacts without API keys.
- Live generation mode: Use an OpenAI-compatible model for requirements, architecture, graph, tool, and notebook generation.
- Registry-backed planning: Architecture, graph design, tool planning, notebook composition, and QA/repair stages expose structured feedback.
- Portable notebooks: Generated notebooks target local Jupyter and Google Colab.
- Multi-format export: Write IPYNB, HTML, Markdown, DOCX, ZIP, and optional PDF outputs.
## Quickstart

1. Create a Python 3.10+ virtual environment and install the package:

   ```bash
   python -m venv .venv
   source .venv/bin/activate  # Windows: .venv\Scripts\activate
   pip install -r requirements.txt
   pip install -e ".[full]"
   ```

   Install profiles:

   - `pip install -e .` installs the core Python package/config/types only.
   - `pip install -e ".[api]"` installs the FastAPI/web server.
   - `pip install -e ".[full]"` installs notebook generation, export, and live-mode dependencies.
   - `pip install -e ".[full,dev]"` installs contributor/test tooling.

2. Copy `.env.example` to `.env` and add credentials when you need live mode:

   ```bash
   cp .env.example .env
   ```

   Stub mode does not need provider credentials. Live mode requires `OPENAI_API_KEY` unless you provide an OpenAI-compatible `custom_endpoint` and explicit `model` through the API.

3. Optionally build the vector index from cached docs:

   ```bash
   lnf build-index --cache ./data/cached_docs --store ./data/vector_store
   ```

   The default index build uses local fake embeddings for offline testing. Add `--use-openai` when `OPENAI_API_KEY` is configured and you want OpenAI-backed semantic retrieval.

4. Generate your first system:

   ```bash
   lnf generate "Create a router-based customer support chatbot" \
     --output ./output/demo \
     --mode stub
   ```

5. Run the test suite when developing:

   ```bash
   python -m pytest --asyncio-mode=auto
   ```
## How It Works
LNF uses a staged outer LangGraph workflow to turn a prompt into notebook artifacts.
```mermaid
graph LR
    Prompt[Prompt] --> Requirements[Requirements]
    Requirements --> RAG[RAG]
    RAG --> Architecture[Architecture Select]
    Architecture --> Plan[Plan]
    Plan --> Generate[Generate]
    Generate --> QA[QA / Repair]
    QA --> Export[Export]
```
Pipeline stages:

- Prompt: The CLI, API, or web UI collects request options.
- Requirements: `RequirementsAnalyst` extracts typed constraints plus advisory feedback.
- RAG: `DocsRetriever` provides cached LangChain/LangGraph context.
- Architecture: `ArchitectureSelector` chooses `router`, `subagents`, `hybrid`, or `autoagent`; explicit opt-in requests can select the experimental `deepagents` architecture.
- Plan: `GraphDesigner` and `ToolchainEngineer` turn the selected architecture into a typed workflow design, graph exports, tool plan, and planning feedback.
- Generate: `NotebookComposer` builds cells, a dependency plan, fallback feedback, and a graph overview section.
- QA / Repair: Static and runtime QA validate the notebook; deterministic registry-backed repair runs only when needed and records rollback/no-op outcomes.
- Export: The CLI/API export layer writes notebook artifacts and a manifest with structured feedback and warnings.
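The staged flow above can be sketched in plain Python. This is an illustrative shape only: the stage names and logic below are hypothetical, not the real LNF nodes, which run inside a LangGraph graph with typed state and conditional repair edges.

```python
# Hypothetical sketch of the staged pipeline: each stage takes a shared
# state dict, adds its outputs, and passes the dict along.

def requirements(state):
    state["requirements"] = {"goal": state["prompt"], "constraints": []}
    return state

def architecture(state):
    # Toy selection logic: prefer "router" when the prompt mentions routing.
    state["architecture"] = "router" if "rout" in state["prompt"].lower() else "subagents"
    return state

def generate(state):
    state["cells"] = [f"# Notebook for: {state['prompt']}"]
    return state

def qa(state):
    state["qa_passed"] = bool(state["cells"])
    return state

def run_pipeline(prompt: str) -> dict:
    state = {"prompt": prompt}
    for stage in (requirements, architecture, generate, qa):
        state = stage(state)
    return state

result = run_pipeline("Create a router-based customer support chatbot")
print(result["architecture"], result["qa_passed"])  # prints: router True
```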
The same pipeline powers all three entry points:
- CLI for local generation and index building.
- FastAPI + web UI for browser-based generation and downloads.
- Python package imports for reuse in scripts and tests.
For stage-by-stage state details, fallback behavior, and repair-loop semantics, see docs/wiki/Architecture-Deep-Dive.md. For developer-focused onboarding and extension notes, see docs/wiki/Developer-Onboarding.md. For runnable and text-only workflow examples, see examples/cross-cutting-workflows.md. For maintainer-focused repository visualizations, including the generator stage/state map and generated package/module/env snapshots, see docs/diagrams/README.md.
## CLI

Generate artifacts:

```bash
# Stub mode, no API key required
lnf generate "Create a router-based chatbot" --output ./output/demo --mode stub

# Force an architecture
lnf generate "Create an autonomous planning assistant" \
  --mode stub \
  --agent-type autoagent

# Opt into the experimental Deep Agents architecture
lnf generate "Create a Deep Agents research assistant" \
  --mode stub \
  --agent-type deepagents

# Select output formats. Default: ipynb html markdown docx zip
lnf generate "Create a chatbot" \
  --output ./output/demo \
  --formats ipynb html markdown docx zip

# Increase CLI verbosity for debugging/tracing fallback/error behavior
lnf --log-level DEBUG generate "Create a router-based chatbot" \
  --output ./output/debug
```

Pass `--mode live` to `lnf generate` when you have `OPENAI_API_KEY` configured and want to invoke the full generator graph.

For API and CLI default verbosity, set `LNF_LOG_LEVEL` (or `LOG_LEVEL`) to one of: TRACE, DEBUG, INFO, WARNING, ERROR, CRITICAL.
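As a sketch of that fallback order, level resolution could look like the helper below. The function name and the TRACE-to-DEBUG mapping are illustrative assumptions, not LNF's actual logging module (the stdlib has no TRACE level, so it is mapped to DEBUG here).

```python
import logging
import os

# Map documented level names onto stdlib levels; TRACE has no stdlib
# equivalent, so this sketch treats it as DEBUG.
_LEVELS = {
    "TRACE": logging.DEBUG,
    "DEBUG": logging.DEBUG,
    "INFO": logging.INFO,
    "WARNING": logging.WARNING,
    "ERROR": logging.ERROR,
    "CRITICAL": logging.CRITICAL,
}

def resolve_log_level() -> int:
    """Resolve the default level: LNF_LOG_LEVEL, then LOG_LEVEL, then INFO."""
    raw = os.environ.get("LNF_LOG_LEVEL") or os.environ.get("LOG_LEVEL") or "INFO"
    return _LEVELS.get(raw.upper(), logging.INFO)

logging.basicConfig(level=resolve_log_level())
```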
Build the docs index:

```bash
# Offline test index
lnf build-index

# OpenAI-backed semantic index
lnf build-index --use-openai
```

CLI options intentionally stay narrow. Use the API for request-scoped `model`, `temperature`, `max_tokens`, or `custom_endpoint` overrides.
## Expected Outputs And Feedback

Successful generations can include:

- `manifest.json`: Generation metadata, structured feedback, warnings, and per-format export status.
- `notebook_plan.json`: Notebook planning metadata.
- `generated_cells.json`: Raw cell specifications.
- `notebook.ipynb`: Runnable Jupyter/Colab notebook.
- `notebook.html`: HTML export.
- `notebook.md`: Markdown export.
- `notebook.docx`: Word document export.
- `notebook.pdf`: Optional PDF export.
- `notebook_bundle.zip`: Bundle with the notebook, requested exports, and JSON artifacts.

The manifest includes advisory fields such as `requirements_feedback`, `architecture_feedback`, `graph_design_feedback`, `graph_exports`, `tool_planning_feedback`, `notebook_composition_feedback`, `notebook_dependency_plan`, and `qa_repair_feedback`. These are response/output fields, not new request fields.

Use `manifest.json` as the primary summary for:

- the selected architecture and generation mode
- export success or failure per artifact
- warning surfaces and fallback paths
- repair attempt history and next-step hints
- artifact paths that can be downloaded through `GET /artifacts`
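A minimal sketch of pulling that summary out of a generation directory might look like this. The field names follow the manifest fields listed above; the helper itself and its output shape are illustrative, not part of the package.

```python
import json
from pathlib import Path

def summarize_manifest(output_dir: str) -> dict:
    """Load manifest.json and extract a small advisory summary.

    Missing fields are reported as None/empty rather than raising, since
    manifests vary by generation mode and requested formats.
    """
    manifest = json.loads(Path(output_dir, "manifest.json").read_text())
    return {
        "architecture": manifest.get("architecture_feedback"),
        "qa_repair": manifest.get("qa_repair_feedback"),
        "warnings": manifest.get("warnings", []),
    }
```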
## API And Web UI

Start the FastAPI server:

```bash
uvicorn langgraph_system_generator.api.server:app --host 0.0.0.0 --port 8000
```

Open http://localhost:8000 for the web UI.

REST endpoints:

- `GET /`: Web interface.
- `GET /health`: Health check.
- `POST /generate`: Synchronous generation.
- `POST /generate-async`: Start an async generation job.
- `GET /stream/{job_id}`: Server-Sent Events progress stream. Supports `Last-Event-ID` replay.
- `GET /artifacts?path=...`: Download a generated artifact path listed in the manifest.
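To illustrate the replay mechanism, the sketch below parses a generic `text/event-stream` body and tracks the last event id, which a reconnecting client would send back in the `Last-Event-ID` header. The parsing function and sample payload are illustrative, not LNF code, and this simplified parser ignores SSE comment lines and multi-line `data` fields.

```python
def parse_sse(body: str):
    """Split a text/event-stream body into events; return (events, last_id)."""
    events, last_id = [], None
    for block in body.strip().split("\n\n"):
        event = {}
        for line in block.splitlines():
            if ":" in line:
                field, _, value = line.partition(":")
                event[field.strip()] = value.strip()
        if "id" in event:
            last_id = event["id"]
        events.append(event)
    return events, last_id

body = "id: 1\ndata: stage=requirements\n\nid: 2\ndata: stage=export\n"
events, last_id = parse_sse(body)
print(last_id)  # the value to send as Last-Event-ID on reconnect
```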
Example:

```bash
curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Create a customer support chatbot with routing",
    "mode": "stub",
    "output_dir": "./output/my_system",
    "formats": ["ipynb", "html", "markdown", "docx", "zip"]
  }'
```

Request fields are `prompt`, `mode`, `output_dir`, `formats`, `model`, `custom_endpoint`, `temperature`, `max_tokens`, and `agent_type`.
Current API request model snapshot:
```mermaid
classDiagram
    class GenerationRequest {
        prompt : Optional[str]
        mode : Optional[GenerationMode]
        output_dir : Optional[str]
        formats : Optional[list[str]]
        model : Optional[str]
        custom_endpoint : Optional[str]
        temperature : Optional[float]
        max_tokens : Optional[int]
        agent_type : Optional[str]
    }
```
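A script that calls `POST /generate` can mirror these fields locally when building a payload. The dataclass below is an illustrative local mirror of the documented request model, not the server's actual Pydantic class; dropping `None` fields keeps the JSON body minimal.

```python
from dataclasses import asdict, dataclass
from typing import Optional

@dataclass
class GenerationRequest:
    # Field names follow the documented API request model.
    prompt: Optional[str] = None
    mode: Optional[str] = None
    output_dir: Optional[str] = None
    formats: Optional[list] = None
    model: Optional[str] = None
    custom_endpoint: Optional[str] = None
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    agent_type: Optional[str] = None

# Build a minimal JSON-ready payload, omitting unset fields.
payload = {k: v for k, v in asdict(GenerationRequest(
    prompt="Create a customer support chatbot with routing",
    mode="stub",
    formats=["ipynb", "zip"],
)).items() if v is not None}
```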
## Colab Usage

Generated notebooks are intended to run in local Jupyter and Google Colab.

1. Generate or download `notebook.ipynb`.
2. Upload it to Google Drive and open it in Colab.
3. Run the generated setup/install cell. It is built from the notebook dependency plan, so it only installs the packages the notebook needs.
4. Configure only the provider credentials referenced by the generated notebook, usually `OPENAI_API_KEY`.
5. Run the notebook top-to-bottom. Use `--mode stub` when you want an offline-friendly scaffold.

For details, see docs/wiki/Colab-Usage.md.
## Pattern Library

The generator-backed core patterns are:

- `RouterPattern`: Dynamic routing to specialized handlers.
- `SubagentsPattern`: Supervisor-based coordination of specialist workers.
- `HybridPattern`: Router plus worker/team composition.
- `AutoAgentPattern`: Planner/executor/critic-style autonomous workflow.
- `DeepAgentsPattern`: Experimental optional Deep Agents SDK harness using lazy `create_deep_agent(...)` imports and deterministic offline fallback.
- `CritiqueLoopPattern`: Iterative generation, critique, and revision.

See docs/patterns.md, docs/wiki/Pattern-Library-Guide.md, and the runnable examples under examples/.
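To make the router pattern concrete, here is a plain-Python sketch of its shape: classify the request, then dispatch to a specialized handler. The keywords and handler names are hypothetical; generated notebooks express the same shape as LangGraph conditional edges rather than a dict lookup.

```python
def route(message: str) -> str:
    """Toy classifier: pick a handler key from keywords in the message."""
    text = message.lower()
    if "refund" in text:
        return "billing"
    if "password" in text:
        return "account"
    return "general"

# Specialized handlers keyed by the router's output.
HANDLERS = {
    "billing": lambda m: "Forwarding to billing: " + m,
    "account": lambda m: "Forwarding to account support: " + m,
    "general": lambda m: "General support will reply to: " + m,
}

def handle(message: str) -> str:
    return HANDLERS[route(message)](message)
```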
## Configuration

Common environment variables:

- `OPENAI_API_KEY`: OpenAI-compatible live-mode credentials.
- `ANTHROPIC_API_KEY`: Optional provider credential for generated notebooks that use Anthropic-backed tools.
- `LANGSMITH_API_KEY` and `LANGSMITH_PROJECT`: Optional tracing.
- `VECTOR_STORE_TYPE` and `VECTOR_STORE_PATH`: Retrieval index configuration.
- `DEFAULT_MODEL`: Default live model, currently `gpt-5-mini`.
- `MAX_REPAIR_ATTEMPTS`: Bounded QA repair loop count.
- `LNF_OUTPUT_BASE`: Constrains production-facing output paths.
- `LNF_MAX_CONCURRENT_GENERATIONS`: Async API generation concurrency.

Internal extension hooks accept JSON arrays or comma-separated module names:

- `GRAPH_DESIGNER_PLUGIN_MODULES`
- `NOTEBOOK_COMPOSER_PLUGIN_MODULES`
- `TOOLCHAIN_ENGINEER_PLUGIN_MODULES`
- `QA_REPAIR_PLUGIN_MODULES`

These hooks are internal-first extension points; they do not add public CLI/API request fields.
## Extension Points

The generator keeps runtime extension hooks behind environment variables so the public CLI/API contract stays stable:

| Surface | Environment variable | Expected registration function |
|---|---|---|
| Graph design | `GRAPH_DESIGNER_PLUGIN_MODULES` | `register_graph_designers(registry)` |
| Notebook composition | `NOTEBOOK_COMPOSER_PLUGIN_MODULES` | `register_notebook_composer_builders(registry)` |
| Tool planning | `TOOLCHAIN_ENGINEER_PLUGIN_MODULES` | `register_toolchain_tools(registry)` |
| QA / repair | `QA_REPAIR_PLUGIN_MODULES` | `register_qa_repair_plugins(registry)` |

Each value can be a JSON array or a comma-separated list of dotted module paths. These hooks extend internal registries; they do not create new request fields or change the top-level `lnf generate` flags.
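A sketch of parsing that dual-format value could look like the helper below. This is an illustrative assumption about the parsing rule (JSON array if the value starts with `[`, otherwise comma-separated), not the generator's actual parser.

```python
import json

def parse_plugin_modules(raw: str) -> list[str]:
    """Parse a plugin-module env value: JSON array or comma-separated list."""
    raw = raw.strip()
    if not raw:
        return []
    if raw.startswith("["):
        # JSON array form, e.g. '["my_pkg.designers", "other.plugins"]'
        return [str(m) for m in json.loads(raw)]
    # Comma-separated form, e.g. 'my_pkg.designers, other.plugins'
    return [part.strip() for part in raw.split(",") if part.strip()]
```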
## Logging And Tracing

Use the built-in logging helpers for local debugging and CI runs:

- `LNF_LOG_LEVEL` or `LOG_LEVEL` sets the default log level.
- `lnf ... --log-level DEBUG` overrides logging for CLI runs.
- FastAPI uses the same shared logging configuration during server startup.
- `GET /stream/{job_id}` exposes progress events for async API generations.

For trace collection in live LangChain/LangGraph runs, set `LANGSMITH_API_KEY` and `LANGSMITH_PROJECT`. The current LangSmith guidance for LangGraph tracing is documented at Trace LangGraph applications.
## Developing Locally

Useful local commands:

```bash
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -e ".[full,dev]"
python -m pytest tests/unit --asyncio-mode=auto -q
python -m pytest --asyncio-mode=auto
black src/ tests/
ruff check src/ tests/
mypy src/
```

Release-readiness checks:

```bash
# Local deterministic LangGraph release evaluation, no LangSmith upload
python scripts/run_release_eval.py --no-upload

# Isolated install matrix smoke tests for the documented package extras
RUN_PACKAGING_SMOKE=1 PACKAGING_SMOKE_SCENARIOS=minimal,api python -m pytest tests/integration/test_packaging_install_smoke.py -q
RUN_PACKAGING_SMOKE=1 PACKAGING_SMOKE_SCENARIOS=full python -m pytest tests/integration/test_packaging_install_smoke.py -q
```

When editing docs only, the narrowest useful validation is:

```bash
python -m pytest tests/unit/test_documentation_coverage.py -q
```

Use stub mode for the fastest local verification loop:

```bash
lnf generate "Create a router-based customer support chatbot" \
  --output ./output/docs-smoke \
  --mode stub
```