# langgraph-init-cli

*CLI scaffolder for LangGraph projects.*
A production-oriented scaffolding CLI for LangGraph projects. Generate opinionated project layouts that start simple and scale into a full agentic architecture — with graph orchestration, prompt versioning, tool registries, evaluation, observability, and LangSmith-ready tracing.
```bash
pip install langgraph-init-cli
langgraph-init my-app --template production
```
## Why langgraph-init?

Starting a LangGraph project from scratch means a lot of boilerplate: wiring state graphs, organizing nodes, managing prompts, hooking up LangSmith. `langgraph-init` gives you a well-structured starting point so you can skip the scaffolding and focus on the agent logic that matters.
## Installation

```bash
pip install langgraph-init-cli
```

Or, for a cleaner global CLI experience:

```bash
pipx install langgraph-init-cli
```

## Usage

```bash
langgraph-init <project-name> --template <template>
```
Available templates:

| Template | Best For |
|---|---|
| `base` | Quick experiments and prototypes |
| `advanced` | Modular team projects with clean graph structure |
| `production` | Full production systems with tooling, observability, and evaluation |
Example:

```bash
langgraph-init my-app --template production
cd my-app
pip install -e .
python -m src.app.main
```
## Templates

### base

A minimal LangGraph starter to get something running fast.

- Small `StateGraph` with typed state
- A few nodes wired together
- Runnable via `src.app.main:run`
### advanced

A modular architecture for teams that want clean structure without full production overhead.

- `graph/` package with `builder.py`, `state.py`, `edges.py`, `constants.py`, `registry.py`
- `Names`/`Nodes`/`Tags` pattern for readable graph wiring
- `nodes/`, `services/`, `prompts/`, and `utils/` directories
- Prompt loading abstraction and conditional edges
### production

A complete scaffold intended for real systems. Everything is modular, runnable without external services, and ready to extend.

- Graph orchestration with `StateGraph`, conditional routing, and retry logic
- `RunnableParallel` example for parallel enrichment
- Prompt versioning system (`prompts/versions/<task>/v1.txt`, `v2.txt`, ...)
- Tool framework with `BaseTool`, registry, and example tools (calculator, retriever, HTTP-style)
- Evaluation, confidence scoring, and field coverage checks
- Structured JSON logging, metrics counters, trace decorators
- LangSmith integration (environment-driven, safe when disabled)
- Optional API entry surface and storage layer abstractions
Generated layout:

```
src/app/
├── main.py
├── config.py
├── graph/
├── nodes/
├── services/
├── tools/
├── prompts/
├── models/
├── storage/
├── utils/
├── observability/
├── evaluation/
└── api/
```
## Understanding the Generated Files

Here's what each file and folder does, and where to make changes when building your own agent.

### `config.py` — App Settings

Centralizes all configuration in a single `Settings` dataclass. Every value reads from environment variables with sensible defaults, so you control behavior via a `.env` file without touching code.
```python
# What's inside:
project_name              # Your app name
environment               # "development" | "production" (from APP_ENV)
log_level                 # "INFO" by default (from LOG_LEVEL)
langsmith_tracing         # Toggle LangSmith tracing on/off (from LANGSMITH_TRACING)
langsmith_api_key         # Your LangSmith API key (from LANGSMITH_API_KEY)
default_prompt_versions   # e.g. {"intent": "v2", "extraction": "v1"}
max_validation_retries    # How many times to retry failed validation (default: 2)
```
**What to change here:** your project name, default prompt versions, and retry limits. Add any new env-driven settings your agent needs.
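As a rough sketch of the pattern (field defaults follow the list above; the generated file may differ in detail), an env-driven `Settings` dataclass looks like:

```python
import os
from dataclasses import dataclass, field


@dataclass
class Settings:
    """Env-driven app settings; every field has a sensible default."""

    project_name: str = os.getenv("PROJECT_NAME", "my-app")
    environment: str = os.getenv("APP_ENV", "development")
    log_level: str = os.getenv("LOG_LEVEL", "INFO")
    langsmith_tracing: bool = os.getenv("LANGSMITH_TRACING", "false").lower() == "true"
    langsmith_api_key: str = os.getenv("LANGSMITH_API_KEY", "")
    default_prompt_versions: dict = field(
        default_factory=lambda: {"intent": "v2", "extraction": "v1"}
    )
    max_validation_retries: int = int(os.getenv("MAX_VALIDATION_RETRIES", "2"))


settings = Settings()
```

Because everything funnels through one object, adding a new setting is a one-line change plus an entry in your `.env` file.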
### `main.py` — Entrypoint

Builds the graph, creates a sample input state, runs it with `app.invoke()`, and prints the output. This is what executes when you run `python -m src.app.main`.
```python
# src/app/main.py (excerpt)
def run():
    app = build_graph()
    sample_state = {
        "input_text": "Please calculate 12 + 7 and validate the answer.",
        "retry_count": 0,
        "messages": [],
        "prompt_versions": settings.default_prompt_versions.copy(),
        "tool_results": {},
    }
    result = app.invoke(sample_state)
    print(result.get("output", "No output produced."))
```
**What to change here:** replace `sample_state` with your real input schema. Swap the `input_text` for whatever your agent actually receives (a user message, a document, an API payload, etc.).
### `graph/` — Graph Orchestration

Owns the entire workflow definition. The key files are:

| File | Purpose |
|---|---|
| `builder.py` | Wires nodes and edges together into a `StateGraph` — start here to understand the flow |
| `state.py` | Defines the typed state that flows between nodes |
| `edges.py` | Conditional routing logic (e.g. retry vs continue vs error) |
| `constants.py` | `Names`, `Nodes`, and `Tags` classes for stable identifiers |
| `registry.py` | Maps node names to their callable implementations |
**What to change here:** add new nodes in `registry.py`, define new routes in `edges.py`, and expand the state schema in `state.py`.
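The routing side of `edges.py` boils down to small functions that inspect state and return a tag for the conditional edge. A minimal sketch in that spirit (the tag strings, state keys, and retry limit here are illustrative, not the generated code verbatim):

```python
MAX_RETRIES = 2  # assumption: mirrors max_validation_retries in config


def route_after_validation(state: dict) -> str:
    """Decide where the graph goes after the validation node."""
    if state.get("error"):
        return "error"          # route to the error node
    if not state.get("valid", False) and state.get("retry_count", 0) < MAX_RETRIES:
        return "retry"          # loop back through processing
    return "continue"           # proceed to the output node
```

The returned tag is then mapped to a destination node via `add_conditional_edges`, which keeps the routing decision testable as a plain function.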
### `nodes/` — Node Implementations

Each file handles one step in the workflow:

| File | What it does |
|---|---|
| `intent.py` | Classifies the incoming input |
| `processing.py` | Runs extraction and tool-backed enrichment |
| `validation.py` | Scores output quality and decides whether to retry |
| `output.py` | Builds and persists the final result |
| `error.py` | Produces a structured failure response |
**What to change here:** this is where most of your agent logic lives. Replace the stub implementations with real LLM calls, business logic, or tool invocations.
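Each node is a plain function that receives the shared state and returns a partial update for the graph to merge. A hypothetical validation node, sketched only to show the shape (field names and the scoring rule are illustrative):

```python
def validation_node(state: dict) -> dict:
    """Score the draft output and record whether a retry is needed."""
    draft = state.get("draft", "")
    confidence = 1.0 if draft.strip() else 0.0  # stand-in for real scoring
    valid = confidence >= 0.5
    return {
        "valid": valid,
        "confidence": confidence,
        # only count a retry when validation fails
        "retry_count": state.get("retry_count", 0) + (0 if valid else 1),
    }
```

Keeping nodes as pure-ish functions of state makes them easy to unit-test before you swap in real LLM calls.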
### `services/` — Reusable Logic

Holds logic that multiple nodes share, so it doesn't get duplicated:

| File | What it does |
|---|---|
| `llm_service.py` | LLM orchestration and parallel enrichment via `RunnableParallel` |
| `prompt_service.py` | Loads prompts by task/version with fallback to v1 |
| `evaluation_service.py` | Coordinates evaluation scoring |
| `versioning_service.py` | Switches prompt versions at runtime |
| `tool_service.py` | Dispatches tool calls from the registry |
**What to change here:** swap the stub LLM calls in `llm_service.py` with your actual provider client (OpenAI, Anthropic, Gemini, etc.).
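One way to make that swap painless is to keep the provider behind a narrow interface, so nodes never import a provider SDK directly. A sketch of that boundary (class and method names here are illustrative, not the generated file's API):

```python
from typing import Protocol


class ChatClient(Protocol):
    """Minimal provider-agnostic surface the service depends on."""

    def complete(self, prompt: str) -> str: ...


class StubClient:
    """Deterministic stand-in, in the spirit of the scaffold's default stubs."""

    def complete(self, prompt: str) -> str:
        return f"stub-response({len(prompt)} chars)"


class LLMService:
    def __init__(self, client: ChatClient):
        self.client = client

    def run(self, prompt: str) -> str:
        return self.client.complete(prompt)


service = LLMService(StubClient())
```

Swapping in a real provider then means writing one new class with a `complete` method; the nodes and the service stay untouched.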
### `tools/` — Tool Framework

A small but extensible tool system with a `BaseTool` contract and registry. Comes with demo tools (calculator, retriever, HTTP-style query) you can run immediately and replace later.

**What to change here:** add your own tools by implementing `BaseTool` and registering them. Delete the demo tools when you no longer need them.
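The contract can be as small as an abstract `run` method plus a name used for registry lookup. A sketch of what such a framework typically looks like (the method signature and registry shape are assumptions, not the scaffold's exact API):

```python
from abc import ABC, abstractmethod


class BaseTool(ABC):
    name: str = "base"

    @abstractmethod
    def run(self, **kwargs) -> dict:
        """Execute the tool and return a result dict."""


class CalculatorTool(BaseTool):
    name = "calculator"

    def run(self, a: float = 0, b: float = 0, op: str = "add") -> dict:
        ops = {"add": a + b, "sub": a - b, "mul": a * b}
        return {"result": ops[op]}


REGISTRY: dict[str, BaseTool] = {}


def register(tool: BaseTool) -> None:
    REGISTRY[tool.name] = tool


register(CalculatorTool())
```

A tool service can then dispatch by name, e.g. `REGISTRY["calculator"].run(a=12, b=7)`, without knowing anything about the individual tools.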
### `prompts/` — Prompt Versioning

Prompts are plain `.txt` files organized by task and version:

```
prompts/versions/
├── intent/
│   ├── v1.txt
│   └── v2.txt
├── extraction/
│   └── v1.txt
└── validation/
    └── v1.txt
```

The prompt service loads the version specified in `config.py` and falls back to `v1` if a version doesn't exist. This lets you iterate on prompts without changing code — just add a new version file and update the config.

**What to change here:** edit the `.txt` files directly. Add new task folders as your agent gains new capabilities.
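The fallback behavior described above fits in a few lines. A sketch of a loader with v1 fallback (the function name is illustrative; the demo builds its own temporary prompt tree rather than touching a real project):

```python
import tempfile
from pathlib import Path


def load_prompt(root: Path, task: str, version: str) -> str:
    """Load <root>/<task>/<version>.txt, falling back to v1 when missing."""
    candidate = root / task / f"{version}.txt"
    if not candidate.exists():
        candidate = root / task / "v1.txt"
    return candidate.read_text(encoding="utf-8")


# Demo: build a tiny prompt tree in a temp directory
root = Path(tempfile.mkdtemp()) / "versions"
(root / "intent").mkdir(parents=True)
(root / "intent" / "v1.txt").write_text("Classify the user's intent.", encoding="utf-8")
```

Requesting a version that doesn't exist (say `v9`) silently falls back to `v1`, so experiments with new prompt versions can never break a running graph.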
### `observability/` — Logging and Metrics

Structured JSON logging and metrics counters out of the box. Trace decorators make it easy to instrument any function for LangSmith.

**What to change here:** add custom metrics or extend the trace decorators for new nodes.
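As a rough illustration of the pattern (not the scaffold's actual decorator), a trace decorator that emits one JSON record per call and bumps a counter might look like:

```python
import functools
import json
import time
from collections import Counter

METRICS = Counter()


def trace(fn):
    """Log a structured JSON record and count calls for each wrapped function."""

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        METRICS[fn.__name__] += 1
        print(json.dumps({
            "event": "node_call",
            "name": fn.__name__,
            "duration_ms": round((time.perf_counter() - start) * 1000, 2),
        }))
        return result

    return wrapper


@trace
def classify(text: str) -> str:
    return "question" if text.endswith("?") else "statement"
```

JSON-per-line output is trivial to ship to any log aggregator, and the counter gives you cheap per-node call metrics during development.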
### `evaluation/` — Output Scoring

Field coverage evaluation and confidence scoring to measure output quality during development. Feeds into the validation node's retry decision.

**What to change here:** expand the scoring rubric to match your domain's definition of a good output.
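Field coverage is essentially "what fraction of the required output fields are present and non-empty". A minimal sketch (the required-field list and the notion of "empty" are assumptions you'd tune per domain):

```python
def field_coverage(output: dict, required: list[str]) -> float:
    """Return the fraction of required fields that are present and non-empty."""
    if not required:
        return 1.0
    hits = sum(1 for f in required if output.get(f) not in (None, "", [], {}))
    return hits / len(required)
```

A score like this can feed directly into the validation node's retry decision, e.g. retry whenever coverage drops below a threshold.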
### `storage/` — Persistence Layer

Abstract storage interfaces with stub implementations. Designed to be swapped for a real database (Postgres, Redis, S3, etc.) without changing node code.

**What to change here:** implement the storage interface for your target database when you're ready to persist results.
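The key idea is that nodes depend only on an abstract interface, so moving to Postgres or Redis means writing one new class. A sketch of that shape (class and method names are illustrative):

```python
from abc import ABC, abstractmethod
from typing import Any


class ResultStore(ABC):
    """Abstract persistence boundary; node code only ever sees this interface."""

    @abstractmethod
    def save(self, key: str, value: Any) -> None: ...

    @abstractmethod
    def load(self, key: str) -> Any: ...


class InMemoryStore(ResultStore):
    """Stub implementation, in the spirit of the scaffold's default."""

    def __init__(self):
        self._data: dict[str, Any] = {}

    def save(self, key: str, value: Any) -> None:
        self._data[key] = value

    def load(self, key: str) -> Any:
        return self._data.get(key)


store = InMemoryStore()
store.save("run-1", {"output": "ok"})
```

A `PostgresStore` or `RedisStore` implementing the same two methods can then be injected wherever the in-memory stub was used.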
### `api/` — Optional API Surface

A thin API layer (FastAPI-ready) you can enable if you want to expose the agent as an HTTP service.

**What to change here:** add your routes and request/response models when you're ready to serve the agent over HTTP.
## Graph Wiring Pattern

The `advanced` and `production` templates use a three-concept pattern to keep graph wiring readable and maintainable:

```python
class Names:
    INTENT = "intent"        # Stable node identifiers

class Nodes:
    INTENT = intent_node     # Callable implementations

class Tags:
    CONTINUE = "continue"    # Routing labels for conditional edges
```

This separates string identifiers from executable functions, making edges easy to read and refactor.
## After Generation

Recommended next steps once you have your scaffold:

- Replace the deterministic LLM stubs with your actual provider client
- Add real persistence in `storage/`
- Swap the demo tools for domain-specific tools
- Expand evaluation metrics for your use case
- Add tests around graph routing and node behavior
- Set `LANGCHAIN_API_KEY` and related env vars to activate LangSmith tracing
## Generated `langgraph.json`

All projects include:

```json
{
  "dependencies": ["."],
  "graphs": {
    "<project-name>": "src.app.main:run"
  }
}
```

This keeps the entrypoint consistent and compatible with LangGraph tooling.
## Links

- PyPI: https://pypi.org/project/langgraph-init-cli/
- LangGraph docs: https://langchain-ai.github.io/langgraph/
- LangSmith: https://smith.langchain.com/

## License

MIT