Deterministic VM for LLM program execution
Turn unpredictable LLM behavior into structured, reproducible workflows.
🧠 Mental Model
- LLM → stateless worker
- Program → declarative workflow (DSL)
- ExecutionVM → deterministic state machine
- Trace → full execution log
> **llm-nano-vm** is a finite state machine (FSM) for LLM workflows.
⚠️ The Problem
LLM agents are unpredictable:
- decide next steps dynamically
- may skip critical checks
- behavior varies between runs
✅ The Solution
```
user_input → Planner (1 LLM call, optional)
           → Program (DSL)
           → ExecutionVM (deterministic)
           → Trace
```
- Planner = flexible, non-deterministic
- VM = strict, deterministic
🚀 Install
```shell
pip install llm-nano-vm
pip install "llm-nano-vm[litellm]"  # with LiteLLM provider support (quotes needed in zsh)
```
⚡ Quick Example (Deterministic Guardrail)
```python
from llm_nano_vm import Program  # import path assumed from the package name

program = Program.from_dict({
    "name": "customer_refund",
    "steps": [
        {
            "id": "analyze",
            "type": "llm",
            "prompt": "Is this a valid refund request?\nRequest: $user_input",
            "output_key": "decision",
        },
        {
            "id": "guardrail",
            "type": "condition",
            "condition": "'yes' in '$decision'.lower()",
            "then": "process_refund",
            "otherwise": "reject",
        },
    ],
})
```
👉 Guarantees:
- the guardrail ALWAYS runs (no skipped steps)
- deterministic execution path
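The guarantee comes from walking steps in declared order rather than letting the model choose. A hypothetical sketch of such a walker (NOT the library's actual `ExecutionVM`; the LLM call is stubbed with a lambda) makes this concrete:

```python
# Hypothetical sketch of a deterministic step walker; the real engine
# in llm-nano-vm may differ. Steps run in declared order, so the
# guardrail step can never be skipped.
def substitute(text: str, state: dict) -> str:
    """Replace every $key in text with its value from state."""
    for key, value in state.items():
        text = text.replace(f"${key}", str(value))
    return text

def run_program(program: dict, state: dict, llm) -> dict:
    trace = []
    for step in program["steps"]:                 # fixed order, no skipping
        if step["type"] == "llm":
            out = llm(substitute(step["prompt"], state))
            state[step["output_key"]] = out
            trace.append((step["id"], out))
        elif step["type"] == "condition":
            # the condition string is evaluated after $-substitution
            taken = step["then"] if eval(substitute(step["condition"], state)) \
                else step["otherwise"]
            trace.append((step["id"], taken))
    return {"state": state, "trace": trace}

result = run_program(
    {"steps": [
        {"id": "analyze", "type": "llm",
         "prompt": "Is this a valid refund request?\nRequest: $user_input",
         "output_key": "decision"},
        {"id": "guardrail", "type": "condition",
         "condition": "'yes' in '$decision'.lower()",
         "then": "process_refund", "otherwise": "reject"},
    ]},
    {"user_input": "Item arrived broken"},
    llm=lambda prompt: "Yes, this is a valid refund request.",
)
# result["trace"] ends with ("guardrail", "process_refund")
```

Because the walker never asks the model what to do next, the same program and inputs produce the same trace on every run.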
⚡ Performance
llm-nano-vm is designed for high-throughput AI agent ecosystems. By leveraging Pydantic v2 and an immutable state architecture, the core execution engine introduces near-zero overhead.
Benchmark Results (Android 15 / Termux)
| Metric | Value |
|---|---|
| Throughput (RPS) | ~535 programs/sec |
| Avg. Latency | 1.80 ms per program |
| Complexity | 20 steps (Mix of Tools & Conditions) |
> [!TIP]
> The VM overhead is near-zero. Your bottleneck is the LLM API, not the runtime.
```shell
python benchmarks/stress_test.py
```
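The throughput and latency numbers above reduce to one timing loop. A minimal sketch of that measurement pattern (with a stand-in workload; `benchmarks/stress_test.py` in the repo is the authoritative script) looks like:

```python
# Sketch of the timing pattern behind a throughput/latency benchmark.
# run_once is a stand-in workload, not the real VM.
import time

def benchmark(run_once, iterations: int = 1_000) -> dict:
    start = time.perf_counter()
    for _ in range(iterations):
        run_once()
    elapsed = time.perf_counter() - start
    return {
        "rps": iterations / elapsed,               # runs per second
        "avg_latency_ms": elapsed / iterations * 1e3,
    }

stats = benchmark(lambda: sum(range(100)))
```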
🤖 Planner (Optional)
```python
program = await planner.generate("Find latest AI news and summarize")
```
- exactly 1 LLM call
- outputs DSL program
- not deterministic
📜 Program DSL
```json
{
  "id": "step_1",
  "type": "llm | tool | condition"
}
```
Variables
| Syntax | Meaning |
|---|---|
| `$key` | input context |
| `$step_id.output` | previous step result |
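Assuming these semantics, variable interpolation can be sketched as a single substitution pass over the template (a hypothetical resolver; the library's actual implementation may differ):

```python
# Hypothetical resolver for $key / $step_id.output placeholders,
# sketched from the table above; not the library's actual code.
import re

def resolve(template: str, context: dict, outputs: dict) -> str:
    """Replace $key with input context and $step_id.output with prior results."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name.endswith(".output"):
            return str(outputs[name[: -len(".output")]])
        return str(context[name])
    return re.sub(r"\$([A-Za-z_][\w.]*)", repl, template)

text = resolve(
    "Summarize $analyze.output for $user_input",
    context={"user_input": "alice"},
    outputs={"analyze": "refund approved"},
)
# → "Summarize refund approved for alice"
```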
🔍 Observability (Trace)
```python
trace = await vm.run(program)

trace.status            # SUCCESS / FAILED
trace.final_output      # last step output
trace.total_tokens()    # across all steps
trace.total_cost_usd()  # cost in USD
```
Each step includes: duration · tokens · cost · status
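Aggregates like `total_tokens()` and `total_cost_usd()` are naturally sums over per-step records. A minimal sketch, assuming a hypothetical trace shape (the library's real `Trace` class likely differs in field names):

```python
# Hypothetical trace shape illustrating how per-step metrics roll up;
# field names are assumptions, not the library's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class StepRecord:
    step_id: str
    duration_ms: float
    tokens: int
    cost_usd: float
    status: str = "SUCCESS"

@dataclass(frozen=True)
class Trace:
    steps: tuple

    def total_tokens(self) -> int:
        return sum(s.tokens for s in self.steps)

    def total_cost_usd(self) -> float:
        return sum(s.cost_usd for s in self.steps)

trace = Trace(steps=(
    StepRecord("analyze", duration_ms=420.0, tokens=150, cost_usd=0.0003),
    StepRecord("guardrail", duration_ms=0.2, tokens=0, cost_usd=0.0),
))
# trace.total_tokens() → 150
```

Frozen dataclasses mirror the immutable-state design the Performance section mentions.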
⚖️ llm-nano-vm vs Agents
| | LLM Agent | llm-nano-vm |
|---|---|---|
| Control | LLM decides | You define |
| Determinism | ❌ | ✅ |
| Debugging | hard | full trace |
| Guardrails | weak | enforced |
❌ When NOT to use llm-nano-vm
Do NOT use if:
- workflow is unknown
- task is creative / open-ended
- you need autonomous reasoning
Use it when:
- flow is known
- correctness matters
- reproducibility is required
🔌 Custom Adapter
```python
class MyAdapter:
    async def complete(self, messages, **kwargs) -> str:
        return "response"
```
📡 Providers (LiteLLM)
```python
LiteLLMAdapter("groq/llama-3.3-70b-versatile")
LiteLLMAdapter("openrouter/llama-3.3-70b-instruct:free")
LiteLLMAdapter("ollama/llama3")
```
💼 llm-nano-vm Pro
- 🆓 Core (this repo) — MIT, fully open-source
- 💼 Pro layer — commercial extensions
Planned Pro features:
- 📊 Advanced Trace UI (visual execution graph)
- 🌐 Distributed execution (multi-node VM)
- 🔄 Provider pools & smart routing
- 🔐 Access control & multi-user support
- 📈 Observability (metrics, logs, cost analytics)
🤝 Contact & Support
Author: @ale007xd on Telegram · @ale007xd on X
☕ Support the project
Direct wallet — USDT (TON):
UQCakyytrEGBikOi3eYMpveGHXDB1-fd6lcuQC9VvKqMrI-9
📄 License
This project is licensed under the MIT License.
File details
Details for the file llm_nano_vm-0.1.4.tar.gz.
File metadata
- Download URL: llm_nano_vm-0.1.4.tar.gz
- Upload date:
- Size: 244.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4b34f4d3324ff26de80ef4160d2b37d29fb8c7996fcab6bc1699b3e4ed21e9e0` |
| MD5 | `48b73f2e5db0b77cd634b9e7be606990` |
| BLAKE2b-256 | `7a9eb0e7d613a8a3eb11bce8c70dc04aabd5d24413d83b49a97143223ed3286b` |
File details
Details for the file llm_nano_vm-0.1.4-py3-none-any.whl.
File metadata
- Download URL: llm_nano_vm-0.1.4-py3-none-any.whl
- Upload date:
- Size: 15.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `0aba1265e5194d7a0fc739d9062385f222641e55bac6928a688ff5be3af660aa` |
| MD5 | `e7b1690d07ae941dccc5b061f454cb8e` |
| BLAKE2b-256 | `614024a3cb5be61e9993eac6161a6d13d6eb096f8151721ea2a8586ae590f022` |