Promptise Foundry - Production-ready agentic framework for building advanced AI agent systems and multi-agent orchestration
Project description
Promptise Foundry
The foundation layer for agentic intelligence.
Every other framework gives you an LLM wrapper.
Promptise Foundry gives you the stack behind it.
Website · Documentation · Quick Start · Showcase · Discussions
Agents that survive production need more than a prompt and a tool list.
They need MCP-native tool discovery. A reasoning engine you can shape. Memory you can trust. Guardrails that actually fire. Governance that enforces budgets. A runtime that recovers from crashes. Promptise Foundry ships all of it as one coherent framework — built for engineering teams who are done assembling AI infrastructure from ten half-finished libraries.
Get started in 30 seconds
```bash
pip install promptise
```
```python
import asyncio

from promptise import build_agent, PromptiseSecurityScanner, SemanticCache
from promptise.config import HTTPServerSpec
from promptise.memory import ChromaProvider


async def main():
    agent = await build_agent(
        model="openai:gpt-5-mini",
        servers={
            "tools": HTTPServerSpec(url="http://localhost:8000/mcp"),
        },
        instructions="You are a helpful assistant.",
        memory=ChromaProvider(persist_directory="./memory"),
        guardrails=PromptiseSecurityScanner.default(),
        cache=SemanticCache(),
        observe=True,
    )
    result = await agent.ainvoke({
        "messages": [{"role": "user", "content": "What's the status of our pipeline?"}]
    })
    print(result["messages"][-1].content)
    await agent.shutdown()


asyncio.run(main())
```
Guardrails block injection and redact PII. Semantic cache serves similar queries instantly. Full observability.
Five pillars. One framework.
Each pillar replaces an entire category of libraries you would otherwise assemble yourself.
| # | Pillar | Replaces |
|---|---|---|
| 01 🤖 | **Agent** — turn any LLM into a production-ready agent with one function call. | LangChain + a guardrails library + an output validator + a vector-store wrapper + a retry helper. |
| 02 🧠 | **Reasoning Engine** — compose reasoning the way you compose code. Not a black box. | Hand-rolled LangGraph wiring, bespoke planner/executor loops, ReAct-from-scratch. |
| 03 🔧 | **MCP Server SDK** — production server and native client for the Model Context Protocol. What FastAPI is to REST, this is to MCP. | Rolling your own tool server. |
| 04 ⚡ | **Agent Runtime** — the operating system for autonomous agents. 5 trigger types (cron, webhook, file watch, event, message) · crash recovery via journal replay · 5 rewind modes · 14 lifecycle hooks · budget enforcement with tool costs · health monitoring (stuck, loop, empty, error rate) · mission tracking with LLM-as-judge · secret scoping with TTL and zero-fill revocation · 14 meta-tools for self-modifying agents · 37-endpoint REST API with typed client · live agent inbox · distributed multi-node coordination. | Celery + cron + a state store + your own crash recovery + a governance layer. |
| 05 ✨ | **Prompt Engineering** — prompts built like software, not strings. 8 block types with priority-based token budgeting · conversation flows that evolve per phase · 5 composable strategies (…) | f-strings. |
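The "priority-based token budgeting" idea from pillar 05 can be sketched in a few lines: each prompt block carries a priority, and when the assembled prompt would exceed the token budget, the lowest-priority blocks are dropped first. The names, fields, and whitespace token counting below are hypothetical illustrations, not the Promptise API.

```python
# Illustrative priority-based token budgeting: keep the highest-priority
# prompt blocks that fit the budget, then restore authoring order.
# Token counting is naive whitespace splitting for demonstration.
from dataclasses import dataclass


@dataclass
class Block:
    name: str
    text: str
    priority: int  # higher = more important


def fit_to_budget(blocks: list[Block], budget: int) -> list[Block]:
    kept, used = [], 0
    # Admit blocks from highest to lowest priority while they fit.
    for block in sorted(blocks, key=lambda b: b.priority, reverse=True):
        cost = len(block.text.split())
        if used + cost <= budget:
            kept.append(block)
            used += cost
    # Emit surviving blocks in their original authoring order.
    order = {id(b): i for i, b in enumerate(blocks)}
    return sorted(kept, key=lambda b: order[id(b)])


blocks = [
    Block("system", "You are a helpful assistant", priority=10),
    Block("examples", "Q: hi A: hello Q: bye A: goodbye", priority=3),
    Block("context", "Pipeline status dashboard summary follows", priority=7),
]
kept = fit_to_budget(blocks, budget=11)
print([b.name for b in kept])  # the low-priority examples block is dropped
```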
Why Promptise Foundry?
Honest comparison. ✅ native · ⚠️ partial or via adapter · ❌ not supported
| Capability | Promptise | LangChain | LangGraph | CrewAI | AutoGen | PydanticAI |
|---|---|---|---|---|---|---|
| MCP-first tool discovery | ✅ Native | ⚠️ via adapter | ⚠️ via adapter | ⚠️ via adapter | ⚠️ via adapter | ⚠️ via adapter |
| Native MCP server SDK (auth · middleware · queue · audit) | ✅ Full | ❌ | ❌ | ❌ | ❌ | ❌ |
| Composable reasoning graph | ✅ 20 nodes · 7 patterns · agent-assembled | ❌ | ✅ Graph-native | ⚠️ Crew/Flow | ⚠️ GroupChat | ❌ |
| Semantic tool optimization (ML selects relevant tools per query) | ✅ 40–70% savings | ❌ | ❌ | ❌ | ❌ | ❌ |
| Local ML security guardrails (prompt-injection · PII · creds · NER · content) | ✅ 6 heads | ❌ external | ❌ external | ❌ | ❌ | ❌ |
| Semantic response cache | ✅ Per-user isolated | ⚠️ Basic (shared) | ⚠️ via LangChain | ❌ | ❌ | ❌ |
| Human-in-the-loop | ✅ 3 handlers + ML classifier | ⚠️ Basic | ✅ interrupt_before/after | ⚠️ human_input=True | ✅ UserProxyAgent | ❌ |
| Sandboxed code execution | ✅ Docker · seccomp · gVisor | ⚠️ PythonREPL | ❌ | ❌ | ✅ Docker executor | ❌ |
| Crash recovery / replay | ✅ 5 rewind modes | ❌ | ✅ Checkpointer | ❌ | ❌ | ❌ |
| Autonomous runtime (triggers · lifecycle · messaging) | ✅ Full OS | ❌ | ⚠️ Persistence only | ❌ | ❌ | ❌ |
| Budget / health / mission governance | ✅ Built-in | ❌ | ❌ | ❌ | ❌ | ❌ |
| Live agent conversation (inbox · ask) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Orchestration REST API | ✅ 37 endpoints + typed client | ❌ | ❌ | ❌ | ❌ | ❌ |
Promptise unifies every row above — one dependency, one type-checked API, one runtime.
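On the "semantic tool optimization" row: the idea is that instead of sending every tool schema to the model on every call, the framework scores tools against the query and forwards only the relevant ones, which is where the token savings come from. Below is a conceptual sketch using word overlap in place of real ML scoring; the function and tool names are invented for illustration.

```python
# Conceptual per-query tool selection: rank tools by relevance to the
# query and keep only the top-k, instead of shipping every tool schema.
# Word overlap stands in for embedding-based scoring.
def select_tools(query: str, tools: dict[str, str], top_k: int = 2) -> list[str]:
    query_words = set(query.lower().split())

    def overlap(description: str) -> int:
        return len(query_words & set(description.lower().split()))

    ranked = sorted(tools, key=lambda name: overlap(tools[name]), reverse=True)
    return ranked[:top_k]


tools = {
    "query_pipeline": "check data pipeline status and health",
    "send_email": "send an email message to a recipient",
    "resize_image": "resize an image to given dimensions",
}
print(select_tools("what is the status of our pipeline", tools, top_k=1))
```

Fewer schemas in context means fewer prompt tokens per call, which is the mechanism behind the savings claim above.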
Model-agnostic
Any LLM, one string. Or any LangChain BaseChatModel. Or a FallbackChain across providers.
```python
build_agent(model="openai:gpt-5-mini", ...)
build_agent(model="anthropic:claude-sonnet-4-20250514", ...)
build_agent(model="ollama:llama3", ...)
build_agent(model="google:gemini-2.0-flash", ...)
```
Deploy autonomous agents
Triggers, budgets, health checks, missions, secrets — all in Python.
```python
import asyncio

from promptise.runtime import (
    AgentRuntime, ProcessConfig, TriggerConfig,
    BudgetConfig, HealthConfig, MissionConfig,
)


async def main():
    async with AgentRuntime() as runtime:
        await runtime.add_process("monitor", ProcessConfig(
            model="openai:gpt-5-mini",
            instructions="Monitor data pipelines. Escalate anomalies.",
            triggers=[
                TriggerConfig(type="cron", cron_expression="*/5 * * * *"),
                TriggerConfig(type="webhook", webhook_path="/alerts"),
            ],
            budget=BudgetConfig(max_tool_calls_per_day=500, on_exceeded="pause"),
            health=HealthConfig(detect_loops=True, detect_stuck=True, on_anomaly="escalate"),
            mission=MissionConfig(
                objective="Keep uptime above 99.9%",
                success_criteria="No P1 unresolved for more than 15 minutes",
                evaluate_every_n=10,
            ),
        ))
        await runtime.start_all()


asyncio.run(main())
```
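The cron trigger in the config above uses the standard five-field expression; `*/5 * * * *` fires every five minutes. The sketch below shows how such an expression decides whether to fire at a given time. It handles only `*`, `*/n`, and plain numbers, which covers this example; real cron syntax (and any real trigger engine) supports much more, including ranges, lists, and cron's Sunday-based weekday numbering, which differs from Python's `weekday()`.

```python
# Minimal cron-expression matcher covering "*", "*/n", and plain
# numbers: enough to show how "*/5 * * * *" (every 5 minutes) fires.
from datetime import datetime


def field_matches(field: str, value: int) -> bool:
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return int(field) == value


def cron_matches(expr: str, when: datetime) -> bool:
    minute, hour, day, month, weekday = expr.split()
    return (
        field_matches(minute, when.minute)
        and field_matches(hour, when.hour)
        and field_matches(day, when.day)
        and field_matches(month, when.month)
        and field_matches(weekday, when.weekday())  # caveat: cron counts from Sunday
    )


print(cron_matches("*/5 * * * *", datetime(2025, 1, 1, 12, 10)))  # minute 10 → True
print(cron_matches("*/5 * * * *", datetime(2025, 1, 1, 12, 7)))   # minute 7 → False
```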
Documentation
| Section | What it covers |
|---|---|
| Quick Start | Your first agent in 5 minutes |
| Key Concepts | Architecture, design principles, the five pillars |
| Building Agents | Step-by-step, simple to production |
| Reasoning Engine | Graphs, nodes, flags, patterns |
| MCP Servers | Production tool servers with auth and middleware |
| Agent Runtime | Autonomous agents with governance |
| Prompt Engineering | Blocks, strategies, flows, guards |
| Showcase | Working patterns, end-to-end |
| API Reference | Every class, method, parameter |
Ecosystem
Promptise plugs into what your team already runs.
Models
+ any LangChain BaseChatModel · FallbackChain for automatic failover
Memory & Vectors
Local embeddings · air-gapped model paths · prompt-injection mitigation built in
Conversation Storage
Session ownership enforced · per-user isolation for cache and guardrails
Observability
8 transporters: OTel · Prometheus · Slack · PagerDuty · Webhook · HTML · JSON · Console
Sandbox & Infrastructure
Docker + seccomp + gVisor + capability dropping · Kubernetes-native health probes
Protocols
stdio · streamable HTTP · SSE · HMAC-chained audit logs
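The "HMAC-chained audit logs" listed above refer to a tamper-evidence scheme: each entry's MAC covers the previous entry's MAC, so altering or dropping any record invalidates every MAC after it. A standard-library sketch of that idea (key handling and record format here are illustrative only, not the Promptise format):

```python
# HMAC-chained audit log sketch: each entry's MAC is computed over the
# previous entry's MAC plus the new message, making edits detectable.
import hmac
import hashlib

KEY = b"audit-signing-key"  # in practice: from a secret store, rotated


def append_entry(log: list[dict], message: str) -> None:
    prev_mac = log[-1]["mac"] if log else "genesis"
    payload = f"{prev_mac}|{message}".encode()
    mac = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    log.append({"message": message, "mac": mac})


def verify_chain(log: list[dict]) -> bool:
    prev_mac = "genesis"
    for entry in log:
        payload = f"{prev_mac}|{entry['message']}".encode()
        expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev_mac = entry["mac"]
    return True


log: list[dict] = []
append_entry(log, "tool_call: query_pipeline")
append_entry(log, "budget: 12/500 tool calls used")
print(verify_chain(log))          # True: chain intact
log[0]["message"] = "tampered"
print(verify_chain(log))          # False: every later MAC now fails
```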
Contributing · Security · License: Apache 2.0
Built by Promptise
Formerly known as DeepMCPAgent — a public preview of one sliver of this framework (MCP-native agent tooling). Promptise Foundry is the full system it was a teaser for: reasoning engine, agent runtime, prompt engineering, sandboxed execution, governance, and observability.
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file promptise-1.0.0.tar.gz.
File metadata
- Download URL: promptise-1.0.0.tar.gz
- Upload date:
- Size: 2.5 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `de7aec62ce84df786530d1f56f265106210a05610a332495a4adf7c8b59c0cba` |
| MD5 | `48263298bf3f515720ef598fa283b8c0` |
| BLAKE2b-256 | `03b8c87a0e6c1ea86912608d6a7cbb7fe3b84b60f73b1f2be7ac35dcb3da685f` |
File details
Details for the file promptise-1.0.0-py3-none-any.whl.
File metadata
- Download URL: promptise-1.0.0-py3-none-any.whl
- Upload date:
- Size: 601.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `9987a6bb8c4dfffd254c33e2a79943cec4d6047361b645744f0a82bc362ed781` |
| MD5 | `a9c19d087f56df8d245164c541367c25` |
| BLAKE2b-256 | `ab6110be520f69f1aebaa0062ab8480e1b30409857ccdd8ce1a8e3afe9a88624` |
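To check a downloaded distribution against the SHA256 digests listed above, the standard library is enough; run this next to the downloaded file:

```python
# Verify a downloaded artifact against its published SHA256 digest.
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Hash in chunks so large archives don't need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


expected = "de7aec62ce84df786530d1f56f265106210a05610a332495a4adf7c8b59c0cba"
# print(sha256_of("promptise-1.0.0.tar.gz") == expected)  # uncomment after download
```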