# Tramontane

Mistral-native agent orchestration framework — EU sovereign, GDPR built-in.

The only agent framework with state-of-the-art memory, typed skills, and intelligent model routing: Mistral-native orchestration with 3-tier memory, composable skills, 4-channel retrieval, and GDPR compliance. Built in Orleans, France.

```shell
pip install tramontane
```
## Why Tramontane?
| Feature | CrewAI | LangGraph | OpenClaw | Tramontane |
|---|---|---|---|---|
| Role-based agents | Yes | No | No | Yes |
| 3-tier memory (working+factual+experiential) | No | No | Basic | Yes |
| Agent-controlled memory tools | No | No | No | Yes |
| 4-channel retrieval + RRF fusion | No | No | No | Yes |
| Typed skills with profiling | No | No | .md only | Yes |
| Skill composition (pipelines) | No | No | Lobster | Yes |
| Tool calling (native functions) | Yes | Yes | Yes | Yes |
| Structured output (Pydantic) | Yes | No | No | Yes |
| Reasoning effort control | No | No | No | Yes |
| Progressive reasoning | No | No | No | Yes |
| Model cascading | No | No | No | Yes |
| Self-learning router | No | No | No | Yes |
| FleetTuner (auto-optimize) | No | No | No | Yes |
| Parallel execution | Yes | Yes | No | Yes |
| Knowledge bases (RAG) | Yes | Yes | No | Yes |
| Voice pipelines (TTS/STT) | No | No | No | Yes |
| Cost simulation (dry run) | No | No | No | Yes |
| EUR cost tracking | No | No | No | Yes |
| GDPR middleware | No | No | No | Yes |
| MCP tool export | No | No | No | Yes |
## Quick Start

```python
import asyncio

from tramontane import Agent, MistralRouter

agent = Agent(
    role="Analyst",
    goal="Analyze market trends",
    backstory="Senior market analyst",
    model="auto",
    budget_eur=0.01,
)

async def main():
    router = MistralRouter()
    result = await agent.run("Analyze the EU AI market", router=router)
    print(f"Model: {result.model_used}, Cost: EUR {result.cost_eur:.4f}")
    print(result.output)

asyncio.run(main())
```
## Memory

3-tier memory: working (always in context), factual (knowledge graph), experiential (self-improvement).

```python
from tramontane import Agent, TramontaneMemory

memory = TramontaneMemory(db_path="memory.db")

agent = Agent(
    role="Gerald",
    goal="Remember everything about clients",
    backstory="Autonomous business agent",
    tramontane_memory=memory,
    memory_tools=True,         # Gets retain/recall/reflect/forget/update tools
    auto_extract_facts=True,   # Auto-extracts facts after every run
    working_memory_blocks=["Goals", "User"],
)
```

During execution the agent can call `retain_memory("Acme Corp prefers React")`, `recall_memory("What does Acme prefer?")`, `reflect_on_memory("What patterns have I seen?")`, `forget_memory(id, "GDPR request")`, and `update_memory(id, "new info")`.

4-channel retrieval: semantic (cosine similarity on mistral-embed vectors) + keyword (FTS5 BM25) + entity (graph traversal) + temporal (recency + frequency). Results are fused via Reciprocal Rank Fusion (k=60).
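Reciprocal Rank Fusion itself is small enough to sketch. The function below is illustrative only (not Tramontane's internals): it fuses several best-first ranked lists using the same k=60 constant, scoring each document by the sum of 1/(k + rank) over every list it appears in.

```python
def rrf_fuse(ranked_lists, k=60):
    """Fuse ranked result lists with Reciprocal Rank Fusion.

    Each input list is ordered best-first; a document's fused score is
    the sum of 1 / (k + rank) over every list that contains it.
    """
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Toy channel outputs: a doc ranked well in several channels wins overall
semantic = ["a", "b", "c"]
keyword = ["b", "a", "d"]
entity = ["c", "b"]
fused = rrf_fuse([semantic, keyword, entity])
```

Here "b" ranks first because it appears near the top of all three lists, even though it tops only one of them — the usual argument for RRF over raw score averaging.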
## Skills

Typed, composable, learnable capabilities with profiling and security.

```python
from tramontane import Skill, SkillResult, SkillRegistry, track_skill

class LeadQualifier(Skill):
    name = "lead_qualifier"
    description = "Score B2B leads against ICP"
    triggers = ["qualify", "score lead"]
    preferred_model = "ministral-3b-latest"

    @track_skill  # Auto-logs timing, cost, success/failure
    async def execute(self, input_text, context=None):
        from tramontane import Agent
        agent = Agent(role="Qualifier", goal="Score leads", backstory="Sales expert",
                      model=self.preferred_model)
        result = await agent.run(input_text)
        return SkillResult(output=result.output, success=True, cost_eur=result.cost_eur)

registry = SkillRegistry()
registry.register(LeadQualifier())  # SHA-256 hash + security scan
matches = registry.search("qualify this lead")
```

Includes 5 built-in skills: TextAnalysis, CodeGeneration, EmailDraft, DataExtraction, WebSearch. Supports Python, YAML, and SKILL.md formats. Compose with SkillPipeline, ConditionalSkill, ParallelSkills.
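Trigger-based lookup like `registry.search` above can be pictured as matching trigger phrases against the query. The toy registry below is purely illustrative — it is not Tramontane's implementation — but shows the basic shape of that matching:

```python
class MiniRegistry:
    """Toy trigger-based skill lookup; illustrative only."""

    def __init__(self):
        self._skills = []

    def register(self, name, triggers):
        self._skills.append((name, [t.lower() for t in triggers]))

    def search(self, query):
        q = query.lower()
        # A skill matches when any of its trigger phrases occurs in the query
        return [name for name, triggers in self._skills
                if any(t in q for t in triggers)]

reg = MiniRegistry()
reg.register("lead_qualifier", ["qualify", "score lead"])
reg.register("email_draft", ["draft email", "write email"])
matches = reg.search("Please qualify this lead")
```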
## Tool Calling

```python
async def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

agent = Agent(
    role="Researcher",
    goal="Find information",
    backstory="Expert researcher",
    tools=[search_web],
    tool_choice="auto",  # "auto" | "none" | "any" | "required"
    parallel_tool_calls=True,
    max_iter=5,
)

result = await agent.run("Research Mistral AI", router=router)
print(result.tool_calls)
```
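Under the hood, function-calling frameworks typically derive a JSON schema for each tool from its signature and docstring. The sketch below uses only the standard library and a deliberately minimal type map (an assumption, not Tramontane's code) to show how a typed Python function becomes a Mistral-style function schema:

```python
import inspect

# Minimal, illustrative type map; real frameworks handle many more types
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Build a function-calling schema from a typed Python function."""
    sig = inspect.signature(fn)
    props = {
        name: {"type": _JSON_TYPES.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {"type": "object", "properties": props,
                       "required": list(props)},
    }

async def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

schema = tool_schema(search_web)
```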
## Structured Output

```python
from pydantic import BaseModel

class Analysis(BaseModel):
    summary: str
    score: int
    recommendations: list[str]

agent = Agent(role="Analyst", goal="Analyze", backstory="Expert",
              output_schema=Analysis)

result = await agent.run("Analyze this market")
analysis: Analysis = result.parsed_output  # Validated Pydantic model
```
## Smart Fleet

### Reasoning Effort

```python
agent = Agent(model="mistral-small-4", reasoning_effort="high")  # none | medium | high
```
### Progressive Reasoning

```python
agent = Agent(model="mistral-small-4", reasoning_strategy="progressive",
              validate_output=lambda r: "conclusion" in r.output)
# Tries none -> medium -> high, stops at first success
```
### Model Cascading

```python
agent = Agent(model="devstral-small",
              cascade=["devstral-2", "mistral-large-3"],
              validate_output=lambda r: len(r.output) > 1000)
```
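Progressive reasoning and cascading share the same try-validate-escalate loop: attempt the cheapest option, check the result, escalate only on failure. A generic, framework-free sketch of that loop (with stub model outputs standing in for real calls):

```python
def run_with_cascade(models, call, validate):
    """Try models cheapest-first; return the first result that validates.

    `call(model)` produces an output; `validate(output)` decides whether
    to accept it or escalate to the next model in the cascade.
    """
    result = None
    for model in models:
        result = call(model)
        if validate(result):
            return model, result
    return models[-1], result  # all failed: keep the last attempt

# Stub outputs: pretend only the largest model clears the length check
outputs = {"devstral-small": "short", "devstral-2": "a medium answer",
           "mistral-large-3": "x" * 1200}
model, out = run_with_cascade(
    ["devstral-small", "devstral-2", "mistral-large-3"],
    call=outputs.get,
    validate=lambda r: len(r) > 1000,
)
```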
### FleetTuner

```python
from tramontane import FleetTuner

tuner = FleetTuner()
result = await tuner.tune(agent, ["prompt1", "prompt2"], optimize_for="balanced")
optimized = result.apply(agent)
```
### Self-Learning Router

```python
from tramontane import MistralRouter, FleetTelemetry

router = MistralRouter(telemetry=FleetTelemetry())
# After 50+ decisions, routes by YOUR production data
```
### Fleet Profiles

```python
from tramontane import FleetProfile

agent = Agent(fleet_profile=FleetProfile.BUDGET)  # BUDGET | BALANCED | QUALITY | UNIFIED
```
## Parallel Execution

```python
from tramontane import ParallelGroup

group = ParallelGroup([designer, architect])
result = await group.run(input_text="Design a website")
print(result.get("Designer").output)
print(f"Total: EUR {result.total_cost_eur:.4f}")
```
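Conceptually, running a group of agents in parallel maps onto `asyncio.gather`. The sketch below uses stub agents (not the real `Agent.run`) to show the fan-out, result collection, and cost aggregation pattern:

```python
import asyncio

async def stub_agent(role, prompt):
    """Stand-in for an Agent.run call; returns (role, output, cost_eur)."""
    await asyncio.sleep(0)  # yield control, as a real API call would
    return role, f"{role} result for: {prompt}", 0.002

async def run_parallel(roles, prompt):
    # Fan out all agents concurrently, then collect outputs and sum costs
    results = await asyncio.gather(*(stub_agent(r, prompt) for r in roles))
    total = sum(cost for _, _, cost in results)
    return {role: output for role, output, _ in results}, total

outputs, total_cost = asyncio.run(
    run_parallel(["Designer", "Architect"], "Design a website"))
```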
## Knowledge Bases (RAG)

```python
from tramontane import KnowledgeBase

kb = KnowledgeBase(db_path="knowledge.db")
await kb.ingest(sources=["docs/*.md"])

agent = Agent(role="Support", goal="Help", backstory="Expert",
              knowledge=kb, knowledge_top_k=5)
```
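Behind a `knowledge_top_k` parameter sits a standard embed-and-rank step: embed the query, score it against document embeddings, keep the k best. A minimal cosine-similarity sketch over toy 2-dimensional vectors (real systems use mistral-embed vectors with far more dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, docs, k=2):
    """Return the ids of the k docs most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# Toy document embeddings keyed by filename
docs = {"pricing.md": [0.9, 0.1], "setup.md": [0.1, 0.9], "faq.md": [0.7, 0.3]}
hits = top_k([1.0, 0.0], docs, k=2)
```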
## Pipeline YAML

```yaml
name: Lead Gen
budget_eur: 0.01
agents:
  researcher:
    role: Researcher
    model: mistral-small-4
  writer:
    role: Writer
    model: devstral-small
    temperature: 0.8
flow: [researcher, writer]
```

```shell
tramontane run pipeline.yaml --input "Research Scaleway"
```
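The `flow` list declares a sequential pipeline: each agent receives the previous agent's output. The stub runner below sketches that semantics in plain Python (the lambdas stand in for the YAML-defined researcher and writer):

```python
def run_flow(flow, agents, input_text):
    """Run agents in flow order; each step receives the previous output."""
    text = input_text
    for name in flow:
        text = agents[name](text)
    return text

# Stub agents standing in for real model-backed ones
agents = {
    "researcher": lambda t: f"[research on: {t}]",
    "writer": lambda t: f"Article based on {t}",
}
out = run_flow(["researcher", "writer"], agents, "Research Scaleway")
```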
## Voice Pipelines

```python
from tramontane import VoicePipeline

vpipe = VoicePipeline(agent=my_agent, enable_tts=True)
result = await vpipe.run(text_input="Brief me on today's leads")
# result.audio_bytes = spoken response via Voxtral TTS
```
## Streaming

```python
async for event in agent.run_stream("Generate a report",
                                    on_pattern={r"## (?P<section>.+)": on_section_found}):
    if event.type == "token":
        print(event.token, end="", flush=True)
    elif event.type == "tool_call":
        print(f"\n[Calling {event.tool_name}]")
```
## GDPR

```python
agent = Agent(role="Processor", goal="Process data", backstory="Expert",
              gdpr_level="strict", audit_actions=True)
# Built-in PII detection, Article 17 erasure, Article 30 reports
```
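PII detection commonly starts with pattern matching. The sketch below is a deliberately minimal illustration with just two hypothetical patterns (email, French mobile number); real GDPR middleware would combine many more patterns with NER and context rules:

```python
import re

# Hypothetical, minimal patterns; production systems use far more
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+\d{2}[\s.]?\d(?:[\s.]?\d{2}){4}\b"),
}

def detect_pii(text):
    """Return {kind: [matches]} for every pattern that fires on the text."""
    found = {}
    for kind, pattern in PII_PATTERNS.items():
        hits = pattern.findall(text)
        if hits:
            found[kind] = hits
    return found

report = detect_pii("Contact jean@example.fr or +33 6 12 34 56 78")
```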
## The Mistral Fleet

| Model | Best For | EUR/1M in/out | Reasoning | Vision |
|---|---|---|---|---|
| ministral-3b | Classification, triage | 0.04/0.04 | | |
| ministral-7b | Bulk, extraction | 0.10/0.10 | | |
| mistral-small | General, multilingual | 0.10/0.30 | | |
| mistral-small-4 | General + reasoning + vision | 0.15/0.60 | Yes | Yes |
| devstral-small | Code generation | 0.10/0.30 | | |
| devstral-2 | Complex SWE | 0.50/1.50 | | |
| magistral-small | Reasoning, planning | 0.50/1.50 | | |
| magistral-medium | Deep reasoning | 2.00/5.00 | | |
| mistral-large | Frontier synthesis | 2.00/6.00 | | |
| mistral-large-3 | Frontier (Apache 2.0) | 2.00/6.00 | | |
| pixtral-large | Vision, OCR | 2.00/6.00 | | |
| voxtral-mini | Transcription | 0.04/0.04 | | |
| voxtral-tts | Text-to-speech | 0.016/char | | |
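Per-call cost from the table is simply token counts divided by one million, times the in/out rates. A quick sketch using the mistral-small-4 row (0.15 in / 0.60 out) as an example:

```python
def cost_eur(tokens_in, tokens_out, price_in, price_out):
    """EUR cost given token counts and per-million-token rates."""
    return (tokens_in * price_in + tokens_out * price_out) / 1_000_000

# mistral-small-4: EUR 0.15 in / 0.60 out per 1M tokens (from the table above)
c = cost_eur(tokens_in=2_000, tokens_out=500, price_in=0.15, price_out=0.60)
# 2,000 * 0.15 + 500 * 0.60 = 600 -> EUR 0.0006
```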
## CLI

```shell
tramontane models                     # Fleet with pricing + capabilities
tramontane doctor                     # Health check + API connectivity
tramontane fleet                      # Fleet stats from telemetry
tramontane simulate pipeline.yaml     # Cost estimate without API calls
tramontane knowledge ingest docs/     # Build knowledge base
tramontane knowledge search "query"   # Search knowledge base
tramontane telemetry stats            # Router learning metrics
```
## Built With Tramontane
- ArkhosAI — EU answer to Lovable. 4-agent website generator, EUR 0.004/generation.
- Gerald — Autonomous business intelligence agent with memory + skills.
## Install

```shell
pip install tramontane             # Core
pip install tramontane[redis]      # Redis memory backend
pip install tramontane[postgres]   # PostgreSQL + pgvector
pip install tramontane[voice]      # Voice gateway
pip install tramontane[sandbox]    # E2B code sandbox
```
## License

MIT — Bleucommerce SAS, Orleans, France