
Fan-Out/Fan-In agentic orchestration framework with multi-provider LLM support and MCP tool integration.

Cortex Agent Framework

One YAML file. One method call. A production AI agent.
Stop rebuilding the same agent plumbing. Define it once, deploy anywhere, let it learn.

PyPI · MIT license · Python 3.11+ · 8 LLM providers · MCP native



The problem

Every AI team builds the same stack: task decomposition, parallel tool execution, streaming, retries, session management, quality scoring, multi-provider routing, deployment. Most teams rebuild it two or three times before shipping.

Cortex is that stack. Pre-built. Battle-tested. Driven by config, not code.


3 commands. A running agent.

pip install cortex-agent-framework
cortex setup            # visual wizard at localhost:7799
cortex publish ui       # chat UI at localhost:8090

You now have a working agent with a professional web interface, file upload support, streaming responses, and persistent chat history. No frontend to build. No backend to wire. No infrastructure to manage.


Define your agent in YAML. Run it in Python.

agent:
  name: ResearchAgent
  description: Searches the web and writes reports

llm_access:
  default:
    provider: anthropic
    model: claude-sonnet-4-5
    api_key_env_var: ANTHROPIC_API_KEY

task_types:
  - name: web_research
    capability_hint: web_search
    output_format: md
  - name: write_report
    capability_hint: document_generation
    depends_on: [web_research]

from cortex.framework import CortexFramework

framework = CortexFramework("cortex.yaml")
await framework.initialize()

result = await framework.run_session(
    user_id="user_1",
    request="Research the latest vector DB benchmarks and write a report",
)
print(result.response)

Fan-out, tool calls, dependency resolution, synthesis, validation — all handled.
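The llm_access example above defines only a default model; per the feature list, routing can also differ per task stage. A fragment might look like the following sketch — the routing keys beyond `llm_access.default` and the fast-model id are illustrative assumptions, not the documented schema:

```yaml
# Illustrative: route decomposition to a fast model, keep the flagship as default.
# Key names other than `llm_access.default` are assumptions; check the
# Configuration reference for the real schema.
llm_access:
  default:
    provider: anthropic
    model: claude-sonnet-4-5
    api_key_env_var: ANTHROPIC_API_KEY
  decomposition:
    provider: anthropic
    model: claude-3-5-haiku-latest   # placeholder fast-model id
    api_key_env_var: ANTHROPIC_API_KEY
```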


Why teams choose Cortex

Skip months of framework engineering

The orchestration, parallelism, tool integration, streaming, retries, validation, session persistence, and deployment pipeline — it's all in the box. Your Python code stays a thin wrapper. The agent's behavior lives in cortex.yaml, versioned, diffable, reviewable.

Multi-agent composition for free

Any Cortex agent becomes an MCP server in one command. Other agents consume it as a tool. That's the entire inter-agent protocol — standard MCP, nothing custom.

Orchestrator → Research Agent (MCP :8081) → brave-search, wikipedia
             → Code Review Agent (MCP :8082) → github, filesystem
             → Writing Agent (MCP :8083) → document-gen

Each agent scales, deploys, and configures independently. Compose them by adding YAML lines.
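In the orchestrator's config, consuming published agents could look like the sketch below — the `mcp_servers` key and field names are assumptions for illustration (ports taken from the diagram above), not the framework's documented schema:

```yaml
# Hypothetical orchestrator fragment: two published agents consumed as MCP tools.
# Field names are illustrative; see the Configuration reference for the real schema.
mcp_servers:
  - name: research_agent
    transport: sse
    url: http://localhost:8081/sse
  - name: code_review_agent
    transport: sse
    url: http://localhost:8082/sse
```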

Built-in quality gates

Every response passes through a Validation Agent that scores intent match, completeness, and coherence. You set a floor (default: 0.75); responses below it are flagged and remediated. Bad intermediate outputs are retried with feedback before the pipeline moves on.

Catch regressions before users do, not after.
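A config fragment that raises the floor might look like this sketch — the `validation` key and field names are assumptions for illustration; consult the Configuration reference for the real schema:

```yaml
# Illustrative fragment — field names are assumed, not the documented schema.
validation:
  min_score: 0.9          # default floor is 0.75; raise it for stricter gating
  remediate_failures: true
```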

Your agent gets smarter over time

The Learning Engine observes task patterns across sessions. When patterns recur, it stages a delta proposal — a concrete config change you review with cortex delta review and apply in one command. The agent doesn't silently drift; it surfaces what it learned and asks for approval.
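As a sketch of what a staged proposal could contain — the shape below is illustrative only; the actual delta format is defined by the framework and not shown here:

```yaml
# Illustrative only: a delta proposal as a small, reviewable config change.
# Review staged deltas with `cortex delta review` before applying.
delta:
  observation: "web_research tasks consistently used the brave-search tool"
  proposed_change:
    task_types:
      - name: web_research
        capability_hint: brave_search
```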


What's in the box

  • 8 LLM providers: Anthropic, OpenAI, Gemini, Grok, Mistral, DeepSeek, AWS Bedrock, Azure AI — swap with one YAML line
  • Fan-out / fan-in: LLM-generated DAG with parallel execution; independent tasks run simultaneously
  • MCP-native tools: first-class SSE, stdio, and streamable-HTTP MCP tool servers
  • Multi-agent mesh: publish any agent as an MCP server — compose specialist agents into an orchestrator
  • Identity & delegation: first-class Principal model — human, system, or agent-to-agent calls with full delegation chains in audit logs
  • Quality validation: every response scored and gated; per-task validation inside the execution loop
  • Delta learning: the agent proposes improvements; human-in-the-loop review before apply
  • Blueprints: reusable workflow knowledge loaded into context, auto-updated with consent
  • Streaming: typed event classes (StatusEvent, ResultEvent, ClarificationEvent) for any UI
  • Per-task LLM routing: route decomposition to a fast model, synthesis to a flagship model
  • Session persistence: Memory / SQLite / Redis with WAL replay and resumable sessions
  • Built-in chat UI: web frontend with file uploads, streaming, and conversation history
  • 4 deploy targets: publish docker, publish package, publish mcp, publish ui
  • Visual setup wizard: configure everything from a browser — cortex setup
  • Security: input sanitization, credential scrubbing, sandboxed code execution, MCP output guard
  • Observability: OpenTelemetry, audit logs, anomaly detection, token budgets

How Cortex compares

| Capability | Cortex | Typical agent frameworks |
| --- | --- | --- |
| Configuration | Single cortex.yaml drives everything | Scattered code, env vars, multiple config files |
| Task orchestration | LLM-generated DAG with parallel fan-out/fan-in | Sequential chain or hand-coded state machine |
| Tool protocol | Native MCP (SSE, stdio, streamable-HTTP) | Custom tool wrappers per integration |
| Multi-agent | Any agent becomes an MCP tool in one command | Bespoke inter-agent protocols |
| Quality gates | Built-in validation with scoring + remediation | Manual testing or nothing |
| Learning | Delta proposals + blueprints with human review | Prompt tweaking by hand |
| LLM providers | 8 built-in, swap via config | Usually 1-2, hard-coded |
| Deployment | 4 targets, one command each | Write your own Dockerfile |

Who is Cortex for?

| You are... | Cortex gives you... |
| --- | --- |
| Startup founder shipping an AI product | A production agent runtime in an afternoon — skip 3-6 months of plumbing |
| Platform team at a larger company | A governed agent runtime with audit trails, quality gates, and per-user isolation |
| Enterprise architect | Multi-agent meshes with independent scaling and compliance-friendly history encryption |
| Solo developer | Prototype to production with one YAML file |
| Researcher | Swap providers, models, and tools from config — run experiments without touching code |
| MLOps engineer | Validation scores, session replay, token accounting, and OpenTelemetry out of the box |

What Cortex is not

  • Not a low-code builder. It's a Python library. The config replaces boilerplate, not code.
  • Not an LLM gateway. Bring your own API key.
  • Not a vector database. It calls MCP tools that do RAG — it doesn't implement retrieval itself.
  • Not a web framework. Cortex runs inside FastAPI/Django/Flask/Click.

Documentation

| Document | Read this if you want to... |
| --- | --- |
| Overview | ...understand what Cortex is, who it's for, and why it exists |
| Architecture | ...see the internals: primary agent, task graph, MCP agents, validation |
| Features | ...scan the full feature matrix |
| Getting Started | ...build your first agent with working code |
| Use Cases | ...see real-world scenarios and reference architectures |
| Configuration | ...look up every cortex.yaml field |
| CLI Reference | ...look up every cortex subcommand |
| Deployment | ...ship to production |
| FAQ | ...find answers to common gotchas |
| Contributing | ...report bugs or submit PRs |

Community & support

  • Issues: file bugs and feature requests on GitHub Issues
  • Discussions: ask questions on GitHub Discussions
  • Security: report vulnerabilities privately, not in public issues

License

MIT — see LICENSE. Use it commercially, fork it, ship it.

Define once. Deploy anywhere. Let it learn.

Download files

  • Source distribution: cortex_agent_framework-1.1.0.tar.gz (165.4 kB)
  • Built distribution: cortex_agent_framework-1.1.0-py3-none-any.whl (194.7 kB)
File details: cortex_agent_framework-1.1.0.tar.gz

  • Size: 165.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

Hashes:

  • SHA256: 01a6e261797e8c89cf56d93361cbcb25ff057c7bb1341177e6f1ff504dffb82b
  • MD5: 6bf0c209e6eb8ae1cf2b9960911145ac
  • BLAKE2b-256: 506c1957af4a928220d7b2bad6609f767a88cd6e0a8fd26b3ae7886e8d7815b1

Provenance: attestation bundles published by publish.yml on kritird/Cortex-Agent-Framework. Values shown reflect the state when the release was signed and may no longer be current.

File details: cortex_agent_framework-1.1.0-py3-none-any.whl

Hashes:

  • SHA256: e882529629cb78496d4c7546eb4d533f900858c800f1b713bcf307fbb3a9e476
  • MD5: 32b6d620f5e7e86eb63076740c8375b1
  • BLAKE2b-256: 6a9953fb2a193dce6276e1899837c1de5d3f4612683e6b490939968b5854cd2d

Provenance: attestation bundles published by publish.yml on kritird/Cortex-Agent-Framework. Values shown reflect the state when the release was signed and may no longer be current.
