AgentFoundry: A modular autonomous AI agent framework
AgentFoundry is a modular, extensible AI library designed to support the construction and orchestration of autonomous agents across a variety of complex tasks. The system is built in Python and leverages modern AI tooling to integrate large language models (LLMs), vector stores, rule-based decision logic, and dynamic tool discovery in secure and performance-conscious environments.
Features
- Modular agent architecture with support for specialization (e.g., memory agents, reactive agents, compliance agents)
- Cython-compiled backend for performance and IP protection
- Integration with popular frameworks such as LangChain, LangGraph, Milvus, and OpenAI
- Workflow lifecycle management with delivery waves (pause/resume/cancel/retry/replan)
- Support for licensed or embedded deployments via license file verification or compiled-only distribution
- Configurable runtime enforcement of execution licenses (PQC-signed, optionally machine-bound)
- Fail-fast initialization with eager backend verification (LLM ping, vector store connectivity, KGraph health)
- Comprehensive structured logging with INFO-level startup diagnostics and DEBUG-level per-request tracing
Use Cases
AgentFoundry is designed to be embedded as a core intelligence engine in:
- Secure enterprise AI platforms
- Compliance monitoring and rule-based alerting systems
- Applications with dynamic tool execution
- SaaS and on-premise environments
Requirements
- Python 3.11+
- Cython
- Compatible dependencies (see requirements.txt)
Quick Start
from agentfoundry.utils.agent_config import AgentConfig
from agentfoundry.registry.tool_registry import ToolRegistry
from agentfoundry.agents.orchestrator import Orchestrator
config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_OPENAI_MODEL": "gpt-4o",
    "AF_VECTORSTORE_PROVIDER": "milvus",
    "AF_MILVUS_URI": "http://localhost:19530",
})
registry = ToolRegistry(config=config)
registry.load_tools_from_directory()
orchestrator = Orchestrator(registry, config=config)
# Run a task
result = orchestrator.run_task("Summarize recent activity", config={
    "configurable": {"user_id": "u1", "thread_id": "t1", "org_id": "myorg"}
})
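The configurable keys scope a run to a user, thread, and organization. Reusing the same thread_id on a later call continues the same conversation context (a minimal sketch, assuming thread-scoped memory as the ThreadMemory notes below suggest):
# Follow-up task in the same thread; prior context is available to the agent
followup = orchestrator.run_task("List any anomalies in that activity", config={
    "configurable": {"user_id": "u1", "thread_id": "t1", "org_id": "myorg"}
})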
Configuration
AgentFoundry supports two configuration paths. The recommended approach is explicit configuration with AgentConfig.
Explicit config (recommended)
from agentfoundry.utils.agent_config import AgentConfig
config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_OPENAI_MODEL": "gpt-4o",
    "AF_VECTORSTORE_PROVIDER": "milvus",
    "AF_MILVUS_URI": "http://localhost:19530",
})
Legacy config (backward compatibility)
Set a config file explicitly and/or use environment variables:
export AGENTFOUNDRY_CONFIG_FILE="$HOME/.config/agentfoundry/agentfoundry.toml"
export OPENAI_API_KEY="sk-..."
Then, in Python:
from agentfoundry.utils.agent_config import AgentConfig
config = AgentConfig.from_legacy_config()
See docs/Configuration_Guide.md for full key reference and precedence rules.
Provider notes
- LLM provider is selected by LLM_PROVIDER (openai, ollama, grok, gemini). OpenAI requires OPENAI_API_KEY when selected.
- Vector store provider is selected by VECTORSTORE_PROVIDER (milvus, faiss).
  - Milvus: set MILVUS_URI or MILVUS_HOST + MILVUS_PORT.
  - FAISS: requires an existing index at FAISS_INDEX_PATH.
- ThreadMemory uses OpenAI embeddings by default but falls back to deterministic hash embeddings if AF_DISABLE_OPENAI_EMBEDDINGS=1 or no API key is present.
- The DuckDB KGraph backend requires duckdb and its ADBC drivers.
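As a concrete example, a FAISS-backed setup might be configured as follows (a sketch: AF_FAISS_INDEX_PATH is an assumed key name following the AF_ prefix convention shown earlier; see docs/Configuration_Guide.md for the authoritative keys):
from agentfoundry.utils.agent_config import AgentConfig
# FAISS requires a pre-built index on disk (see Provider notes above)
config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_VECTORSTORE_PROVIDER": "faiss",
    "AF_FAISS_INDEX_PATH": "/var/lib/agentfoundry/faiss_index",  # assumed key name
})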
Workflow Lifecycle
AgentFoundry includes a workflow engine that organizes execution plans into dependency-ordered delivery waves. See docs/Workflow_Lifecycle_Guide.md for full documentation.
from agentfoundry.agents.workflow import WorkflowManager, build_waves
manager = WorkflowManager(task_executor=my_executor)
wf = manager.create_workflow(plan, config=config)
manager.run(wf.workflow_id) # execute all waves
manager.pause(wf.workflow_id) # pause between waves
manager.resume(wf.workflow_id) # continue from pause
manager.retry(wf.workflow_id) # retry failed tasks
manager.cancel(wf.workflow_id) # cancel and skip remaining
manager.replan(wf.workflow_id) # regenerate plan from current state
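The task_executor callable above is supplied by the host application; its exact signature and task shape are defined in docs/Workflow_Lifecycle_Guide.md. A minimal sketch, assuming each task carries an id and a description (both hypothetical attribute names):
def my_executor(task):
    # Hypothetical executor: delegate each task in a wave to the orchestrator.
    # The real signature and task shape are defined in the Workflow Lifecycle Guide.
    return orchestrator.run_task(task.description, config={
        "configurable": {"user_id": "system", "thread_id": task.id, "org_id": "myorg"}
    })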
Fail-Fast Initialization
By default, the Orchestrator verifies all backends (LLM, vector store, knowledge graph) at startup. If any backend is unreachable, a FatalInitializationError is raised. This exception inherits from BaseException (not Exception), so it escapes generic except Exception handlers — ensuring broken deployments are caught immediately.
To allow degraded startup (e.g. for development without all backends available):
config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_FAIL_FAST": "false",  # warn on backend failures instead of crashing
})
To catch the error explicitly:
from agentfoundry.utils.exceptions import FatalInitializationError
import logging
logger = logging.getLogger(__name__)
try:
    orchestrator = Orchestrator(registry, config=config)
except FatalInitializationError as exc:
    logger.critical("Backends unavailable: %s", exc)
See docs/Configuration_Guide.md for the full FAIL_FAST reference.
Logging & Debugging
AgentFoundry uses standard Python logging throughout. Every module uses logging.getLogger(__name__) for hierarchical logger naming.
Logging strategy
- INFO level — Startup and initialization events: backend connections, LLM ping results, config loading, tool registration summaries, and warm-up status.
- DEBUG level — Per-request operations: similarity searches, cache hits/misses, tool calls, LangGraph invocations, timing details.
- Timing measurements — Critical operations (LLM invoke, vector store queries, architect planning) are timed and logged with durations in milliseconds.
Controlling log level
config = AgentConfig.from_dict({
    "AF_LOG_LEVEL": "DEBUG",
})
Or configure logging directly:
from agentfoundry.utils.logger import setup_logging
setup_logging(level="INFO", logfile="agentfoundry.log")
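Because every module logs under its own dotted name (see above), standard-library logging can also target the framework selectively, e.g. enabling verbose tracing for AgentFoundry alone while the host application stays at INFO (plain stdlib logging; no AgentFoundry-specific API assumed):
import logging
logging.basicConfig(level=logging.INFO)  # host application default
logging.getLogger("agentfoundry").setLevel(logging.DEBUG)  # per-request tracing for the library only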
Notes
- ThreadMemory falls back to hash embeddings if OpenAI embeddings are unavailable.
- FAISS provider raises if FAISS_INDEX_PATH does not exist; initialize with your ingestion tooling.
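For illustration, one way to bootstrap such an index with LangChain tooling, assuming your ingestion pipeline uses LangChain's FAISS wrapper and OpenAI embeddings (generic ingestion code, not an AgentFoundry API):
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
# Build a small index from seed documents and persist it where FAISS_INDEX_PATH points
index = FAISS.from_texts(["seed document"], OpenAIEmbeddings())
index.save_local("/var/lib/agentfoundry/faiss_index")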
Author
Christopher Steel
AI Practice Lead, AlphaSix Corporation
Founder, Syntheticore, Inc.
Email: csteel@syntheticore.com
Licensing and Legal Notice
© Syntheticore, Inc. All rights reserved.
This software is proprietary and confidential. Any use, reproduction, modification, distribution, or commercial deployment of AgentFoundry or any part thereof requires explicit written authorization from Syntheticore, Inc.
Unauthorized use is strictly prohibited and may result in legal action.
For licensing inquiries or permission to use this software, please contact: csteel@syntheticore.com
File details
Details for the file agentfoundry-1.4.58-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.
File metadata
- Download URL: agentfoundry-1.4.58-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl
- Upload date:
- Size: 4.6 MB
- Tags: CPython 3.12, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 92f161957ef5e7c29883bf46a110968150bf59123b7c8176506167fe6ead5bb5 |
| MD5 | 56604752120b317579344aac59448ffd |
| BLAKE2b-256 | 1d7bd86adc987f88052eb25d4c807c474fc5234cea226ff917d8c6a1fc403561 |