
AgentFoundry: A modular autonomous AI agent framework


AgentFoundry is a modular, extensible AI library designed to support the construction and orchestration of autonomous agents across a variety of complex tasks. The system is built in Python and leverages modern AI tooling to integrate large language models (LLMs), vector stores, rule-based decision logic, and dynamic tool discovery in secure and performance-conscious environments.

Features

  • Modular agent architecture with support for specialization (e.g., memory agents, reactive agents, compliance agents)
  • Cython-compiled backend for performance and IP protection
  • Integration with popular frameworks such as LangChain, LangGraph, Milvus, and OpenAI
  • Workflow lifecycle management with delivery waves (pause/resume/cancel/retry/replan)
  • Support for licensed or embedded deployments via license file verification or compiled-only distribution
  • Configurable runtime enforcement of execution licenses (PQC-signed, optionally machine-bound)
  • Fail-fast initialization with eager backend verification (LLM ping, vector store connectivity, KGraph health)
  • Comprehensive structured logging with INFO-level startup diagnostics and DEBUG-level per-request tracing

Use Cases

AgentFoundry is designed to be embedded as a core intelligence engine in:

  • Secure enterprise AI platforms
  • Compliance monitoring and rule-based alerting systems
  • Applications with dynamic tool execution
  • SaaS and on-premise environments

Requirements

  • Python 3.11+
  • Cython
  • Compatible dependencies (see requirements.txt)

Quick Start

from agentfoundry.utils.agent_config import AgentConfig
from agentfoundry.registry.tool_registry import ToolRegistry
from agentfoundry.agents.orchestrator import Orchestrator

config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_OPENAI_MODEL": "gpt-4o",
    "AF_VECTORSTORE_PROVIDER": "milvus",
    "AF_MILVUS_URI": "http://localhost:19530",
})

registry = ToolRegistry(config=config)
registry.load_tools_from_directory()
orchestrator = Orchestrator(registry, config=config)

# Run a task
result = orchestrator.run_task("Summarize recent activity", config={
    "configurable": {"user_id": "u1", "thread_id": "t1", "org_id": "myorg"}
})

Configuration

AgentFoundry supports two configuration paths. The recommended approach is explicit configuration with AgentConfig.

Explicit config (recommended)

from agentfoundry.utils.agent_config import AgentConfig

config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_OPENAI_MODEL": "gpt-4o",
    "AF_VECTORSTORE_PROVIDER": "milvus",
    "AF_MILVUS_URI": "http://localhost:19530",
})

Legacy config (backward compatibility)

Set a config file explicitly and/or use environment variables:

export AGENTFOUNDRY_CONFIG_FILE="$HOME/.config/agentfoundry/agentfoundry.toml"
export OPENAI_API_KEY="sk-..."

Then, in Python:

from agentfoundry.utils.agent_config import AgentConfig

config = AgentConfig.from_legacy_config()

See docs/Configuration_Guide.md for full key reference and precedence rules.
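
For a file-based legacy setup, the config file might look like the following. This is a hypothetical sketch that mirrors the providers and endpoints shown above; the section and key names are illustrative, so verify the actual schema against docs/Configuration_Guide.md.

```toml
# Hypothetical agentfoundry.toml — section and key names are illustrative;
# consult docs/Configuration_Guide.md for the real schema.
[llm]
provider = "openai"
model = "gpt-4o"

[vectorstore]
provider = "milvus"
uri = "http://localhost:19530"
```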

Provider notes

  • LLM provider is selected by LLM_PROVIDER (openai, ollama, grok, gemini). OpenAI requires OPENAI_API_KEY when selected.
  • Vector store provider is selected by VECTORSTORE_PROVIDER (milvus, faiss).
    • Milvus: set MILVUS_URI or MILVUS_HOST + MILVUS_PORT.
    • FAISS: requires an existing index at FAISS_INDEX_PATH.
  • ThreadMemory uses OpenAI embeddings by default but falls back to deterministic hash embeddings if AF_DISABLE_OPENAI_EMBEDDINGS=1 or no API key is present.
  • The DuckDB KGraph backend requires duckdb and its ADBC drivers.
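
To illustrate the deterministic hash-embedding fallback mentioned above, here is a minimal sketch of the general technique: hash the text, expand the digest into a fixed-dimension vector, and normalize. This is not AgentFoundry's actual implementation, only an approximation of the idea (same text always maps to the same vector, with no API calls).

```python
import hashlib
import math

def hash_embedding(text: str, dim: int = 256) -> list[float]:
    """Deterministic pseudo-embedding: expand SHA-256 digests of the text
    into `dim` floats in [-1, 1], then L2-normalize the result."""
    values: list[float] = []
    counter = 0
    while len(values) < dim:
        digest = hashlib.sha256(f"{counter}:{text}".encode()).digest()
        values.extend(b / 127.5 - 1.0 for b in digest)  # byte -> [-1, 1]
        counter += 1
    values = values[:dim]
    norm = math.sqrt(sum(v * v for v in values)) or 1.0
    return [v / norm for v in values]
```

Because the mapping is deterministic, similarity search still behaves consistently across runs, though vectors carry no semantic meaning the way real embeddings do.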

Workflow Lifecycle

AgentFoundry includes a workflow engine that organises execution plans into dependency-ordered delivery waves. See docs/Workflow_Lifecycle_Guide.md for full documentation.

from agentfoundry.agents.workflow import WorkflowManager, build_waves

# my_executor is your callable that runs a single task from the plan
manager = WorkflowManager(task_executor=my_executor)
wf = manager.create_workflow(plan, config=config)  # plan comes from your planner
manager.run(wf.workflow_id)        # execute all waves
manager.pause(wf.workflow_id)      # pause between waves
manager.resume(wf.workflow_id)     # continue from pause
manager.retry(wf.workflow_id)      # retry failed tasks
manager.cancel(wf.workflow_id)     # cancel and skip remaining
manager.replan(wf.workflow_id)     # regenerate plan from current state
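
The "dependency-ordered delivery waves" idea can be shown standalone: every task in wave N depends only on tasks completed in earlier waves. The sketch below is an illustrative reimplementation (here named build_waves_sketch), not AgentFoundry's actual build_waves.

```python
def build_waves_sketch(plan: dict[str, set[str]]) -> list[list[str]]:
    """Group tasks into dependency-ordered waves.
    `plan` maps each task name to the set of tasks it depends on."""
    remaining = {task: set(deps) for task, deps in plan.items()}
    done: set[str] = set()
    waves: list[list[str]] = []
    while remaining:
        # A task is ready when all of its dependencies are already done
        ready = sorted(t for t, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("cycle detected in plan")
        waves.append(ready)
        done.update(ready)
        for t in ready:
            del remaining[t]
    return waves

waves = build_waves_sketch({"fetch": set(), "clean": {"fetch"}, "report": {"clean", "fetch"}})
# waves == [["fetch"], ["clean"], ["report"]]
```

Pausing between waves, retrying a failed wave, or replanning from the current `done` set all fall out naturally from this structure.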

Fail-Fast Initialization

By default, the Orchestrator verifies all backends (LLM, vector store, knowledge graph) at startup. If any backend is unreachable, a FatalInitializationError is raised. This exception inherits from BaseException (not Exception), so it escapes generic except Exception handlers — ensuring broken deployments are caught immediately.
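
To see why the BaseException design matters, here is a standalone sketch using a stand-in class (not the library's actual exception): a BaseException subclass sails straight past an except Exception handler.

```python
class FatalStartupError(BaseException):
    """Stand-in for FatalInitializationError, which subclasses BaseException."""

def start():
    raise FatalStartupError("LLM backend unreachable")

try:
    try:
        start()
    except Exception:
        caught_by_generic = True   # never reached: not an Exception subclass
except FatalStartupError:
    caught_by_generic = False      # the error escaped the generic handler

# caught_by_generic is False
```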

To allow degraded startup (e.g. for development without all backends available):

config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_FAIL_FAST": "false",  # warn on backend failures instead of crashing
})

To catch the error explicitly:

import logging

from agentfoundry.utils.exceptions import FatalInitializationError

logger = logging.getLogger(__name__)

try:
    orchestrator = Orchestrator(registry, config=config)
except FatalInitializationError as exc:
    logger.critical("Backends unavailable: %s", exc)

See docs/Configuration_Guide.md for the full FAIL_FAST reference.

Logging & Debugging

AgentFoundry uses standard Python logging throughout. Every module uses logging.getLogger(__name__) for hierarchical logger naming.

Logging strategy

  • INFO level — Startup and initialization events: backend connections, LLM ping results, config loading, tool registration summaries, and warm-up status.
  • DEBUG level — Per-request operations: similarity searches, cache hits/misses, tool calls, LangGraph invocations, timing details.
  • Timing measurements — Critical operations (LLM invoke, vector store queries, architect planning) are timed and logged with durations in milliseconds.
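
The timing pattern above can be sketched with the standard library. timed_call is a hypothetical helper, not part of AgentFoundry's API; it shows the general shape of wrapping a critical operation and logging its duration in milliseconds at DEBUG level.

```python
import logging
import time

logger = logging.getLogger("agentfoundry.example")

def timed_call(label, fn, *args, **kwargs):
    """Run fn(*args, **kwargs), logging its duration in ms at DEBUG level."""
    start = time.perf_counter()
    try:
        return fn(*args, **kwargs)
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        logger.debug("%s completed in %.1f ms", label, elapsed_ms)

result = timed_call("llm.invoke", lambda prompt: prompt.upper(), "hello")
```

Using a finally block means the duration is logged even when the wrapped call raises.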

Controlling log level

config = AgentConfig.from_dict({
    "AF_LOG_LEVEL": "DEBUG",
})

Or configure logging directly:

from agentfoundry.utils.logger import setup_logging
setup_logging(level="INFO", logfile="agentfoundry.log")

Notes

  • ThreadMemory falls back to hash embeddings if OpenAI embeddings are unavailable.
  • FAISS provider raises if FAISS_INDEX_PATH does not exist; initialize with your ingestion tooling.

Author

Christopher Steel
AI Practice Lead, AlphaSix Corporation
Founder, Syntheticore, Inc.
Email: csteel@syntheticore.com

Licensing and Legal Notice

© Syntheticore, Inc. All rights reserved.

This software is proprietary and confidential. Any use, reproduction, modification, distribution, or commercial deployment of AgentFoundry or any part thereof requires explicit written authorization from Syntheticore, Inc.

Unauthorized use is strictly prohibited and may result in legal action.


For licensing inquiries or permission to use this software, please contact: csteel@syntheticore.com


Download files

Download the wheel for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


agentfoundry-1.5.12-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (4.9 MB)
  CPython 3.12, manylinux: glibc 2.17+, x86-64

agentfoundry-1.5.12-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl (4.9 MB)
  CPython 3.12, manylinux: glibc 2.17+, ARM64

File hashes

agentfoundry-1.5.12-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl
  SHA256       d6e2f05a8d4a04224a597d94a60ef370e101e1f397368adc6c3edde3b0c9e137
  MD5          3ddf969a9a704f7248b35dec9b7f62cc
  BLAKE2b-256  e0a5d9d949b0e7154e1f204f07bfb2777dd698e949ac4ad25219005f6a5d748c

agentfoundry-1.5.12-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl
  SHA256       ddaee6381931eced24e24f096704f0b676e01633808a9c60b52e0ae5e5e5a3a1
  MD5          9d23b7efc84131aaceccc734e1cc46b1
  BLAKE2b-256  146d7bc67e69017714f4eaba64f8b014f3db5081b30e0d59e9839c3012fee83c
