AgentFoundry: A modular autonomous AI agent framework

Project description

AIgent

AIgent is a modular, extensible AI framework designed to support the construction and orchestration of autonomous agents across a variety of complex tasks. The system is built in Python and leverages modern AI tooling to integrate large language models (LLMs), vector stores, rule-based decision logic, and dynamic tool discovery in secure and performance-conscious environments.

Features

  • Modular agent architecture with support for specialization (e.g., memory agents, reactive agents, compliance agents)
  • Cython-compiled backend for performance and IP protection
  • Integration with popular frameworks such as LangChain, Milvus, and OpenAI
  • Support for licensed or embedded deployments via license file verification or compiled-only distribution
  • Configurable with runtime enforcement of execution licenses (RSA-signed, machine-bound)
  • Fail-fast initialization with eager backend verification (LLM ping, vector store connectivity, KGraph health)
  • Comprehensive structured logging with INFO-level startup diagnostics and DEBUG-level per-request tracing

Use Cases

AIgent is designed to serve as a core intelligence engine for:

  • Secure enterprise AI platforms (e.g., QuantumDrive)
  • Compliance monitoring and rule-based alerting systems
  • Conversational interfaces with dynamic tool execution
  • Embedded agents in SaaS and on-premise environments

Requirements

  • Python 3.11+
  • Cython
  • Compatible dependencies (see requirements.txt)

Configuration

AgentFoundry supports two configuration paths. The recommended approach is explicit configuration with AgentConfig.

Explicit config (recommended)

from agentfoundry.utils.agent_config import AgentConfig
from agentfoundry.registry.tool_registry import ToolRegistry
from agentfoundry.agents.orchestrator import Orchestrator

config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_OPENAI_MODEL": "gpt-4o",
    "AF_VECTORSTORE_PROVIDER": "milvus",
    "AF_MILVUS_URI": "http://localhost:19530",
})

registry = ToolRegistry(config=config)
registry.load_tools_from_directory()
orchestrator = Orchestrator(registry, config=config)

Legacy config (backward compatibility)

Set a config file explicitly and/or use environment variables:

export AGENTFOUNDRY_CONFIG_FILE="$HOME/.config/agentfoundry/agentfoundry.toml"
export OPENAI_API_KEY="sk-..."

Then, in Python:

from agentfoundry.utils.agent_config import AgentConfig
config = AgentConfig.from_legacy_config()

See docs/Configuration_Guide.md for full key reference and precedence rules.

Provider notes

  • LLM provider is selected by LLM_PROVIDER (openai, ollama, grok, gemini). OpenAI requires OPENAI_API_KEY when selected.
  • Vector store provider is selected by VECTORSTORE_PROVIDER (milvus, faiss).
    • Milvus: set MILVUS_URI or MILVUS_HOST + MILVUS_PORT.
    • FAISS: requires an existing index at FAISS_INDEX_PATH.
  • ThreadMemory uses OpenAI embeddings by default but falls back to deterministic hash embeddings if AF_DISABLE_OPENAI_EMBEDDINGS=1 or no API key is present.
  • The DuckDB KGraph backend requires duckdb and its ADBC drivers.
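
The selection rules above can be sketched as a small validation helper. This is illustrative only, not library code; the bare key names follow the bullets, and the helper itself is hypothetical:

```python
import os

def check_provider_config(cfg: dict) -> None:
    """Validate the provider settings described above (illustrative sketch)."""
    # OpenAI requires an API key when selected.
    if cfg.get("LLM_PROVIDER") == "openai" and not cfg.get("OPENAI_API_KEY"):
        raise ValueError("OpenAI selected but OPENAI_API_KEY is missing")

    store = cfg.get("VECTORSTORE_PROVIDER")
    if store == "milvus":
        # Milvus accepts either a full URI or a host + port pair.
        if not (cfg.get("MILVUS_URI") or (cfg.get("MILVUS_HOST") and cfg.get("MILVUS_PORT"))):
            raise ValueError("Milvus selected: set MILVUS_URI or MILVUS_HOST + MILVUS_PORT")
    elif store == "faiss":
        # FAISS requires an existing index on disk.
        path = cfg.get("FAISS_INDEX_PATH", "")
        if not os.path.exists(path):
            raise FileNotFoundError(f"FAISS index not found at {path!r}")
```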

Author

Christopher Steel
AI Practice Lead, AlphaSix Corporation
Founder, Syntheticore, Inc.
Email: csteel@syntheticore.com

Licensing and Legal Notice

© Syntheticore, Inc. All rights reserved.

This software is proprietary and confidential.
Any use, reproduction, modification, distribution, or commercial deployment of AIgent or any part thereof requires explicit written authorization from Syntheticore, Inc.

Unauthorized use is strictly prohibited and may result in legal action.


For licensing inquiries or permission to use this software, please contact:
📧 csteel@syntheticore.com

Gradio Chat Interface

A simple Gradio-based chat interface for interacting with the HybridOrchestrator agent.

Prerequisites

  • Ensure you have credentials for your selected LLM provider. For OpenAI:
export OPENAI_API_KEY=<your_api_key>

Running the App

python gradio_app.py

The interface will be available at http://localhost:7860 by default.

API Server

Genie can be accessed programmatically via a FastAPI-based HTTP API. The following endpoints are provided:

  • POST /v1/chat: Send or continue a multi-turn conversation. Accepts JSON payload with conversation history and returns the assistant reply and updated history.
  • POST /v1/orchestrate: Discover APIs and execute a main task across all agents. Returns aggregated results.
  • POST /v1/cancel: Cancel an in-flight request by user_id and thread_id.
  • GET /health: Health check endpoint.
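
For illustration, a request to POST /v1/chat might be built as below. The payload field names (user_id, thread_id, messages) are assumptions, not a documented schema; consult the interactive docs at /docs for the real one:

```python
import json
from urllib import request

# Hypothetical payload shape for POST /v1/chat (check /docs for the real schema).
payload = {
    "user_id": "demo-user",
    "thread_id": "thread-1",
    "messages": [{"role": "user", "content": "What tools are available?"}],
}
body = json.dumps(payload).encode("utf-8")

req = request.Request(
    "http://localhost:8000/v1/chat",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = request.urlopen(req)   # requires a running api_server.py
# reply = json.load(response)
```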

If a backend is unreachable at startup and FAIL_FAST is enabled (the default), the server returns 503 Service Unavailable with a JSON error body.

Prerequisites

  • Ensure you have credentials for your selected LLM provider. For OpenAI:
export OPENAI_API_KEY=<your_api_key>
  • Install FastAPI and Uvicorn (if not already):
pip install fastapi uvicorn[standard]

Running the API

python api_server.py
# Or with auto-reload during development:
uvicorn api_server:app --reload --host 0.0.0.0 --port 8000

Interactive API docs will be available at http://localhost:8000/docs

  • For Microsoft Graph access (entra_tool), forward the SPA's bearer token in the Authorization: Bearer <token> header; the API server injects it into the orchestrator config as entra_user_assertion for on-behalf-of token exchange.

Fail-Fast Initialization

By default, the Orchestrator verifies all backends (LLM, vector store, knowledge graph) at startup. If any backend is unreachable, a FatalInitializationError is raised. This exception inherits from BaseException (not Exception), so it escapes generic except Exception handlers and kills the process — ensuring broken deployments are caught immediately.
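
A minimal sketch of why that inheritance matters, using a stand-in class rather than the library's actual exception:

```python
# Stand-in for agentfoundry's exception: inheriting from BaseException
# (not Exception) lets it bypass generic `except Exception` handlers.
class FatalInitializationError(BaseException):
    pass

caught_by_generic = False
try:
    try:
        raise FatalInitializationError("LLM backend unreachable")
    except Exception:
        # A generic handler does NOT catch a BaseException subclass,
        # so the error keeps propagating.
        caught_by_generic = True
except FatalInitializationError:
    # Only a handler naming the class (or BaseException) catches it.
    pass
```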

To allow degraded startup (e.g. for development without all backends available), set FAIL_FAST to false:

config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_FAIL_FAST": "false",  # warn on backend failures instead of crashing
})

Or via environment variable:

export AF_FAIL_FAST=false

To catch the error explicitly in your application:

from agentfoundry.utils.exceptions import FatalInitializationError

try:
    orchestrator = Orchestrator(registry, config=config)
except FatalInitializationError as exc:
    logger.critical("Backends unavailable: %s", exc)
    # Start in limited mode or exit

The LLM provider is verified with a lightweight ping (llm.invoke([HumanMessage("ping")])) to catch invalid API keys at startup rather than on the first user request. Vector store providers call verify_connectivity() to eagerly test the backend connection.

See docs/Configuration_Guide.md for the full FAIL_FAST reference.

Logging & Debugging

AgentFoundry uses standard Python logging throughout. Every module uses logging.getLogger(__name__) for hierarchical logger naming. If the host application does not configure logging, agentfoundry.utils.logger.get_logger() will create a default log file at ./logs/agentforge.log.

Logging strategy

  • INFO level — Startup and initialization events: backend connections, LLM ping results, config loading, tool registration summaries, and warm-up status. In production, INFO gives visibility that everything started correctly.
  • DEBUG level — Per-request operations: similarity searches, cache hits/misses, tool calls, LangGraph invocations, timing details. Enable for troubleshooting.
  • Timing measurements — Critical operations (LLM invoke, vector store queries, architect planning) are timed with time.perf_counter() and logged with durations in milliseconds.
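
The timing pattern can be sketched like this (illustrative, not the library's internal code; the logger name and the timed operation are placeholders):

```python
import logging
import time

logger = logging.getLogger("agentfoundry.example")

start = time.perf_counter()
result = sum(i * i for i in range(10_000))   # stand-in for llm.invoke(...)
elapsed_ms = (time.perf_counter() - start) * 1000.0

# DEBUG-level, per-request timing with the duration in milliseconds.
logger.debug("llm.invoke completed in %.1f ms", elapsed_ms)
```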

Controlling log level

Set LOG_LEVEL in your config:

config = AgentConfig.from_dict({
    "AF_LOG_LEVEL": "DEBUG",
    # ...
})

Or configure logging directly:

from agentfoundry.utils.logger import setup_logging

setup_logging(level="INFO", logfile="agentfoundry.log")

Notes

  • ThreadMemory falls back to hash embeddings if OpenAI embeddings are unavailable.
  • FAISS provider raises if FAISS_INDEX_PATH does not exist; initialize with your ingestion tooling.
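
The FAISS precondition can be made explicit with a small guard in application code. This is a hypothetical helper, not part of the library:

```python
import os

def require_faiss_index(path: str) -> str:
    """Fail early if the configured FAISS index is missing (illustrative)."""
    if not os.path.exists(path):
        raise FileNotFoundError(
            f"FAISS index not found at {path!r}; run your ingestion tooling first"
        )
    return path
```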

Project details



Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


agentfoundry-1.4.29-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (14.0 MB)

  Uploaded: CPython 3.13, manylinux: glibc 2.17+, x86-64

agentfoundry-1.4.29-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (2.8 MB)

  Uploaded: CPython 3.12, manylinux: glibc 2.17+, x86-64

File details

Details for the file agentfoundry-1.4.29-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for agentfoundry-1.4.29-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl:

  SHA256:      1ce861abbc7443a69e24903236340aa67c4e52a74f28566e89184a70fb9e30b2
  MD5:         3d62b8e32402000d1b310a2d52cbe49e
  BLAKE2b-256: 0e8c6db3d9e31a804cdc1b6a492f007c354f441e97447637f0b7ed718ab37ac7


File details

Details for the file agentfoundry-1.4.29-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for agentfoundry-1.4.29-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl:

  SHA256:      bfade23bfae8ce56011061be929bfcab67fee47eee2b628b4a75964372493ea6
  MD5:         19816ebddc7752906f999f8cf179082e
  BLAKE2b-256: f5edca405babda56f16e5f7b27bd9da39ed09a4dcfdc71ed2ae8399c7f8e9ecd

