AgentFoundry: A modular autonomous AI agent framework


AIgent

AIgent is a modular, extensible AI framework designed to support the construction and orchestration of autonomous agents across a variety of complex tasks. The system is built in Python and leverages modern AI tooling to integrate large language models (LLMs), vector stores, rule-based decision logic, and dynamic tool discovery in secure and performance-conscious environments.

Features

  • Modular agent architecture with support for specialization (e.g., memory agents, reactive agents, compliance agents)
  • Cython-compiled backend for performance and IP protection
  • Integration with popular tooling such as LangChain, Milvus, and OpenAI
  • Support for licensed or embedded deployments via license file verification or compiled-only distribution
  • Configurable with runtime enforcement of execution licenses (RSA-signed, machine-bound)
  • Fail-fast initialization with eager backend verification (LLM ping, vector store connectivity, KGraph health)
  • Comprehensive structured logging with INFO-level startup diagnostics and DEBUG-level per-request tracing

Use Cases

AIgent is designed to serve as a core intelligence engine for:

  • Secure enterprise AI platforms (e.g., QuantumDrive)
  • Compliance monitoring and rule-based alerting systems
  • Conversational interfaces with dynamic tool execution
  • Embedded agents in SaaS and on-premise environments

Requirements

  • Python 3.11+
  • Cython
  • Compatible dependencies (see requirements.txt)

Configuration

AgentFoundry supports two configuration paths. The recommended approach is explicit configuration with AgentConfig.

Explicit config (recommended)

from agentfoundry.utils.agent_config import AgentConfig
from agentfoundry.registry.tool_registry import ToolRegistry
from agentfoundry.agents.orchestrator import Orchestrator

config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_OPENAI_MODEL": "gpt-4o",
    "AF_VECTORSTORE_PROVIDER": "milvus",
    "AF_MILVUS_URI": "http://localhost:19530",
})

registry = ToolRegistry(config=config)
registry.load_tools_from_directory()
orchestrator = Orchestrator(registry, config=config)

Legacy config (backward compatibility)

Set a config file explicitly and/or use environment variables:

export AGENTFOUNDRY_CONFIG_FILE="$HOME/.config/agentfoundry/agentfoundry.toml"
export OPENAI_API_KEY="sk-..."

Then, in Python:

from agentfoundry.utils.agent_config import AgentConfig

config = AgentConfig.from_legacy_config()

See docs/Configuration_Guide.md for full key reference and precedence rules.

Provider notes

  • The LLM provider is selected by LLM_PROVIDER (openai, ollama, grok, gemini). OpenAI requires OPENAI_API_KEY when selected.
  • The vector store provider is selected by VECTORSTORE_PROVIDER (milvus, faiss); a configuration sketch for both backends follows this list.
    • Milvus: set MILVUS_URI, or MILVUS_HOST and MILVUS_PORT.
    • FAISS: requires an existing index at FAISS_INDEX_PATH.
  • ThreadMemory uses OpenAI embeddings by default but falls back to deterministic hash embeddings if AF_DISABLE_OPENAI_EMBEDDINGS=1 or no API key is present.
  • The DuckDB KGraph backend requires duckdb and its ADBC drivers.
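
For illustration, here is how the two vector store backends might be selected with AgentConfig. The AF_FAISS_INDEX_PATH key name is an assumption mirroring the AF_ prefix convention; check docs/Configuration_Guide.md for the authoritative key names.

from agentfoundry.utils.agent_config import AgentConfig

# Milvus-backed configuration (URI form):
milvus_cfg = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_VECTORSTORE_PROVIDER": "milvus",
    "AF_MILVUS_URI": "http://localhost:19530",
})

# FAISS-backed configuration; the index must already exist on disk.
faiss_cfg = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_VECTORSTORE_PROVIDER": "faiss",
    "AF_FAISS_INDEX_PATH": "/var/lib/agentfoundry/faiss.index",  # assumed key name
})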

Author

Christopher Steel
AI Practice Lead, AlphaSix Corporation
Founder, Syntheticore, Inc.
Email: csteel@syntheticore.com

Licensing and Legal Notice

© Syntheticore, Inc. All rights reserved.

This software is proprietary and confidential.
Any use, reproduction, modification, distribution, or commercial deployment of AIgent or any part thereof requires explicit written authorization from Syntheticore, Inc.

Unauthorized use is strictly prohibited and may result in legal action.


For licensing inquiries or permission to use this software, please contact:
📧 csteel@syntheticore.com

Gradio Chat Interface

A simple Gradio-based chat interface for interacting with the HybridOrchestrator agent.

Prerequisites

  • Ensure you have credentials for your selected LLM provider. For OpenAI:
export OPENAI_API_KEY=<your_api_key>

Running the App

python gradio_app.py

The interface will be available at http://localhost:7860 by default.
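
For reference, a minimal Gradio chat app has roughly this shape; the responder below is a placeholder, since the shipped gradio_app.py owns the real HybridOrchestrator wiring:

import gradio as gr

def respond(message: str, history: list) -> str:
    # Placeholder: route the message through your HybridOrchestrator
    # instance here; the framework's actual chat method is not shown.
    return f"echo: {message}"

if __name__ == "__main__":
    gr.ChatInterface(fn=respond, title="AgentFoundry Chat").launch()  # port 7860 by default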

API Server

AgentFoundry can be accessed programmatically via a FastAPI-based HTTP API. The following endpoints are provided:

  • POST /v1/chat: Send or continue a multi-turn conversation. Accepts a JSON payload with the conversation history and returns the assistant reply and updated history.
  • POST /v1/orchestrate: Discover APIs and execute a main task across all agents. Returns aggregated results.
  • POST /v1/cancel: Cancel an in-flight request by user_id and thread_id.
  • GET /health: Health check endpoint.

If a backend is unreachable at startup and FAIL_FAST is enabled (the default), the server returns 503 Service Unavailable with a JSON error body.
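
Once the server is running (see below), a chat request looks roughly like this. The payload field names are assumptions based on the endpoint descriptions above; consult the interactive docs at /docs for the real schema:

import requests

payload = {
    "user_id": "demo-user",        # assumed field name
    "thread_id": "demo-thread",    # assumed field name
    "history": [{"role": "user", "content": "Hello"}],  # assumed field name
}
resp = requests.post("http://localhost:8000/v1/chat", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())  # assistant reply plus updated history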

Prerequisites

  • Ensure you have credentials for your selected LLM provider. For OpenAI:
export OPENAI_API_KEY=<your_api_key>
  • Install FastAPI and Uvicorn (if not already):
pip install fastapi "uvicorn[standard]"

Running the API

python api_server.py
# Or with auto-reload during development:
uvicorn api_server:app --reload --host 0.0.0.0 --port 8000

Interactive API docs will be available at http://localhost:8000/docs

  • For Microsoft Graph access (entra_tool), forward the SPA's bearer token in the Authorization: Bearer <token> header; the API server injects it into the orchestrator config as entra_user_assertion for on-behalf-of token exchange.
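
A sketch of forwarding that token from a Python client (the header is standard HTTP; the server-side injection into entra_user_assertion happens on the server). The payload schema is assumed, as above:

import requests

spa_token = "<token-from-your-SPA>"
headers = {"Authorization": f"Bearer {spa_token}"}
requests.post(
    "http://localhost:8000/v1/chat",
    json={"history": [{"role": "user", "content": "List my recent emails"}]},  # assumed schema
    headers=headers,
    timeout=60,
)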

Fail-Fast Initialization

By default, the Orchestrator verifies all backends (LLM, vector store, knowledge graph) at startup. If any backend is unreachable, a FatalInitializationError is raised. This exception inherits from BaseException (not Exception), so it escapes generic except Exception handlers and kills the process — ensuring broken deployments are caught immediately.
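
The distinction matters in practice: a broad except Exception will not suppress the failure, as this minimal illustration shows:

try:
    orchestrator = Orchestrator(registry, config=config)
except Exception:
    # Never reached for backend failures: FatalInitializationError
    # derives from BaseException and propagates past this handler.
    pass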

To allow degraded startup (e.g. for development without all backends available), set FAIL_FAST to false:

config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_FAIL_FAST": "false",  # warn on backend failures instead of crashing
})

Or via environment variable:

export AF_FAIL_FAST=false

To catch the error explicitly in your application:

from agentfoundry.utils.exceptions import FatalInitializationError

try:
    orchestrator = Orchestrator(registry, config=config)
except FatalInitializationError as exc:
    logger.critical("Backends unavailable: %s", exc)
    # Start in limited mode or exit

The LLM provider is verified with a lightweight ping (llm.invoke([HumanMessage("ping")])) to catch invalid API keys at startup rather than on the first user request. Vector store providers call verify_connectivity() to eagerly test the backend connection.

See docs/Configuration_Guide.md for the full FAIL_FAST reference.

Logging & Debugging

AgentFoundry uses standard Python logging throughout. Every module uses logging.getLogger(__name__) for hierarchical logger naming. If the host application does not configure logging, agentfoundry.utils.logger.get_logger() will create a default log file at ./logs/agentforge.log.
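
Because loggers follow the package hierarchy, a host application can control the framework's verbosity with the standard library alone, for example:

import logging

logging.basicConfig(level=logging.WARNING)                 # quiet default for everything else
logging.getLogger("agentfoundry").setLevel(logging.DEBUG)  # verbose output from the framework only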

Logging strategy

  • INFO level — Startup and initialization events: backend connections, LLM ping results, config loading, tool registration summaries, and warm-up status. In production, INFO gives visibility that everything started correctly.
  • DEBUG level — Per-request operations: similarity searches, cache hits/misses, tool calls, LangGraph invocations, timing details. Enable for troubleshooting.
  • Timing measurements — Critical operations (LLM invoke, vector store queries, architect planning) are timed with time.perf_counter() and logged with durations in milliseconds.
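
The timing idiom referenced above is standard perf_counter arithmetic; a representative sketch, not the framework's literal code:

import logging
import time

logger = logging.getLogger(__name__)

def timed_call(label, fn, *args, **kwargs):
    # Run fn, log its wall-clock duration in milliseconds, and return its result.
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    logger.debug("%s completed in %.1f ms", label, elapsed_ms)
    return result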

Controlling log level

Set LOG_LEVEL in your config:

config = AgentConfig.from_dict({
    "AF_LOG_LEVEL": "DEBUG",
    # ...
})

Or configure logging directly:

from agentfoundry.utils.logger import setup_logging

setup_logging(level="INFO", logfile="agentfoundry.log")

Notes

  • ThreadMemory falls back to deterministic hash embeddings if OpenAI embeddings are unavailable (an illustrative sketch follows these notes).
  • FAISS provider raises if FAISS_INDEX_PATH does not exist; initialize with your ingestion tooling.
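
For intuition, a deterministic hash embedding can be as simple as expanding repeated SHA-256 digests into a fixed-length float vector. This is an illustrative sketch only, not ThreadMemory's actual fallback:

import hashlib
import struct

def hash_embed(text: str, dim: int = 64) -> list[float]:
    # Deterministically derive `dim` floats in [0, 1) from the input text.
    out: list[float] = []
    counter = 0
    while len(out) < dim:
        digest = hashlib.sha256(f"{counter}:{text}".encode()).digest()
        for i in range(0, len(digest), 4):
            (val,) = struct.unpack(">I", digest[i:i + 4])
            out.append(val / 2**32)
        counter += 1
    return out[:dim]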
