Delfhos

Python SDK for building AI agents that use real tools — Gmail, SQL, Drive, Sheets, REST APIs, and your own functions — with safe, human-in-the-loop execution.

Full documentation at delfhos.com/docs


How it works

You describe a task in plain English. Delfhos:

  1. Picks the relevant tools from the ones you configured
  2. Writes Python code to accomplish the task
  3. Executes that code in a sandbox against your real services
  4. Retries automatically if something fails

You stay in control: restrict which actions each tool can take, and require approval before any write, send, or delete.
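To picture the approval flow (this is an illustrative sketch, not Delfhos's internals; all names here are hypothetical), a confirmation gate can be modeled as a wrapper that consults an approver before running any action listed in confirm:

```python
# Hypothetical human-in-the-loop gate, for intuition only.
def gated(action_name, fn, confirm_actions, approver):
    """Run fn only if the action is ungated or the approver says yes."""
    def wrapper(*args, **kwargs):
        if action_name in confirm_actions and not approver(action_name):
            return {"status": "rejected", "action": action_name}
        return fn(*args, **kwargs)
    return wrapper

# Auto-deny for the demo; a real approver would prompt a human.
send = gated("send", lambda to: f"sent to {to}", {"send", "delete"},
             approver=lambda action: False)
print(send("alice@example.com"))  # {'status': 'rejected', 'action': 'send'}
```

Reads like "read" that are not in confirm_actions pass straight through, so the agent only pauses on the actions you flagged.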


Install

pip install delfhos

API Key

Delfhos supports Gemini, OpenAI, and Anthropic models. Export the key for the provider you want to use:

export GOOGLE_API_KEY="..."    # Gemini
export OPENAI_API_KEY="..."    # OpenAI
export ANTHROPIC_API_KEY="..."  # Claude

Try it instantly (no credentials needed)

The sandbox tools come pre-loaded with dummy data so you can run your first agent right now:

from delfhos import Agent
from delfhos.sandbox import MockEmail, MockDatabase

agent = Agent(
    tools=[MockEmail(confirm=False), MockDatabase(confirm=False)],
    llm="gemini-3.1-flash-lite-preview",
)

agent.run(
    "Read my unread emails. If any mention a support ticket, "
    "look it up in the database and summarise the customer name, "
    "open tickets, and total order value."
)
agent.stop()

Or just run the included example:

python examples/hello_delfhos.py

What it looks like end-to-end:

Input ──────────────────────────────────────────────────────────
"Read my unread emails. If any mention a support ticket,
 look it up in the database and reply with a short summary of
 the customer's name, their open tickets, and their total order value."

Agent ──────────────────────────────────────────────────────────
  [tool]  MockEmail.list_unread_emails()
  [tool]  MockDatabase.query("SELECT * FROM tickets WHERE id = 'TCK8843'")
  [tool]  MockDatabase.query("SELECT * FROM users WHERE email = 'alice@example.com'")
  [tool]  MockDatabase.query("SELECT SUM(amount) FROM orders WHERE user_id = 1")
  [tool]  MockEmail.send_email(to="alice@example.com", subject="Re: Overdue invoice")

Output ─────────────────────────────────────────────────────────
Sent a reply to alice@example.com.

Summary:
  Customer:     Alice (alice@example.com)
  Open tickets: TCK8843 — "Invoice #1042 overdue" (open)
  Total orders: $2,340.00

The returned Response exposes the same result as structured fields:

r = agent.run("...")
print(r.text)        # "Sent a reply to alice@example.com. Summary: ..."
print(r.status)      # True
print(r.cost_usd)    # 0.00021
print(r.duration_ms) # 3847

Custom tools

Decorate any Python function with @tool and the agent can call it:

from delfhos import Agent, tool

@tool
def calculate_discount(price: float, pct: float) -> float:
    """Return price after applying a percentage discount."""
    return price * (1 - pct / 100)

agent = Agent(tools=[calculate_discount], llm="gemini-3.1-flash-lite-preview")
agent.run("What is the price of a $120 item with a 15% discount?")
agent.stop()
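Under the hood, a decorator like @tool typically reads the function's signature and docstring so the model knows what the tool does and what arguments it takes. A minimal, self-contained sketch of that idea (not Delfhos's actual implementation):

```python
import inspect

def describe_tool(fn):
    """Build a simple tool schema from a function's signature and docstring."""
    sig = inspect.signature(fn)
    params = {
        name: (p.annotation.__name__
               if p.annotation is not inspect.Parameter.empty else "any")
        for name, p in sig.parameters.items()
    }
    return {"name": fn.__name__, "doc": inspect.getdoc(fn), "params": params}

def calculate_discount(price: float, pct: float) -> float:
    """Return price after applying a percentage discount."""
    return price * (1 - pct / 100)

print(describe_tool(calculate_discount))
# {'name': 'calculate_discount',
#  'doc': 'Return price after applying a percentage discount.',
#  'params': {'price': 'float', 'pct': 'float'}}
```

Type annotations and the docstring are what the model sees, which is why well-annotated, well-documented functions make better tools.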

Built-in tools

from delfhos import Gmail, SQL, Sheets, Drive, Calendar, Docs, WebSearch, APITool
gmail = Gmail(oauth_credentials="client_secrets.json", allow=["read", "send"], confirm=["send"])
db    = SQL(url="postgresql://user:pass@host/db",       allow=["schema", "query"])
drive = Drive(oauth_credentials="client_secrets.json",  confirm=True)

agent = Agent(tools=[gmail, db, drive], llm="gemini-3.1-flash-lite-preview")
agent.run("Check unread emails and log any order mentions to the database.")
agent.stop()

  • allow — restricts which actions are available on the tool (["read", "send"], ["schema", "query"], …).
  • confirm — when human approval is required: True (all actions), False (none), or a list of specific actions.
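Those three confirm shapes reduce to one question: does this action need approval? A hypothetical normaliser, shown only to make the semantics concrete:

```python
def needs_confirmation(action, confirm):
    """Interpret confirm as True (all actions), False/None (none), or a list."""
    if confirm is True:
        return True
    if confirm is False or confirm is None:
        return False
    return action in confirm

assert needs_confirmation("send", True)           # True gates everything
assert not needs_confirmation("read", False)      # False gates nothing
assert needs_confirmation("send", ["send"])       # lists gate listed actions
assert not needs_confirmation("read", ["send"])
```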


REST API Integration (APITool)

Connect any REST API with an OpenAPI 3.x specification — no custom code needed.

from delfhos import Agent, APITool

# From a public OpenAPI spec
petstore = APITool(
    spec="https://petstore3.swagger.io/api/v3/openapi.json",
    allow=["list_pets", "get_pet_by_id"],
    confirm=["create_pet", "delete_pet"],
)

# From a local spec with authentication
internal = APITool(
    spec="./openapi.yaml",
    base_url="https://api.internal.corp/v1",
    headers={"Authorization": "Bearer sk_..."},
)

# Auto-inject fixed path variables (e.g., company/org IDs baked into URLs)
adobe = APITool(
    spec="./adobe_analytics.json",
    base_url="https://analytics.adobe.io",
    headers={"Authorization": "Bearer ...", "x-api-key": "..."},
    path_params={"globalCompanyId": "mycompany"},  # injected into /api/{globalCompanyId}/...
)

# Inspect available endpoints
print(petstore.inspect())  # Compact: endpoint names
print(petstore.inspect(verbose=True))  # Detailed: methods, paths, descriptions

agent = Agent(tools=[petstore, internal], llm="gemini-2.5-flash")
agent.run("List all pets and create a new one named 'Buddy'")
agent.stop()

Features:

  • Automatic endpoint compilation from OpenAPI specs (no LLM needed)
  • Path, query, and request body parameters extracted and typed
  • headers=, params=, and path_params= injected automatically — agent never sees credentials or fixed path variables
  • $ref resolution for complex schemas
  • allow= and confirm= support for fine-grained access control
  • Caching: specs compiled once and cached to ~/delfhos/api_cache/
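For intuition on the $ref point: resolving an internal reference in an OpenAPI document amounts to walking a JSON pointer path through the spec. A simplified resolver, assuming only local "#/..." references (real resolution also handles remote files and nesting):

```python
def resolve_ref(spec, ref):
    """Resolve an internal JSON reference like '#/components/schemas/Pet'."""
    node = spec
    for part in ref.lstrip("#/").split("/"):
        node = node[part]
    return node

spec = {"components": {"schemas": {"Pet": {
    "type": "object",
    "properties": {"name": {"type": "string"}},
}}}}
print(resolve_ref(spec, "#/components/schemas/Pet")["type"])  # object
```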

Interactive chat

from delfhos import Agent, Chat, Gmail

agent = Agent(
    tools=[Gmail(oauth_credentials="client_secrets.json")],
    llm="gemini-3.1-flash-lite-preview",
    chat=Chat(summarizer_llm="gemini-3.1-flash-lite-preview"),
)

agent.run_chat()  # starts a terminal session — type /help for commands

Memory & Long-term Context

Delfhos supports both session memory and persistent semantic memory with 100+ embedding models.

from delfhos import Agent, Chat, Memory

agent = Agent(
    tools=[...],
    llm="gemini-3.1-flash-lite-preview",
    chat=Chat(keep=8, summarize=True, namespace="my_agent"),    # short-term
    memory=Memory(namespace="my_agent"),                         # long-term semantic
)

100+ Embedding Models: Automatic detection and compatibility for:

  • Proprietary: OpenAI, Cohere, Anthropic, Google
  • Open-source: Sentence-Transformers (MiniLM, all-MiniLM, all-mpnet, etc.)
  • Specialized: BGE models (BAAI), Jina, Nomic Embed, NV-Embed
  • Local-first: Run models locally via Ollama or Hugging Face Transformers

Auto-detects model requirements:

  • trust_remote_code toggles (for BGE, Jina, etc.)
  • Instruction/prefix tokens (e.g., Nomic's "search_document:" prefix)
  • Model dimensions (inferred after loading)

See EMBEDDING_MODELS_GUIDE.md for the full compatibility matrix.
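The prefix handling can be pictured as a lookup from model family to required instruction token. A toy version of that idea; the mapping below is an illustration, not Delfhos's actual detection table:

```python
# Hypothetical prefix rules; Nomic's "search_document:" prefix is real,
# but the detection logic here is a simplification.
PREFIX_RULES = {
    "nomic": {"document": "search_document: ", "query": "search_query: "},
}

def prepare_text(model_name, text, role="document"):
    """Prepend the instruction prefix a model family expects, if any."""
    for family, prefixes in PREFIX_RULES.items():
        if family in model_name.lower():
            return prefixes[role] + text
    return text

print(prepare_text("nomic-embed-text-v1.5", "hello"))  # search_document: hello
print(prepare_text("all-MiniLM-L6-v2", "hello"))       # hello
```

Getting these prefixes wrong silently degrades retrieval quality, which is why automatic detection matters.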


Response object

agent.run() returns a Response with the result, status, cost, and trace:

r = agent.run("How many users signed up this week?")

print(r.text)        # agent's answer
print(r.status)      # True if task succeeded
print(r.cost_usd)    # cost in dollars (e.g. 0.0003)
print(r.duration_ms) # wall-clock time in milliseconds
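Because every run reports its own cost, tracking spend across a batch is simple arithmetic. A sketch using a stand-in dataclass that mirrors the fields above (illustrative only, not the SDK's Response class):

```python
from dataclasses import dataclass

# Minimal stand-in for the documented Response fields.
@dataclass
class Response:
    text: str
    status: bool
    cost_usd: float
    duration_ms: int

def total_cost(responses):
    """Sum the dollar cost of a batch of runs."""
    return sum(r.cost_usd for r in responses)

runs = [Response("ok", True, 0.0003, 1200), Response("ok", True, 0.0002, 900)]
print(round(total_cost(runs), 4))  # 0.0005
```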

Model support

Cloud providers: Gemini, OpenAI, or Anthropic

# Gemini
agent = Agent(tools=[...], llm="gemini-2.0-flash-lite")
agent = Agent(tools=[...], llm="gemini-2.0-flash")

# OpenAI
agent = Agent(tools=[...], llm="gpt-5")
agent = Agent(tools=[...], llm="gpt-4o")

# Anthropic
agent = Agent(tools=[...], llm="claude-4-5-haiku")
agent = Agent(tools=[...], llm="claude-4-6-sonnet")

Local & custom models: Use LLMConfig for any OpenAI-compatible endpoint

from delfhos import Agent, LLMConfig

# Local Ollama model
agent = Agent(
    tools=[...],
    llm=LLMConfig(model="llama3.2", base_url="http://localhost:11434/v1")
)

# Enterprise vLLM server
agent = Agent(
    tools=[...],
    llm=LLMConfig(
        model="mistral-7b-instruct",
        base_url="https://llm.corp.internal/v1",
        api_key="internal-token"
    )
)

# Any OpenAI-compatible provider (Groq, Together, Anyscale, etc.)
agent = Agent(
    tools=[...],
    llm=LLMConfig(
        model="meta-llama/Llama-3-70b-chat-hf",
        base_url="https://api.together.xyz/v1",
        api_key="..."
    )
)

Dual-LLM optimization: pair a fast local model with a strong cloud model

agent = Agent(
    tools=[...],
    light_llm=LLMConfig(model="qwen2.5:7b", base_url="http://localhost:11434/v1"),
    heavy_llm="gemini-2.5-flash",  # or Claude, OpenAI, etc.
)
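One way to think about the split: cheap, routine steps go to the light model while planning and error recovery go to the heavy one. A toy router under that assumption (not Delfhos's actual routing policy; step names are hypothetical):

```python
def pick_model(step_kind, light="qwen2.5:7b", heavy="gemini-2.5-flash"):
    """Route simple steps to the light model, hard steps to the heavy one."""
    heavy_steps = {"plan", "recover_from_error", "final_answer"}
    return heavy if step_kind in heavy_steps else light

print(pick_model("summarize_tool_output"))  # qwen2.5:7b
print(pick_model("plan"))                   # gemini-2.5-flash
```

The payoff is cost: the bulk of an agent's calls are routine, so most tokens flow through the cheap local model.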

Context manager

The agent cleans up automatically when used as a context manager:

with Agent(tools=[...], llm="gemini-3.1-flash-lite-preview") as agent:
    agent.run("Summarise last week's sales and email it to the team.")

For the full API reference and advanced guides see DOCS.md or delfhos.com/docs.

License

Apache-2.0
