Alloy (Python)

Python for logic. English for intelligence.

Alloy lets you write typed AI functions. Decorate a Python function with @command(output=MyType), call any supported model, and get a MyType back — enforced via provider‑native structured outputs. Add Python tools with design‑by‑contract to keep agent loops reliable.


This repository contains an early scaffold of the Alloy library per alloy-spec-v1.md.

Stability: OpenAI backend is Stable; Anthropic is Beta; Gemini/Ollama are Experimental. “Stable” isn’t bug‑free — issues are tracked and fixed as prioritized.

Example: CSV to API

from alloy import command
import pandas as pd
import requests

@command(output=list[dict])
def csv_to_api(df: pd.DataFrame, endpoint_example: str) -> str:
    """Intelligently map CSV columns to API format."""
    return f"Map this data {df.head()} to match API: {endpoint_example}"

df = pd.read_csv("messy_customer_data.csv")
api_calls = csv_to_api(df, "POST /customers {fullName, emailAddress, subscriptionTier}")
for payload in api_calls:
    requests.post("https://api.your-saas.com/customers", json=payload)

This example maps rows from a DataFrame into request payloads for an API, returning a typed list[dict] you can post.

Quick start

  • Install (all providers): pip install 'alloy-ai[providers]'
  • Or minimal (OpenAI only): pip install alloy-ai
  • Create .env with OPENAI_API_KEY=...
  • Use the API:
from dataclasses import dataclass
from dotenv import load_dotenv
from alloy import command, ask, configure

load_dotenv()
# Optional: configure() — default model is `gpt-5-mini` if omitted
# configure(model="gpt-5-mini", temperature=0.7)

@command(output=float)
def extract_price(text: str) -> str:
    """Extract price from text."""
    return f"Extract the price (number only) from: {text}"

print(extract_price("This item costs $49.99."))
print(ask("Say hi"))

Enforcing outputs

Design by Contract (tools)

from alloy import tool, require, ensure

@tool
@require(lambda ba: "validated_at" in ba.arguments["data"], "run validate_data first")
@ensure(lambda ok: ok is True, "save must succeed")
def save_to_production(data: dict) -> bool:
    # `ba` is the call's bound arguments: the precondition rejects data that
    # has not been validated; the postcondition checks the return value.
    return True

Contract failures surface as tool output, allowing the model to self‑correct.

Progressive path

  • Start exploratory: ask("...")
  • Add a command: @command → returns str
  • Enforce types: @command(output=T)
  • Add tools + DBC: @command(output=T, tools=[...]) with @require/@ensure

Notes

  • OpenAI backend is implemented for sync/async/streaming.
  • Streaming with tools is not yet supported.
  • Structured outputs: Alloy uses provider JSON Schema features (OpenAI/Anthropic/Gemini). See Enforcing outputs above.
  • Configuration defaults: Alloy uses model=gpt-5-mini if configure(...) is not called. You can also set process environment variables instead of a .env file:
    • ALLOY_MODEL, ALLOY_TEMPERATURE, ALLOY_MAX_TOKENS, ALLOY_SYSTEM/ALLOY_DEFAULT_SYSTEM, ALLOY_RETRY, ALLOY_MAX_TOOL_TURNS.
    • Example: export ALLOY_MODEL=gpt-4o then run your script.
  • Output types today: primitives and dataclasses (strict mode); TypedDict outputs planned.
  • OpenAI strict mode: if a tool loop completes without a final structured output, Alloy issues one follow‑up turn (no tools) to finalize; then raises if still missing.

Examples

  • See examples/basic_usage.py, examples/tools_demo.py (tools + contracts), and examples/csv_to_api.py.

Optional: offline dev tip

  • For local demos without network/API keys, set ALLOY_BACKEND=fake (not for production).
  • Example: ALLOY_BACKEND=fake python examples/basic_usage.py

Config precedence

  • Defaults: model=gpt-5-mini, max_tool_turns=2 (safe default).
  • Process env (ALLOY_*) overrides defaults.
  • Context/use_config and configure(...) override env/defaults.
  • Per-call overrides (e.g., ask(..., model=...)) override everything above.

Make targets

  • make setup — install dev deps and package in editable mode.
  • make test, make lint, make typecheck — CI-like checks.
  • make examples — runs examples/basic_usage.py and examples/tools_demo.py.
    • Optional offline: ALLOY_BACKEND=fake make examples

Troubleshooting

  • API key: Ensure OPENAI_API_KEY is set (process env or .env).
  • Model choice: Prefer gpt-5-mini for the lowest latency; switch via configure(model=...) or ALLOY_MODEL.
  • Timeouts/slow runs: Reduce max_tokens, lower temperature, prefer smaller models, and cap tool loops.
  • Tool loops: Default limit is 2. Adjust via configure(max_tool_turns=...) or env ALLOY_MAX_TOOL_TURNS.
  • Rate limits (429): Shorten prompts/outputs, add retries with backoff, or use lower-throughput settings.

Observability

Integration tests

  • OpenAI: Set OPENAI_API_KEY (and optionally ALLOY_IT_MODEL, default gpt-5-mini). Run pytest -q — OpenAI integration tests auto-enable.
  • Anthropic: Set ANTHROPIC_API_KEY and ALLOY_IT_MODEL=claude-sonnet-4-20250514 (or another Claude like claude-3.7-sonnet). Run pytest -q — Anthropic integration tests auto-enable.
  • Gemini: Set GOOGLE_API_KEY and ALLOY_IT_MODEL=gemini-2.5-pro (or gemini-2.5-flash). Run pytest -q — Gemini integration tests auto-enable.
    • SDK note: Gemini support uses google-genai (GA).

How to run locally

  • Install providers bundle: pip install 'alloy-ai[providers]'
  • Create .env with OPENAI_API_KEY=...
  • Option A (no install):
    • python examples/basic_usage.py
    • python examples/tools_demo.py
    • (examples add src/ to sys.path for you)
  • Option B (editable install):
    • pip install -e '.[providers]'
    • Then run from anywhere.

.env example

OPENAI_API_KEY=sk-...

Support matrix (v1)

  • OpenAI (GPT-4/5 and o-series): completions, typed commands, ask, streaming (no tools in stream), tool-calling, structured JSON for object schemas, tool-loop cap.
  • Anthropic (Claude 3.7 / Sonnet 4 / Opus 4/4.1): completions and tool-calling loop (no streaming yet).
  • Google (Gemini 2.5 Pro/Flash): basic completions (no tools/streaming in scaffold). Uses google-genai by default.
  • Ollama (local): basic completions via model="ollama:<name>" (no tools/streaming in scaffold).
  • ReAct fallback: not implemented yet (planned for local models/LLMs without native tools).

Install options

  • Base: pip install alloy-ai (includes OpenAI + python-dotenv).
  • All providers: pip install 'alloy-ai[providers]' (OpenAI, Anthropic, Gemini via google-genai, Ollama).
  • Specific extras: pip install 'alloy-ai[anthropic]', pip install 'alloy-ai[gemini]', pip install 'alloy-ai[ollama]'.

Documentation

  • Full docs: https://lydakis.github.io/alloy/

Why Alloy vs X

  • Raw SDKs: Minimal glue, limited structure handling. Alloy provides typed outputs, provider routing, and a simple tool loop.
  • LangChain: Rich orchestration features and chains. Alloy stays minimal: Python functions that return typed results without introducing a new framework.
  • Instructor/Pydantic: Strong for OpenAI JSON typing. Alloy generalizes the idea across providers (OpenAI, Anthropic, Gemini), adds tools/retry/routing, and surfaces clear errors when structure cannot be enforced (with a single auto‑finalize turn on OpenAI when needed).
  • DSPy/Program synthesis: Optimizes pipelines and prompts. Alloy focuses on straightforward, production‑oriented building blocks: short prompts, typed outputs, and predictable defaults.
  • Guidance/templating: Emphasizes prompt templates. Alloy emphasizes typed commands and provider structured outputs with clear error handling.
  • Summary: Small API surface, provider‑agnostic backends, typed outputs by default, and optional tools — compose with normal Python.

LOC comparison (CSV → API payloads)

Raw SDK (conceptual):

import json
import pandas as pd
from openai import OpenAI

client = OpenAI()
df = pd.read_csv("customers.csv")
messages = [{"role": "user", "content": f"Map data {df.head()} to {{fullName, emailAddress}}"}]
resp = client.chat.completions.create(model="gpt-4o", messages=messages)
payloads = json.loads(resp.choices[0].message.content)

Alloy:

from alloy import command
import pandas as pd

@command(output=list[dict])
def csv_to_api(df: pd.DataFrame, example: str) -> str:
    return f"Map data {df.head()} to {example}"

payloads = csv_to_api(pd.read_csv("customers.csv"), "POST /customers {fullName, emailAddress}")

Releases

  • Changelog: CHANGELOG.md
  • Publishing: Create a tag like v0.1.1 on main — CI builds and uploads to PyPI (needs Trusted Publishing for alloy-ai or a configured token).
