Alloy (Python)
Python for logic. English for intelligence.
Alloy lets you write typed AI functions. Decorate a Python function with
@command(output=MyType), call any supported model, and get a MyType back —
enforced via provider‑native structured outputs. Add Python tools with
design‑by‑contract to keep agent loops reliable.
License: MIT
This repository contains an early scaffold of the Alloy library per alloy-spec-v1.md.
Stability: OpenAI backend is Stable; Anthropic is Beta; Gemini/Ollama are Experimental. “Stable” isn’t bug‑free — issues are tracked and fixed as prioritized.
Example: CSV to API
from alloy import command
import pandas as pd
import requests

@command(output=list[dict])
def csv_to_api(df: pd.DataFrame, endpoint_example: str) -> str:
    """Intelligently map CSV columns to API format."""
    return f"Map this data {df.head()} to match API: {endpoint_example}"

df = pd.read_csv("messy_customer_data.csv")
api_calls = csv_to_api(df, "POST /customers {fullName, emailAddress, subscriptionTier}")
for payload in api_calls:
    requests.post("https://api.your-saas.com/customers", json=payload)
This example maps rows from a DataFrame into request payloads for an API, returning a typed list[dict] you can post.
Quick start
- Install (all providers): `pip install 'alloy-ai[providers]'`
- Or minimal (OpenAI only): `pip install alloy-ai`
- Create `.env` with `OPENAI_API_KEY=...`
- Use the API:
from dataclasses import dataclass
from dotenv import load_dotenv
from alloy import command, ask, configure
load_dotenv()
# Optional: configure() — default model is `gpt-5-mini` if omitted
# configure(model="gpt-5-mini", temperature=0.7)
@command(output=float)
def extract_price(text: str) -> str:
"""Extract price from text."""
return f"Extract the price (number only) from: {text}"
print(extract_price("This item costs $49.99."))
print(ask("Say hi"))
Enforcing outputs
- Alloy uses provider‑native structured outputs (JSON Schema) to enforce the expected shape. If parsing fails, you get a clear, typed error.
- Docs: https://lydakis.github.io/alloy/outputs/
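The idea can be sketched in plain Python. The helper below is hypothetical and is not Alloy's internals; it just shows the contract Alloy enforces for you: parse the model's JSON reply against a declared dataclass shape and fail loudly on mismatch instead of returning malformed data.

```python
import json
from dataclasses import dataclass, fields

@dataclass
class Price:
    amount: float
    currency: str

def parse_structured(raw: str, cls):
    """Parse a JSON reply into a dataclass, raising a clear error on shape mismatch."""
    data = json.loads(raw)
    expected = {f.name for f in fields(cls)}
    if set(data) != expected:
        raise TypeError(f"expected fields {expected}, got {set(data)}")
    return cls(**data)

price = parse_structured('{"amount": 49.99, "currency": "USD"}', Price)
```

With provider-native structured outputs, the schema is enforced on the provider side as well, so the parse step rarely fails in practice.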
Design by Contract (tools)
from alloy import tool, require, ensure
@tool
@require(lambda ba: "validated_at" in ba.arguments["data"], "run validate_data first")
@ensure(lambda ok: ok is True, "save must succeed")
def save_to_production(data: dict) -> bool:
    return True
Contract failures surface as tool output, allowing the model to self‑correct.
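Conceptually, `@require` and `@ensure` are thin pre/postcondition decorators. The sketch below is an illustration of the pattern, not Alloy's actual implementation: `@require` checks a predicate against the call's bound arguments, `@ensure` checks the return value, and a failure raises a message the agent loop can feed back to the model.

```python
import functools
import inspect

def require(predicate, message):
    """Precondition: check a predicate against the call's bound arguments."""
    def deco(fn):
        sig = inspect.signature(fn)
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            ba = sig.bind(*args, **kwargs)
            ba.apply_defaults()
            if not predicate(ba):
                raise ValueError(message)  # in Alloy this surfaces as tool output
            return fn(*args, **kwargs)
        return wrapper
    return deco

def ensure(predicate, message):
    """Postcondition: check a predicate against the return value."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if not predicate(result):
                raise ValueError(message)
            return result
        return wrapper
    return deco

@require(lambda ba: "validated_at" in ba.arguments["data"], "run validate_data first")
@ensure(lambda ok: ok is True, "save must succeed")
def save_to_production(data: dict) -> bool:
    return True
```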
Progressive path
- Start exploratory: `ask("...")`
- Add a command: `@command` → returns `str`
- Enforce types: `@command(output=T)`
- Add tools + DBC: `@command(output=T, tools=[...])` with `@require`/`@ensure`
Notes
- OpenAI backend is implemented for sync/async/streaming.
- Streaming with tools is not yet supported.
- Structured outputs: Alloy uses provider JSON Schema features (OpenAI/Anthropic/Gemini). See Enforcing outputs above.
- Configuration defaults: Alloy uses `model=gpt-5-mini` if `configure(...)` is not called. You can also set process environment variables instead of a `.env` file: `ALLOY_MODEL`, `ALLOY_TEMPERATURE`, `ALLOY_MAX_TOKENS`, `ALLOY_SYSTEM`/`ALLOY_DEFAULT_SYSTEM`, `ALLOY_RETRY`, `ALLOY_MAX_TOOL_TURNS`.
- Example: `export ALLOY_MODEL=gpt-4o`, then run your script.
- Output types today: primitives and dataclasses (strict mode); TypedDict outputs planned.
- OpenAI strict mode: if a tool loop completes without a final structured output, Alloy issues one follow‑up turn (no tools) to finalize; then raises if still missing.
Examples
- See `examples/basic_usage.py`, `examples/tools_demo.py` (tools + contracts), and `examples/csv_to_api.py`.
Optional: offline dev tip
- For local demos without network/API keys, set `ALLOY_BACKEND=fake` (not for production).
- Example: `ALLOY_BACKEND=fake python examples/basic_usage.py`
Config precedence
- Defaults: `model=gpt-5-mini`, `max_tool_turns=2` (safe default).
- Process env (`ALLOY_*`) overrides defaults.
- Context/`use_config` and `configure(...)` override env/defaults.
- Per-call overrides (e.g., `ask(..., model=...)`) override everything above.
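The precedence rules above amount to a layered lookup. This is a minimal sketch of the behavior (the `resolve` helper is hypothetical, and real Alloy also coerces env strings to the right types):

```python
import os

DEFAULTS = {"model": "gpt-5-mini", "max_tool_turns": 2}

def resolve(key, configured=None, per_call=None):
    """Resolve a setting: per-call > configure(...) > ALLOY_* env > built-in default."""
    if per_call is not None:
        return per_call
    if configured is not None:
        return configured
    env = os.environ.get(f"ALLOY_{key.upper()}")
    if env is not None:
        return env  # note: env values arrive as strings
    return DEFAULTS[key]
```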
Make targets
- `make setup` — install dev deps and package in editable mode.
- `make test`, `make lint`, `make typecheck` — CI-like checks.
- `make examples` — runs `examples/basic_usage.py` and `examples/tools_demo.py`.
- Optional offline: `ALLOY_BACKEND=fake make examples`
Troubleshooting
- API key: Ensure `OPENAI_API_KEY` is set (process env or `.env`).
- Model choice: Prefer `gpt-5-mini` for fastest latency; switch via `configure(model=...)` or `ALLOY_MODEL`.
- Timeouts/slow runs: Reduce `max_tokens`, lower `temperature`, prefer smaller models, and cap tool loops.
- Tool loops: Default limit is 2. Adjust via `configure(max_tool_turns=...)` or env `ALLOY_MAX_TOOL_TURNS`.
- Rate limits (429): Shorten prompts/outputs, add retries with backoff, or use lower-throughput settings.
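For 429s, a simple exponential-backoff wrapper is usually enough in scripts. This is a generic sketch; `RateLimitError` here is a stand-in for whatever exception your provider SDK raises on rate limits.

```python
import time

class RateLimitError(Exception):
    """Stand-in for the provider SDK's rate-limit (429) exception."""

def with_backoff(fn, retries=3, base_delay=1.0):
    """Call fn, retrying on rate limits with exponentially growing delays."""
    for attempt in range(retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == retries - 1:
                raise  # out of retries; let the caller see the error
            time.sleep(base_delay * 2 ** attempt)
```

Usage: `with_backoff(lambda: extract_price(text))` wraps any Alloy call without changing it.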
Observability
- See simple patterns in the docs: https://docs.alloy.fyi/observability/
Integration tests
- OpenAI: Set `OPENAI_API_KEY` (and optionally `ALLOY_IT_MODEL`, default `gpt-5-mini`). Run `pytest -q` — OpenAI integration tests auto-enable.
- Anthropic: Set `ANTHROPIC_API_KEY` and `ALLOY_IT_MODEL=claude-sonnet-4-20250514` (or another Claude like `claude-3.7-sonnet`). Run `pytest -q` — Anthropic integration tests auto-enable.
- Gemini: Set `GOOGLE_API_KEY` and `ALLOY_IT_MODEL=gemini-2.5-pro` (or `gemini-2.5-flash`). Run `pytest -q` — Gemini integration tests auto-enable.
- SDK note: Gemini support uses `google-genai` (GA).
How to run locally
- Install providers bundle: `pip install 'alloy-ai[providers]'`
- Create `.env` with `OPENAI_API_KEY=...`
- Option A (no install): `python examples/basic_usage.py`, `python examples/tools_demo.py` (examples add `src/` to `sys.path` for you)
- Option B (editable install): `pip install -e '.[providers]'`, then run from anywhere.
.env example
OPENAI_API_KEY=sk-...
Support matrix (v1)
- OpenAI (GPT-4/5 and o-series): completions, typed commands, ask, streaming (no tools in stream), tool-calling, structured JSON for object schemas, tool-loop cap.
- Anthropic (Claude 3.7 / Sonnet 4 / Opus 4/4.1): completions and tool-calling loop (no streaming yet).
- Google (Gemini 2.5 Pro/Flash): basic completions (no tools/streaming in scaffold). Uses `google-genai` by default.
- Ollama (local): basic completions via `model="ollama:<name>"` (no tools/streaming in scaffold).
- ReAct fallback: not implemented yet (planned for local models/LLMs without native tools).
Install options
- Base: `pip install alloy-ai` (includes OpenAI + python-dotenv).
- All providers: `pip install 'alloy-ai[providers]'` (OpenAI, Anthropic, Gemini via `google-genai`, Ollama).
- Specific extras: `pip install 'alloy-ai[anthropic]'`, `pip install 'alloy-ai[gemini]'`, `pip install 'alloy-ai[ollama]'`.
Documentation
- Full docs: https://lydakis.github.io/alloy/
Why Alloy vs X
- Raw SDKs: Minimal glue, limited structure handling. Alloy provides typed outputs, provider routing, and a simple tool loop.
- LangChain: Rich orchestration features and chains. Alloy stays minimal: Python functions that return typed results without introducing a new framework.
- Instructor/Pydantic: Strong for OpenAI JSON typing. Alloy generalizes the idea across providers (OpenAI, Anthropic, Gemini), adds tools/retry/routing, and surfaces clear errors when structure cannot be enforced (with a single auto‑finalize turn on OpenAI when needed).
- DSPy/Program synthesis: Optimizes pipelines and prompts. Alloy focuses on straightforward, production‑oriented building blocks: short prompts, typed outputs, and predictable defaults.
- Guidance/templating: Emphasizes prompt templates. Alloy emphasizes typed commands and provider structured outputs with clear error handling.
- Summary: Small API surface, provider‑agnostic backends, typed outputs by default, and optional tools — compose with normal Python.
LOC comparison (CSV → API payloads)
Raw SDK (conceptual):
import json
import pandas as pd
from openai import OpenAI

client = OpenAI()
df = pd.read_csv("customers.csv")
messages = [{"role": "user", "content": f"Map data {df.head()} to {{fullName, emailAddress}}"}]
resp = client.chat.completions.create(model="gpt-4o", messages=messages)
payloads = json.loads(resp.choices[0].message.content)
Alloy:
from alloy import command
import pandas as pd
@command(output=list[dict])
def csv_to_api(df: pd.DataFrame, example: str) -> str:
return f"Map data {df.head()} to {example}"
payloads = csv_to_api(pd.read_csv("customers.csv"), "POST /customers {fullName, emailAddress}")
Releases
- Changelog: CHANGELOG.md
- Publishing: Create a tag like `v0.1.1` on main — CI builds and uploads to PyPI (needs Trusted Publishing for `alloy-ai` or a configured token).