Alloy (Python)

Python for logic. English for intelligence.
Alloy helps you integrate intelligence into deterministic programs with typed Python functions.
Declare a function with @command(output=...), and Alloy routes to an LLM provider, enforces
structured outputs, and returns typed results you can depend on.
License: MIT
This repository contains an early scaffold of the Alloy library per alloy-spec-v1.md.
Example: CSV to API
```python
from alloy import command
import pandas as pd
import requests

@command(output=list[dict])
def csv_to_api(df: pd.DataFrame, endpoint_example: str) -> str:
    """Intelligently map CSV columns to API format."""
    return f"Map this data {df.head()} to match API: {endpoint_example}"

df = pd.read_csv("messy_customer_data.csv")
api_calls = csv_to_api(df, "POST /customers {fullName, emailAddress, subscriptionTier}")
for payload in api_calls:
    requests.post("https://api.your-saas.com/customers", json=payload)
```
This example maps rows from a DataFrame into request payloads for an API, returning a typed list[dict] you can post.
Quick start
- Install (all providers): `pip install 'alloy-ai[providers]'`
- Or minimal (OpenAI only): `pip install alloy-ai`
- Create `.env` with `OPENAI_API_KEY=...`
- Use the API:
```python
from dataclasses import dataclass
from dotenv import load_dotenv
from alloy import command, ask, configure

load_dotenv()

# Optional: configure() — default model is `gpt-5-mini` if omitted
# configure(model="gpt-5-mini", temperature=0.7)

@command(output=float)
def extract_price(text: str) -> str:
    """Extract price from text."""
    return f"Extract the price (number only) from: {text}"

print(extract_price("This item costs $49.99."))
print(ask("Say hi"))
Enforcing outputs
- Alloy biases models to return the expected shape and uses provider structured outputs where available. If parsing still fails, you get a clear error.
- Docs: https://lydakis.github.io/alloy/outputs/
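The idea behind this enforcement can be sketched in plain Python: parse the model's reply, coerce it to the declared output type, and raise a clear error if that fails. The `coerce_output` helper below is illustrative only, not Alloy's internals (which lean on provider JSON Schema support first):

```python
import json

def coerce_output(raw: str, output_type: type):
    """Illustrative sketch: parse a model reply and coerce it to the declared type."""
    try:
        value = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model reply is not valid JSON: {raw!r}") from exc
    if output_type in (int, float, str, bool):
        return output_type(value)  # coerce JSON scalars to the declared primitive
    if not isinstance(value, output_type):
        raise TypeError(f"Expected {output_type.__name__}, got {type(value).__name__}")
    return value

print(coerce_output("49.99", float))        # 49.99
print(coerce_output('[{"a": 1}]', list))    # [{'a': 1}]
```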
Notes
- OpenAI backend is implemented for sync/async/streaming.
- Streaming with tools is not yet supported.
- Structured outputs: Alloy uses provider JSON Schema features (OpenAI/Anthropic/Gemini) and prompt guardrails. See Enforcing outputs above.
- Configuration defaults: Alloy uses `model=gpt-5-mini` if `configure(...)` is not called. You can also set process environment variables instead of a `.env` file: `ALLOY_MODEL`, `ALLOY_TEMPERATURE`, `ALLOY_MAX_TOKENS`, `ALLOY_SYSTEM`/`ALLOY_DEFAULT_SYSTEM`, `ALLOY_RETRY`.
- Example: `export ALLOY_MODEL=gpt-4o`, then run your script.
Examples
- See `examples/basic_usage.py` and `examples/tools_demo.py` (tools + contracts).
Offline mode (dev only)
- To run examples without network/API keys, set `ALLOY_BACKEND=fake`.
- Example: `ALLOY_BACKEND=fake python examples/basic_usage.py`
Config precedence
- Defaults: `model=gpt-5-mini`, `max_tool_turns=2`.
- Process env (`ALLOY_*`) overrides defaults.
- Context/`use_config` and `configure(...)` override env/defaults.
- Per-call overrides (e.g., `ask(..., model=...)`) override everything above.
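The precedence order above can be sketched as a small resolver. This is not Alloy's actual implementation, just the lookup order expressed in plain Python (the model-name strings are examples):

```python
import os

DEFAULTS = {"model": "gpt-5-mini", "max_tool_turns": 2}

def resolve(setting, configured=None, per_call=None):
    """Resolve one setting: per-call > configure(...) > ALLOY_* env > defaults."""
    if per_call is not None:
        return per_call
    if configured and setting in configured:
        return configured[setting]
    env = os.environ.get(f"ALLOY_{setting.upper()}")
    if env is not None:
        return env
    return DEFAULTS.get(setting)

os.environ.pop("ALLOY_MODEL", None)  # clean slate for the demo
print(resolve("model"))                                   # gpt-5-mini (default)
os.environ["ALLOY_MODEL"] = "gpt-4o"
print(resolve("model"))                                   # gpt-4o (env wins over default)
print(resolve("model", configured={"model": "gpt-5-mini"}))  # gpt-5-mini (configure wins over env)
print(resolve("model", configured={"model": "gpt-5-mini"}, per_call="gpt-4o"))  # gpt-4o (per-call wins)
```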
Make targets
- `make setup` — install dev deps and package in editable mode.
- `make test`, `make lint`, `make typecheck` — CI-like checks.
- `make examples` — runs `examples/basic_usage.py` and `examples/tools_demo.py`.
- Tip: `ALLOY_BACKEND=fake make examples` to run offline.
Troubleshooting
- API key: Ensure `OPENAI_API_KEY` is set (process env or `.env`).
- Model choice: Prefer `gpt-5-mini` for fastest latency; switch via `configure(model=...)` or `ALLOY_MODEL`.
- Timeouts/slow runs: Reduce `max_tokens`, lower `temperature`, prefer smaller models, and cap tool loops.
- Tool loops: Alloy caps tool iterations by default (`max_tool_turns=2`). Adjust via `configure(max_tool_turns=1)` or env `ALLOY_MAX_TOOL_TURNS`.
- Rate limits (429): Shorten prompts/outputs, add retries with backoff, or use lower-throughput settings.
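A retry-with-backoff wrapper for the rate-limit case is a few lines of plain Python; this helper is not part of Alloy, just a common pattern you can wrap around any command call (using `RuntimeError` as a stand-in for your provider's rate-limit exception):

```python
import random
import time

def with_backoff(call, attempts=4, base=0.5):
    """Retry `call` with exponential backoff plus jitter; re-raise on final failure."""
    for attempt in range(attempts):
        try:
            return call()
        except RuntimeError:  # stand-in for a provider 429/rate-limit exception
            if attempt == attempts - 1:
                raise
            time.sleep(base * 2 ** attempt + random.uniform(0, 0.1))

# Demo: a call that fails twice, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_backoff(flaky, base=0.01))  # ok
```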
Integration tests
- OpenAI: Set `OPENAI_API_KEY` (and optionally `ALLOY_IT_MODEL`, default `gpt-5-mini`). Run `pytest -q` — OpenAI integration tests auto-enable.
- Anthropic: Set `ANTHROPIC_API_KEY` and `ALLOY_IT_MODEL=claude-sonnet-4-20250514` (or another Claude model like `claude-3.7-sonnet`). Run `pytest -q` — Anthropic integration tests auto-enable.
- Gemini: Set `GOOGLE_API_KEY` and `ALLOY_IT_MODEL=gemini-2.5-pro` (or `gemini-2.5-flash`). Run `pytest -q` — Gemini integration tests auto-enable.
- SDK note: Gemini support uses `google-genai` (GA).
How to run locally
- Install providers bundle: `pip install 'alloy-ai[providers]'`
- Create `.env` with `OPENAI_API_KEY=...`
- Option A (no install): `python examples/basic_usage.py`, `python examples/tools_demo.py` (the examples add `src/` to `sys.path` for you)
- Option B (editable install): `pip install -e '.[providers]'`, then run from anywhere.
.env example
OPENAI_API_KEY=sk-...
Support matrix (v1)
- OpenAI (GPT-4/5 and o-series): completions, typed commands, ask, streaming (no tools in stream), tool-calling, structured JSON for object schemas, tool-loop cap.
- Anthropic (Claude 3.7 / Sonnet 4 / Opus 4/4.1): completions and tool-calling loop (no streaming yet).
- Google (Gemini 2.5 Pro/Flash): basic completions (no tools/streaming in scaffold). Uses `google-genai` by default.
- Ollama (local): basic completions via `model="ollama:<name>"` (no tools/streaming in scaffold).
- ReAct fallback: not implemented yet (planned for local models/LLMs without native tools).
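The model string decides which backend handles a call; the docs above confirm only the `ollama:<name>` prefix for local models. The other prefix rules in this sketch are assumptions for illustration, not Alloy's actual dispatch code:

```python
def route_model(model: str) -> tuple[str, str]:
    """Pick a (backend, model) pair from a model string. Prefix rules are illustrative."""
    if model.startswith("ollama:"):
        return "ollama", model.split(":", 1)[1]  # documented local-model form
    if model.startswith("claude"):               # assumption
        return "anthropic", model
    if model.startswith("gemini"):               # assumption
        return "gemini", model
    return "openai", model                       # assumed default backend

print(route_model("ollama:llama3"))  # ('ollama', 'llama3')
print(route_model("gpt-5-mini"))     # ('openai', 'gpt-5-mini')
```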
Install options
- Base: `pip install alloy-ai` (includes OpenAI + python-dotenv).
- All providers: `pip install 'alloy-ai[providers]'` (OpenAI, Anthropic, Gemini via `google-genai`, Ollama).
- Specific extras: `pip install 'alloy-ai[anthropic]'`, `pip install 'alloy-ai[gemini]'`, `pip install 'alloy-ai[ollama]'`.

Documentation
- Full docs: https://lydakis.github.io/alloy/
Why Alloy vs X
- Raw SDKs: Minimal glue, limited structure handling. Alloy provides typed outputs, provider routing, and a simple tool loop.
- LangChain: Rich orchestration features and chains. Alloy stays minimal: Python functions that return typed results without introducing a new framework.
- Instructor/Pydantic: Strong for OpenAI JSON typing. Alloy generalizes the idea across providers (OpenAI, Anthropic, Gemini), adds tools/retry/routing, and degrades gracefully when structure is not enforced.
- DSPy/Program synthesis: Optimizes pipelines and prompts. Alloy focuses on straightforward, production‑oriented building blocks: short prompts, typed outputs, and predictable defaults.
- Guidance/templating: Emphasizes prompt templates. Alloy emphasizes typed commands and provider structured outputs with clear error handling.
- Summary: Small API surface, provider‑agnostic backends, typed outputs by default, and optional tools — compose with normal Python.
LOC comparison (CSV → API payloads)
Raw SDK (conceptual):
```python
import json
import pandas as pd
from openai import OpenAI

client = OpenAI()
df = pd.read_csv("customers.csv")
messages = [{"role": "user", "content": f"Map data {df.head()} to {{fullName, emailAddress}}"}]
resp = client.chat.completions.create(model="gpt-4o", messages=messages)
payloads = json.loads(resp.choices[0].message.content)
```
Alloy:
```python
from alloy import command
import pandas as pd

@command(output=list[dict])
def csv_to_api(df: pd.DataFrame, example: str) -> str:
    return f"Map data {df.head()} to {example}"

payloads = csv_to_api(pd.read_csv("customers.csv"), "POST /customers {fullName, emailAddress}")
```
Releases
- Changelog: CHANGELOG.md
- Publishing: Create a tag like `v0.1.1` on main — CI builds and uploads to PyPI (needs Trusted Publishing for `alloy-ai` or a configured token).