
Alloy (Python): Python for logic. English for intelligence.

Project description


License: MIT

This repository contains an early scaffold of the Alloy library, following alloy-spec-v1.md.

Quick start

  • Install (all providers): pip install 'alloy-ai[providers]'
  • Or minimal (OpenAI only): pip install alloy-ai
  • Create .env with OPENAI_API_KEY=...
  • Use the API:
from dataclasses import dataclass
from dotenv import load_dotenv
from alloy import command, ask, configure

load_dotenv()
# Optional: configure() — default model is `gpt-5-mini` if omitted
# configure(model="gpt-5-mini", temperature=0.7)

@command(output=float)
def ExtractPrice(text: str) -> str:
    """Extract price from text."""
    # The function returns the prompt; the model's reply is parsed as a float.
    return f"Extract the price (number only) from: {text}"

print(ExtractPrice("This item costs $49.99."))
print(ask("Say hi"))

Notes

  • OpenAI backend is implemented for sync/async/streaming.
  • Streaming with tools is not yet supported.
  • For structured outputs, Alloy attempts to use OpenAI structured responses (JSON schema). If unavailable, the model may still return JSON, which Alloy parses best-effort.
  • Configuration defaults: Alloy uses model=gpt-5-mini if configure(...) is not called. You can also set process environment variables instead of a .env file:
    • ALLOY_MODEL, ALLOY_TEMPERATURE, ALLOY_MAX_TOKENS, ALLOY_SYSTEM/ALLOY_DEFAULT_SYSTEM, ALLOY_RETRY.
    • Example: export ALLOY_MODEL=gpt-4o then run your script.
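The best-effort JSON parsing mentioned in the notes can be illustrated with a small sketch (this is an illustration of the idea, not Alloy's actual parser): try strict `json.loads` first, then fall back to extracting the first balanced `{...}` block from the reply.

```python
import json

def parse_json_best_effort(reply: str):
    """Try strict JSON first, then fall back to the first {...} block."""
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        pass
    # Fallback: scan for the first balanced top-level object.
    # (A brace inside a JSON string would fool this simple scan; fine for a sketch.)
    start = reply.find("{")
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(reply[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                try:
                    return json.loads(reply[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None

print(parse_json_best_effort('Sure! Here is the data: {"price": 49.99}'))
# → {'price': 49.99}
```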

Examples

  • See examples/basic_usage.py and examples/tools_demo.py (tools + contracts).

Offline mode (dev only)

  • To run examples without network/API keys, set ALLOY_BACKEND=fake.
  • Example: ALLOY_BACKEND=fake python examples/basic_usage.py

Config precedence

  • Defaults: model=gpt-5-mini, max_tool_turns=2.
  • Process env (ALLOY_*) overrides defaults.
  • Context/use_config and configure(...) override env/defaults.
  • Per-call overrides (e.g., ask(..., model=...)) override everything above.
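The layering above can be sketched in plain Python (hypothetical helper names, not Alloy's internals): per-call beats `configure(...)`, which beats `ALLOY_*` env vars, which beat built-in defaults.

```python
import os

# Built-in defaults from the README.
DEFAULTS = {"model": "gpt-5-mini", "max_tool_turns": 2}
_configured: dict = {}

def configure(**kwargs):
    """Record explicit configuration (overrides env and defaults)."""
    _configured.update(kwargs)

def resolve(setting: str, per_call=None):
    """Return the effective value for one setting, highest precedence first."""
    if per_call is not None:
        return per_call
    if setting in _configured:
        return _configured[setting]
    env = os.environ.get(f"ALLOY_{setting.upper()}")
    if env is not None:
        return env  # note: env values are strings in this sketch
    return DEFAULTS.get(setting)

print(resolve("model"))                      # built-in default (if ALLOY_MODEL unset)
os.environ["ALLOY_MODEL"] = "gpt-4o"
print(resolve("model"))                      # env wins over default
configure(model="claude-3.5-sonnet")
print(resolve("model"))                      # configure wins over env
print(resolve("model", per_call="o3-mini"))  # per-call wins over everything
```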

Make targets

  • make setup — install dev deps and package in editable mode.
  • make test, make lint, make typecheck — CI-like checks.
  • make examples — runs examples/basic_usage.py and examples/tools_demo.py.
    • Tip: ALLOY_BACKEND=fake make examples to run offline.

Troubleshooting

  • API key: Ensure OPENAI_API_KEY is set (process env or .env).
  • Model choice: Prefer gpt-5-mini for lowest latency; switch via configure(model=...) or ALLOY_MODEL.
  • Timeouts/slow runs: Reduce max_tokens, lower temperature, prefer smaller models, and cap tool loops.
  • Tool loops: Alloy caps tool iterations by default (max_tool_turns=2). Adjust via configure(max_tool_turns=1) or env ALLOY_MAX_TOOL_TURNS.
  • Rate limits (429): Shorten prompts/outputs, add retries with backoff, or use lower-throughput settings.
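The "retries with backoff" suggestion for 429s can be sketched in a few lines (hypothetical names; Alloy's own retry handling may differ): retry the call on a rate-limit error, doubling the delay each attempt.

```python
import time

class RateLimitError(Exception):
    """Stand-in for a provider's 429 error (hypothetical, not Alloy's type)."""

def with_backoff(call, retries=3, base_delay=0.5, sleep=time.sleep):
    """Retry `call` on rate limits with exponential backoff: 0.5s, 1s, 2s, ..."""
    for attempt in range(retries + 1):
        try:
            return call()
        except RateLimitError:
            if attempt == retries:
                raise  # out of retries; surface the 429
            sleep(base_delay * (2 ** attempt))

# Usage: a call that fails twice with a rate limit, then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RateLimitError()
    return "ok"

print(with_backoff(flaky, sleep=lambda s: None))  # "ok" after 3 attempts
```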

Integration tests

  • OpenAI: Set OPENAI_API_KEY (and optionally ALLOY_IT_MODEL, default gpt-5-mini). Run pytest -q — OpenAI integration tests auto-enable.
  • Anthropic: Set ANTHROPIC_API_KEY and ALLOY_IT_MODEL=claude-3.5-sonnet (or another Claude). Run pytest -q — Anthropic integration tests auto-enable.
  • Gemini: Set GOOGLE_API_KEY and ALLOY_IT_MODEL=gemini-1.5-pro (or another Gemini). Run pytest -q — Gemini integration tests auto-enable.
    • SDK note: Gemini support uses google-genai (GA).

How to run locally

  • Install providers bundle: pip install 'alloy-ai[providers]'
  • Create .env with OPENAI_API_KEY=...
  • Option A (no install):
    • python examples/basic_usage.py
    • python examples/tools_demo.py
    • (examples add src/ to sys.path for you)
  • Option B (editable install):
    • pip install -e '.[providers]'
    • Then run from anywhere.

.env example

OPENAI_API_KEY=sk-...

Support matrix (v1)

  • OpenAI (GPT-4/5 and o-series): completions, typed commands, ask, streaming (no tools in stream), tool-calling, structured JSON for object schemas, tool-loop cap.
  • Anthropic (Claude 3/3.5): completions and tool-calling loop (no streaming yet).
  • Google (Gemini 1.5): basic completions (no tools/streaming in scaffold). Uses google-genai by default; google-generativeai supported via legacy extra.
  • Ollama (local): basic completions via model="ollama:<name>" (no tools/streaming in scaffold).
  • ReAct fallback: not implemented yet (planned for local models/LLMs without native tools).
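The model="ollama:&lt;name&gt;" convention suggests the backend is selected from the model string; a hypothetical routing sketch (not Alloy's actual dispatch logic) might look like:

```python
def split_model_string(model: str) -> tuple[str, str]:
    """Guess (backend, model_name) from a model string like 'ollama:llama3'."""
    if ":" in model:
        # Explicit backend prefix, e.g. "ollama:llama3".
        backend, name = model.split(":", 1)
        return backend, name
    # Otherwise infer the provider from well-known name prefixes.
    if model.startswith(("gpt-", "o1", "o3", "o4")):
        return "openai", model
    if model.startswith("claude"):
        return "anthropic", model
    if model.startswith("gemini"):
        return "google", model
    return "openai", model  # fall back to the default backend

print(split_model_string("ollama:llama3"))     # ('ollama', 'llama3')
print(split_model_string("claude-3.5-sonnet")) # ('anthropic', 'claude-3.5-sonnet')
```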

Install options

  • Base: pip install alloy-ai (includes OpenAI + python-dotenv).
  • All providers: pip install 'alloy-ai[providers]' (OpenAI, Anthropic, Gemini via google-genai, Ollama).
  • Specific extras: pip install 'alloy-ai[anthropic]', pip install 'alloy-ai[gemini]', pip install 'alloy-ai[ollama]'.

Documentation

  • Full docs: https://openai.github.io/alloy-py/

Releases

  • Changelog: CHANGELOG.md
  • Publishing: Create a tag like v0.1.1 on main — CI builds and uploads to PyPI (needs Trusted Publishing for alloy-ai or a configured token).

Download files

Download the file for your platform.

Source Distribution

alloy_ai-0.1.1.tar.gz (19.9 kB view details)


Built Distribution


alloy_ai-0.1.1-py3-none-any.whl (20.4 kB view details)


File details

Details for the file alloy_ai-0.1.1.tar.gz.

File metadata

  • Download URL: alloy_ai-0.1.1.tar.gz
  • Upload date:
  • Size: 19.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for alloy_ai-0.1.1.tar.gz
Algorithm Hash digest
SHA256 4158a276600743ff7344ecd85fc2b7800a59d759e04d6ca25ceb6586c203bf43
MD5 03b782a57198c0f34446a61f0e085047
BLAKE2b-256 8a725594315a8088ceb4d74fdfaa8f3c61bbc126b8f67a3717b3ee602a56c67e


Provenance

The following attestation bundles were made for alloy_ai-0.1.1.tar.gz:

Publisher: release.yml on lydakis/alloy-py

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file alloy_ai-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: alloy_ai-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 20.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for alloy_ai-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 9db2f5f3e71bd007376b158a836ede4801b3b8fd79f66d07dcf343650f3b75ae
MD5 330ec2460802cdb195967f4183cb2082
BLAKE2b-256 b8a004cf927582e95021295e93cff8a786c4a4c3bc9c251d0a80f95813981276


Provenance

The following attestation bundles were made for alloy_ai-0.1.1-py3-none-any.whl:

Publisher: release.yml on lydakis/alloy-py

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
