Alloy (Python): Python for logic. English for intelligence.


This repository contains an early scaffold of the Alloy library per alloy-spec-v1.md.

Quick start

  • Install (all providers): pip install 'alloy-ai[providers]'
  • Or minimal (OpenAI only): pip install alloy-ai
  • Create .env with OPENAI_API_KEY=...
  • Use the API:
from dataclasses import dataclass
from dotenv import load_dotenv
from alloy import command, ask, configure

load_dotenv()
# Optional: configure() — default model is `gpt-5-mini` if omitted
# configure(model="gpt-5-mini", temperature=0.7)

@command(output=float)
def ExtractPrice(text: str) -> str:
    """Extract price from text."""
    return f"Extract the price (number only) from: {text}"

print(ExtractPrice("This item costs $49.99."))
print(ask("Say hi"))

Notes

  • OpenAI backend is implemented for sync/async/streaming.
  • Streaming with tools is not yet supported.
  • Structured outputs: Alloy enforces typed results using provider JSON Schema features (OpenAI/Anthropic/Gemini) plus prompt guardrails.
  • Configuration defaults: Alloy uses model=gpt-5-mini if configure(...) is not called. You can also set process environment variables instead of a .env file:
    • ALLOY_MODEL, ALLOY_TEMPERATURE, ALLOY_MAX_TOKENS, ALLOY_SYSTEM/ALLOY_DEFAULT_SYSTEM, ALLOY_RETRY.
    • Example: export ALLOY_MODEL=gpt-4o then run your script.
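The environment fallback described above can be pictured with a small sketch (illustrative only, not Alloy's actual internals; the 0.7 temperature default is an assumption taken from the commented quick-start example):

```python
import os

def env_default(name: str, fallback: str) -> str:
    """Return the ALLOY_* environment value if set, else the built-in default."""
    return os.environ.get(name, fallback)

# With no environment override, the built-in default model applies.
model = env_default("ALLOY_MODEL", "gpt-5-mini")

# Numeric settings such as ALLOY_TEMPERATURE arrive as strings and need parsing.
temperature = float(os.environ.get("ALLOY_TEMPERATURE", "0.7"))
```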

Examples

  • See examples/basic_usage.py and examples/tools_demo.py (tools + contracts).

Offline mode (dev only)

  • To run examples without network/API keys, set ALLOY_BACKEND=fake.
  • Example: ALLOY_BACKEND=fake python examples/basic_usage.py

Config precedence

  • Defaults: model=gpt-5-mini, max_tool_turns=2.
  • Process env (ALLOY_*) overrides defaults.
  • Context/use_config and configure(...) override env/defaults.
  • Per-call overrides (e.g., ask(..., model=...)) override everything above.
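The precedence order above can be sketched as a layered dict merge (an illustrative model of the behavior, not Alloy's actual implementation):

```python
DEFAULTS = {"model": "gpt-5-mini", "max_tool_turns": 2}

def resolve(env: dict, configured: dict, per_call: dict) -> dict:
    """Later layers win: defaults < process env < configure(...) < per-call overrides."""
    merged = dict(DEFAULTS)
    for layer in (env, configured, per_call):
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged

settings = resolve(
    env={"model": "gpt-4o"},           # e.g. ALLOY_MODEL=gpt-4o
    configured={"max_tool_turns": 1},  # configure(max_tool_turns=1)
    per_call={"model": "gpt-5-mini"},  # ask(..., model="gpt-5-mini")
)
# The per-call model wins; the configured tool-turn cap survives.
```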

Make targets

  • make setup — install dev deps and package in editable mode.
  • make test, make lint, make typecheck — CI-like checks.
  • make examples — runs examples/basic_usage.py and examples/tools_demo.py.
    • Tip: ALLOY_BACKEND=fake make examples to run offline.

Troubleshooting

  • API key: Ensure OPENAI_API_KEY is set (process env or .env).
  • Model choice: Prefer gpt-5-mini for the lowest latency; switch via configure(model=...) or ALLOY_MODEL.
  • Timeouts/slow runs: Reduce max_tokens, lower temperature, prefer smaller models, and cap tool loops.
  • Tool loops: Alloy caps tool iterations by default (max_tool_turns=2). Adjust via configure(max_tool_turns=1) or env ALLOY_MAX_TOOL_TURNS.
  • Rate limits (429): Shorten prompts/outputs, add retries with backoff, or use lower-throughput settings.
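"Add retries with backoff" can follow the standard exponential-backoff-with-jitter pattern. A generic sketch (not an Alloy API; substitute your provider SDK's rate-limit exception for RuntimeError):

```python
import random
import time

def with_backoff(call, attempts: int = 4, base: float = 0.5):
    """Retry `call` on rate-limit errors, doubling the wait each attempt plus jitter."""
    for attempt in range(attempts):
        try:
            return call()
        except RuntimeError:  # substitute the provider's 429/rate-limit exception
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base * (2 ** attempt) + random.uniform(0, 0.1))
```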

Integration tests

  • OpenAI: Set OPENAI_API_KEY (and optionally ALLOY_IT_MODEL, default gpt-5-mini). Run pytest -q — OpenAI integration tests auto-enable.
  • Anthropic: Set ANTHROPIC_API_KEY and ALLOY_IT_MODEL=claude-3.5-sonnet (or another Claude model). Run pytest -q — Anthropic integration tests auto-enable.
  • Gemini: Set GOOGLE_API_KEY and ALLOY_IT_MODEL=gemini-1.5-pro (or another Gemini model). Run pytest -q — Gemini integration tests auto-enable.
    • SDK note: Gemini support uses google-genai (GA).

How to run locally

  • Install providers bundle: pip install 'alloy-ai[providers]'
  • Create .env with OPENAI_API_KEY=...
  • Option A (no install):
    • python examples/basic_usage.py
    • python examples/tools_demo.py
    • (examples add src/ to sys.path for you)
  • Option B (editable install):
    • pip install -e '.[providers]'
    • Then run from anywhere.

.env example

OPENAI_API_KEY=sk-...

Support matrix (v1)

  • OpenAI (GPT-4/5 and o-series): completions, typed commands, ask, streaming (no tools in stream), tool-calling, structured JSON for object schemas, tool-loop cap.
  • Anthropic (Claude 3/3.5): completions and tool-calling loop (no streaming yet).
  • Google (Gemini 1.5): basic completions (no tools/streaming in scaffold). Uses google-genai by default.
  • Ollama (local): basic completions via model="ollama:<name>" (no tools/streaming in scaffold).
  • ReAct fallback: not implemented yet (planned for local models/LLMs without native tools).
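The model="ollama:<name>" entry above implies a provider-prefix convention in model strings. A minimal parser for that convention (illustrative only, not Alloy's internals; the "openai" fallback is an assumption based on OpenAI being the default backend):

```python
def split_model(model: str, default_provider: str = "openai") -> tuple[str, str]:
    """Split 'provider:name' model strings; bare names use the default provider."""
    provider, sep, name = model.partition(":")
    if not sep:  # no colon: a bare model name like "gpt-5-mini"
        return default_provider, model
    return provider, name
```

For example, "ollama:llama3" would route to the local Ollama backend, while a bare "gpt-5-mini" would fall through to the default provider.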

Install options

  • Base: pip install alloy-ai (includes OpenAI + python-dotenv).
  • All providers: pip install 'alloy-ai[providers]' (OpenAI, Anthropic, Gemini via google-genai, Ollama).
  • Specific extras: pip install 'alloy-ai[anthropic]', pip install 'alloy-ai[gemini]', pip install 'alloy-ai[ollama]'.

Documentation

  • Full docs: https://lydakis.github.io/alloy-py/

Releases

  • Changelog: CHANGELOG.md
  • Publishing: Create a tag like v0.1.1 on main — CI builds and uploads to PyPI (needs Trusted Publishing for alloy-ai or a configured token).

