
Alloy (Python): Python for logic. English for intelligence.


License: MIT

This repository contains an early scaffold of the Alloy library, implemented per alloy-spec-v1.md.

Quick start

  • Install (all providers): pip install 'alloy-ai[providers]'
  • Or minimal (OpenAI only): pip install alloy-ai
  • Create .env with OPENAI_API_KEY=...
  • Use the API:
from dataclasses import dataclass
from dotenv import load_dotenv
from alloy import command, ask, configure

load_dotenv()
# Optional: configure() — default model is `gpt-5-mini` if omitted
# configure(model="gpt-5-mini", temperature=0.7)

@command(output=float)
def ExtractPrice(text: str) -> str:
    """Extract price from text."""
    return f"Extract the price (number only) from: {text}"

print(ExtractPrice("This item costs $49.99."))
print(ask("Say hi"))

Notes

  • OpenAI backend is implemented for sync/async/streaming.
  • Streaming with tools is not yet supported.
  • For structured outputs, Alloy attempts to use OpenAI structured responses (JSON schema). If unavailable, the model may still return JSON, which Alloy parses best-effort.
  • Configuration defaults: Alloy uses model=gpt-5-mini if configure(...) is not called. You can also set process environment variables instead of a .env file:
    • ALLOY_MODEL, ALLOY_TEMPERATURE, ALLOY_MAX_TOKENS, ALLOY_SYSTEM/ALLOY_DEFAULT_SYSTEM, ALLOY_RETRY.
    • Example: export ALLOY_MODEL=gpt-4o then run your script.
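The best-effort JSON parsing mentioned in the structured-outputs note can be sketched roughly like this (an illustrative helper, not Alloy's actual parser):

```python
import json

def parse_json_best_effort(text: str):
    """Illustrative sketch: recover a JSON object from model output that may
    wrap it in extra prose. Not Alloy's actual implementation."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Fall back to the outermost braces, a common best-effort heuristic.
        start, end = text.find("{"), text.rfind("}")
        if start != -1 and end > start:
            return json.loads(text[start:end + 1])
        raise
```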

Examples

  • See examples/basic_usage.py and examples/tools_demo.py (tools + contracts).

Offline mode (dev only)

  • To run examples without network/API keys, set ALLOY_BACKEND=fake.
  • Example: ALLOY_BACKEND=fake python examples/basic_usage.py

Config precedence

  • Defaults: model=gpt-5-mini, max_tool_turns=2.
  • Process env (ALLOY_*) overrides defaults.
  • Context/use_config and configure(...) override env/defaults.
  • Per-call overrides (e.g., ask(..., model=...)) override everything above.
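The precedence above amounts to "first non-None value wins". A minimal sketch (hypothetical helper name; the real resolution lives inside Alloy):

```python
def resolve_setting(per_call=None, configured=None, env=None, default="gpt-5-mini"):
    """Illustrative only: mirrors the documented precedence
    per-call > configure()/use_config > ALLOY_* env > defaults."""
    for value in (per_call, configured, env):
        if value is not None:
            return value
    return default
```

For example, with `ALLOY_MODEL=gpt-4o` set and `configure(model="claude-3.5-sonnet")` called, the configured value wins; an `ask(..., model=...)` argument would override both.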

Make targets

  • make setup — install dev deps and package in editable mode.
  • make test, make lint, make typecheck — CI-like checks.
  • make examples — runs examples/basic_usage.py and examples/tools_demo.py.
    • Tip: ALLOY_BACKEND=fake make examples to run offline.

Troubleshooting

  • API key: Ensure OPENAI_API_KEY is set (process env or .env).
  • Model choice: Prefer gpt-5-mini for the lowest latency; switch via configure(model=...) or ALLOY_MODEL.
  • Timeouts/slow runs: Reduce max_tokens, lower temperature, prefer smaller models, and cap tool loops.
  • Tool loops: Alloy caps tool iterations by default (max_tool_turns=2). Adjust via configure(max_tool_turns=1) or env ALLOY_MAX_TOOL_TURNS.
  • Rate limits (429): Shorten prompts/outputs, add retries with backoff, or use lower-throughput settings.
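Retries with backoff can be added around any call. This generic sketch is not part of Alloy's API (see ALLOY_RETRY for the built-in option); wrap your command or `ask()` call in `fn`:

```python
import random
import time

def call_with_backoff(fn, max_attempts=4, base_delay=0.5):
    """Generic exponential backoff with jitter for transient errors such as 429s.
    Illustrative only, not an Alloy API."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Double the delay each attempt, plus a little jitter.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```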

Integration tests

  • OpenAI: Set OPENAI_API_KEY (and optionally ALLOY_IT_MODEL, default gpt-5-mini). Run pytest -q — OpenAI integration tests auto-enable.
  • Anthropic: Set ANTHROPIC_API_KEY and ALLOY_IT_MODEL=claude-3.5-sonnet (or another Claude). Run pytest -q — Anthropic integration tests auto-enable.
  • Gemini: Set GOOGLE_API_KEY and ALLOY_IT_MODEL=gemini-1.5-pro (or another Gemini). Run pytest -q — Gemini integration tests auto-enable.
    • SDK note: Gemini support uses google-genai (GA).

How to run locally

  • Install providers bundle: pip install 'alloy-ai[providers]'
  • Create .env with OPENAI_API_KEY=...
  • Option A (no install):
    • python examples/basic_usage.py
    • python examples/tools_demo.py
    • (examples add src/ to sys.path for you)
  • Option B (editable install):
    • pip install -e '.[providers]'
    • Then run from anywhere.

.env example

OPENAI_API_KEY=sk-...

Support matrix (v1)

  • OpenAI (GPT-4/5 and o-series): completions, typed commands, ask, streaming (no tools in stream), tool-calling, structured JSON for object schemas, tool-loop cap.
  • Anthropic (Claude 3/3.5): completions and tool-calling loop (no streaming yet).
  • Google (Gemini 1.5): basic completions (no tools/streaming in scaffold). Uses google-genai by default.
  • Ollama (local): basic completions via model="ollama:<name>" (no tools/streaming in scaffold).
  • ReAct fallback: not implemented yet (planned for local models and other LLMs without native tool calling).
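The model-string routing implied by the matrix (e.g. the `ollama:` prefix) can be sketched as follows. This is an illustrative guess at the dispatch, not Alloy's actual code, and the model names are examples only:

```python
def provider_for(model: str) -> str:
    """Illustrative guess at provider routing based on the support matrix;
    not Alloy's actual dispatch logic."""
    if model.startswith("ollama:"):
        return "ollama"
    if model.startswith("claude"):
        return "anthropic"
    if model.startswith("gemini"):
        return "google"
    return "openai"  # GPT-4/5, o-series, and anything unrecognized
```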

Install options

  • Base: pip install alloy-ai (includes OpenAI + python-dotenv).
  • All providers: pip install 'alloy-ai[providers]' (OpenAI, Anthropic, Gemini via google-genai, Ollama).
  • Specific extras: pip install 'alloy-ai[anthropic]', pip install 'alloy-ai[gemini]', pip install 'alloy-ai[ollama]'.

Documentation

  • Full docs: https://lydakis.github.io/alloy-py/

Releases

  • Changelog: CHANGELOG.md
  • Publishing: Create a tag like v0.1.1 on main — CI builds and uploads to PyPI (needs Trusted Publishing for alloy-ai or a configured token).
