
A modern agent compiler for building and executing LLM-powered agents



The agent compiler framework


A1 is a new kind of agent framework. It takes an Agent (a set of tools and a description) and compiles it either AOT (ahead-of-time) into a Tool or JIT (just-in-time) for immediate execution, optimized for each unique agent input.

uv pip install a1-compiler
# or
pip install a1-compiler

🏎️ Why use an agent compiler?

An agent compiler is a direct replacement for agent frameworks such as Langchain or aisdk, where you define an Agent and run it. The difference is:

  1. Safety: A1 generates code for each unique agent input, optimizing constantly to shrink the prompt injection attack surface.
  2. Speed: A1 makes codegen practical for tool-wielding agents with aggressive parallelism and static checking.
  3. Determinism: A1 optimizes for determinism via an engineered cost function. For example, it may replace an LLM call with a fast RegEx but may revert on-the-fly if a tool's schema evolves.
  4. Flexibility: A tool in A1 can be instantly constructed from an OpenAPI document, an MCP server, a DB connection string, an fsspec path, a Python function, a Python package, or even just a documentation website URL.
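Conceptually, a tool is just a named, described, schema'd callable, whatever source it was constructed from. The sketch below is plain Python, not A1's actual internal representation; `SimpleTool` and `from_function` are hypothetical names used only to illustrate the shape a compiler can target:

```python
import asyncio
import inspect
from dataclasses import dataclass
from typing import Any, Awaitable, Callable

@dataclass
class SimpleTool:
    """Minimal stand-in for a tool: a name, a description, and an async callable."""
    name: str
    description: str
    fn: Callable[..., Awaitable[Any]]

    @classmethod
    def from_function(cls, fn: Callable[..., Awaitable[Any]]) -> "SimpleTool":
        # Derive name and description from the Python function itself,
        # the way a compiler might when handed a bare function.
        return cls(name=fn.__name__, description=inspect.getdoc(fn) or "", fn=fn)

async def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

tool = SimpleTool.from_function(add)
print(tool.name, "->", asyncio.run(tool.fn(2, 3)))  # → add -> 5
```

An OpenAPI- or MCP-backed tool would fill the same three fields from a spec or server handshake instead of a function signature.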

Agent compilers emerged from frustration with the MCP protocol and SOTA agent frameworks, where every agent runs the same static while-loop program: slow, unsafe, and highly nondeterministic.

An agent compiler can still perform the same while loop (just set Verify=IsLoop()), but it has the freedom to explore superoptimal execution plans subject to engineered constraints (e.g. type safety).

Ultimately the goal is "determinism-maxing": specifying as much of your task as fully deterministic code (100% accuracy) and gradually reducing non-deterministic LLM calls to the bare minimum.
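As a concrete instance of determinism-maxing (plain Python, independent of A1's API): extracting a number from text might start as an LLM call, but once the input format proves stable, a regex does the same job with 100% accuracy, keeping the LLM only as a fallback for when the format shifts:

```python
import re

def extract_answer_llm(text: str) -> int:
    # Placeholder for a nondeterministic LLM call, used only as a fallback.
    raise NotImplementedError("would call an LLM here")

ANSWER_RE = re.compile(r"(-?\d+)\s*$")  # integer at the end of the string

def extract_answer(text: str) -> int:
    """Deterministic fast path with an LLM fallback."""
    match = ANSWER_RE.search(text.strip())
    if match:                            # fast, fully deterministic path
        return int(match.group(1))
    return extract_answer_llm(text)      # revert to the LLM if the format shifts

print(extract_answer("The answer is 4"))  # → 4
```

A compiler can make (and undo) this substitution on-the-fly via its cost function, which is exactly the reversion behavior described in point 3 above.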

🚀 How to get started?

import asyncio

from a1 import Agent, tool, LLM
from pydantic import BaseModel

# Define a simple tool
@tool(name="add", description="Add two numbers")
async def add(a: int, b: int) -> int:
    return a + b

# Define input/output schemas
class MathInput(BaseModel):
    problem: str

class MathOutput(BaseModel):
    answer: int

# Create an agent with tools and an LLM. As in DSPy, agent behavior is
# specified via schemas; the difference is that in A1, an engineer may also
# implement a Verify function to enforce agent-specific constraints such as
# the order of tool calls.
agent = Agent(
    name="math_agent",
    description="Solves simple math problems",
    input_schema=MathInput,
    output_schema=MathOutput,
    tools=[add, LLM(model="gpt-4.1")],  # in A1, LLMs are tools!
)

async def main():
    # Compile ahead-of-time into a reusable artifact
    compiled = await agent.aot()
    result = await compiled.execute(problem="What is 2 + 2?")
    print(f"AOT result: {result}")

    # Or execute just-in-time, optimized for this input
    result = await agent.jit(problem="What is 5 + 3?")
    print(f"JIT result: {result}")

asyncio.run(main())
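The Verify function mentioned above can be thought of as a predicate over an execution trace. Below is a minimal sketch in plain Python of one such constraint, ordering between tool calls; `verify_order` and the list-of-names trace are hypothetical shapes, not A1's actual Verify signature:

```python
def verify_order(trace: list[str], must_precede: tuple[str, str]) -> bool:
    """Check that some call to `before` happens before any call to `after`.

    `trace` is a list of tool names in call order; this is a hypothetical
    shape used for illustration, not A1's actual Verify signature.
    """
    before, after = must_precede
    if after not in trace:
        return True                       # constraint is vacuously satisfied
    first_after = trace.index(after)
    return before in trace[:first_after]  # `before` must appear earlier

print(verify_order(["fetch", "parse", "summarize"], ("fetch", "summarize")))  # → True
print(verify_order(["summarize", "fetch"], ("fetch", "summarize")))           # → False
```

A compiler can run such a predicate against candidate execution plans and reject any plan that violates it, which is how engineered constraints bound the search described above.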

See the tests/ directory for extensive examples of everything A1 can do. Docs are coming soon at docs.a1project.org.

✨ Features

  • Import any Langchain agent
  • Observability via OpenTelemetry
  • Tools instantiated from MCP or OpenAPI
  • RAG instantiated given any SQL database or fsspec path (e.g. s3://my-place/here, gs://..., or local filesystem)
  • Skills defined manually or by crawling online docs
  • Context engineering via a simple API that lets compiled code manage multi-agent behavior
  • Zero lock-in: use any LLM and any secure code-execution cloud
  • Only gets better as researchers develop increasingly powerful methods to Generate, Cost estimate, and Verify agent code

🙋 FAQ

Should I use A1 or Langchain/aisdk/etc?

Prefer A1 if your task is latency-critical, works with untrusted data, or may need to run code.

Is A1 production-ready?

Yes, in terms of API stability. The caveat is that A1 is new.

Can we get enterprise support?

Please don't hesitate to reach out (calebwin@stanford.edu).

🤝 Contributing

Awesome! See our Contributing Guide for details.

📄 MIT License

As it should be!

📜 Citation

Paper coming soon! Reach out if you'd like to contribute.
