A modern agent compiler for building and executing LLM-powered agents

The agent compiler framework

A1 is a new kind of agent framework. It takes an Agent (a set of tools and a description) and compiles it either AOT (ahead-of-time) into a reusable Tool or JIT (just-in-time) for immediate execution, optimized for each unique agent input.

uv pip install a1-compiler
# or
pip install a1-compiler

🏎️ Why use an agent compiler?

An agent compiler is a direct replacement for agent frameworks such as Langchain or aisdk, where you define an Agent and run it. The difference is:

  1. Safety: A1 generates code for each unique agent input, optimizing constantly to shrink the prompt injection attack surface.
  2. Speed: A1 makes codegen practical for tool-wielding agents with aggressive parallelism and static checking.
  3. Determinism: A1 optimizes for determinism via an engineered cost function. For example, it may replace an LLM call with a fast RegEx but may revert on-the-fly if a tool's schema evolves.
  4. Flexibility: A tool in A1 can be instantly constructed from an OpenAPI document, an MCP server, a DB connection string, an fsspec path, a Python function, a Python package, or even just a documentation website URL.
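Point 3 above can be sketched in plain Python (illustrative only, not A1's API): a deterministic regex fast path handles the common case, and the agent reverts to an LLM call, stubbed here as a hypothetical `llm_extract_total`, only when the pattern misses.

```python
import re

def llm_extract_total(text: str) -> float:
    """Stand-in for a slow, nondeterministic LLM call (hypothetical)."""
    raise RuntimeError("LLM fallback not wired up in this sketch")

TOTAL_RE = re.compile(r"total[:\s]*\$?(\d+(?:\.\d{2})?)", re.IGNORECASE)

def extract_total(text: str) -> float:
    """Deterministic regex fast path, falling back to an LLM only on a miss."""
    m = TOTAL_RE.search(text)
    if m:
        return float(m.group(1))        # 100% deterministic, no model call
    return llm_extract_total(text)      # revert to the LLM when the pattern fails

print(extract_total("Invoice total: $42.50"))  # prints 42.5
```

A compiler with an engineered cost function can make this swap automatically and undo it the moment the tool's schema drifts.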

Agent compilers emerged from frustration with the MCP protocol and SOTA agent frameworks, where every agent runs the same static while-loop program: slow, unsafe, and highly nondeterministic.

An agent compiler can perform the same while loop (just set Verify=IsLoop()) but has the freedom to explore superoptimal execution plans while remaining subject to engineered constraints (e.g. type safety).

Ultimately the goal is "determinism-maxing": specifying as much of your task as fully deterministic code (100% accuracy) and gradually reducing non-deterministic LLM calls to the bare minimum.
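A rough illustration of the idea in plain Python (the names and the `stub_llm` stand-in are hypothetical, not A1's API): the same task answered by a generic tool-calling while loop versus a fully deterministic compiled plan.

```python
def stub_llm(prompt: str) -> str:
    # Deterministic stand-in for a model; real LLM calls are the
    # nondeterminism that determinism-maxing tries to minimize.
    return "add 2 2" if "2 + 2" in prompt else "done"

def while_loop_agent(task: str) -> int:
    """The static loop in most frameworks: ask the model what to do, repeat."""
    state, result = task, 0
    while True:
        action = stub_llm(state)
        if action == "done":
            return result
        _, a, b = action.split()
        result = int(a) + int(b)        # tool call chosen by the model each turn
        state = f"result {result}"

def compiled_plan(task: str) -> int:
    """Determinism-maxed: parsing and arithmetic are plain code, zero LLM calls."""
    a, b = (int(t) for t in task.split() if t.isdigit())
    return a + b

assert while_loop_agent("what is 2 + 2") == compiled_plan("what is 2 + 2") == 4
```

Both produce the same answer, but the compiled plan is faster, has no prompt-injection surface, and is 100% reproducible.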

🚀 How to get started?

from a1 import Agent, tool, LLM
from pydantic import BaseModel

# Define a simple tool
@tool(name="add", description="Add two numbers")
async def add(a: int, b: int) -> int:
    return a + b

# Define input/output schemas
class MathInput(BaseModel):
    problem: str

class MathOutput(BaseModel):
    answer: int

# Create an agent with tools and LLM
agent = Agent(
    name="math_agent",
    description="Solves simple math problems",
    # Like DSPy modules, A1 agents specify behavior via schemas. The difference
    # is that in A1, an engineer may implement a Verify function to enforce
    # agent-specific constraints such as the order of tool calls.
    input_schema=MathInput,
    output_schema=MathOutput,
    tools=[add, LLM(model="gpt-4.1")],  # in A1, LLMs are tools!
)

async def main():
    # Compile ahead-of-time
    compiled = await agent.aot()
    result = await compiled.execute(problem="What is 2 + 2?")
    print(f"AOT result: {result}")

    # Or execute just-in-time
    result = await agent.jit(problem="What is 5 + 3?")
    print(f"JIT result: {result}")

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

See the tests/ directory for extensive examples of everything A1 can do. Docs are coming soon at docs.a1project.org.
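The Verify hook mentioned in the snippet above can be pictured as a predicate over an execution trace. A minimal sketch, assuming a trace of tool names (this is not A1's actual signature):

```python
# Hypothetical sketch of a Verify-style constraint (not A1's actual API):
# enforce that a plan fetches data before it writes results.

def verify_tool_order(trace: list[str]) -> bool:
    """Reject any execution trace that calls `write` before `fetch`."""
    seen_fetch = False
    for tool in trace:
        if tool == "fetch":
            seen_fetch = True
        elif tool == "write" and not seen_fetch:
            return False  # constraint violated: write happened before fetch
    return True

assert verify_tool_order(["fetch", "transform", "write"])
assert not verify_tool_order(["write", "fetch"])
```

A compiler can check such a predicate statically against generated code rather than hoping the model respects it at runtime.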

✨ Features

  • Import any Langchain agent
  • Observability via OpenTelemetry
  • Tools instantiated from MCP or OpenAPI
  • RAG instantiated given any SQL database or fsspec path (e.g. s3://my-place/here, gs://..., or local filesystem)
  • Skills defined manually or by crawling online docs
  • Context engineering via a simple API that lets compiled code manage multi-agent behavior
  • Zero lock-in: use any LLM and any secure code-execution cloud
  • Only gets better as researchers develop increasingly powerful methods to Generate, Cost estimate, and Verify agent code
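As a rough picture of the database-backed RAG feature, here is a naive keyword retriever over stdlib sqlite3 (illustrative only; the RAG A1 instantiates from a connection string is richer than this):

```python
import sqlite3

# Toy corpus in an in-memory SQL database, standing in for any DB
# reachable via a connection string.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO docs (body) VALUES (?)",
    [("A1 compiles agents ahead-of-time.",),
     ("LLMs are tools in A1.",),
     ("Regexes can replace some LLM calls.",)],
)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword match; a real pipeline would use embeddings or FTS."""
    rows = conn.execute(
        "SELECT body FROM docs WHERE body LIKE ? LIMIT ?", (f"%{query}%", k)
    )
    return [body for (body,) in rows]

print(retrieve("LLM"))
```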

🙋 FAQ

Should I use A1 or Langchain/aisdk/etc?

Prefer A1 if your task is latency-critical, works with untrusted data, or may need to run code.

Is A1 production-ready?

Yes, in terms of API stability; the caveat is simply that A1 is new.

Can we get enterprise support?

Please don't hesitate to reach out (calebwin@stanford.edu).

🤝 Contributing

Awesome! See our Contributing Guide for details.

📄 MIT License

As it should be!

📜 Citation

Paper coming soon! Reach out if you'd like to contribute.
