Zapier for AI agents. Connect to any API on the fly.
Liquid
The agent-native API fabric.
Liquid is the transformation layer between AI agents and any HTTP API — actively optimizing for the constraints real agents hit: token budgets, context windows, cross-API cognitive load, recovery from failures, and predictable cost.
Why agents need more than a tool wrapper
Shipping an agent against real APIs surfaces problems most HTTP clients ignore:
- A single list_orders response eats 50k tokens of context
- Stripe, Shopify, and Square represent "money" in three different shapes
- A 401 from the API returns a string — the agent has to guess how to recover
- Rate limits trip without warning; one agent run costs another one's budget
- The agent has no way to ask "how much will this call cost me?" before making it
Liquid addresses each of these with a concrete primitive. Everything below is shipped and on PyPI.
What Liquid gives your agent
Context-budget control
# Search server-side instead of fetch-then-filter — 10-100x token savings
orders = await liquid.search(
    adapter, "/orders",
    where={"total_cents": {"$gt": 10000}, "status": "paid"},
    limit=20,
)

# Aggregate without ever seeing records
stats = await liquid.aggregate(
    adapter, "/orders",
    group_by="status",
    agg={"total_cents": "sum", "id": "count"},
)

# Full-text search across records (BM25-lite, ranked)
hits = await liquid.text_search(adapter, "/tickets", "shipping delay")

# Fetch only what fits in your budget
data = await liquid.fetch(adapter, "/orders", max_tokens=2000)
# -> _meta.truncated=True, _meta.truncated_at="item_42"

# Identity-plus-two-fields mode for context-constrained runs
data = await liquid.fetch(adapter, "/customers", verbosity="terse")

# Walk pages until a predicate matches, then stop
result = await liquid.fetch_until(
    adapter, "/orders",
    predicate={"customer_email": {"$eq": "vip@co.com"}},
    max_pages=20,
)
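The where and predicate arguments use a small Mongo-style condition DSL. A toy evaluator (illustrative only, not Liquid's implementation; it assumes bare values mean equality, as in the search example above) shows the semantics:

```python
# Toy evaluator for the Mongo-style condition DSL shown above (illustrative)
OPS = {
    "$eq": lambda a, b: a == b,
    "$gt": lambda a, b: a is not None and a > b,
    "$lt": lambda a, b: a is not None and a < b,
}

def matches(record: dict, where: dict) -> bool:
    """True if every field condition holds; bare values are treated as $eq."""
    for field, cond in where.items():
        cond = cond if isinstance(cond, dict) else {"$eq": cond}
        for op, want in cond.items():
            if not OPS[op](record.get(field), want):
                return False
    return True
```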
Cross-API normalization
liquid = Liquid(..., normalize_output=True)
# Stripe: {amount: 1000, currency: "usd"}
# PayPal: {value: "10.00", currency_code: "USD"}
# Square: {amount: 1000, currency: "USD"}
# All three normalize to:
Money(amount_cents=1000, currency="USD", amount_decimal=Decimal("10.00"))
Unix timestamps, ISO 8601, and RFC 2822 dates all collapse to datetime in UTC. Pagination envelopes ({data: [...]} / {results: [...]} / {items: [...]} / Link headers) flatten to a single PaginationEnvelope. ID fields normalize across id / _id / uid / uuid / *_id conventions.
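The date collapsing can be sketched with stdlib parsing alone (a simplified illustration, not Liquid's internal code; naive ISO strings without an offset would fall back to local time here):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def normalize_date(value):
    """Collapse unix epoch, ISO 8601, and RFC 2822 into a UTC datetime."""
    if isinstance(value, (int, float)):                  # unix timestamp
        return datetime.fromtimestamp(value, tz=timezone.utc)
    try:                                                 # ISO 8601
        dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
    except ValueError:                                   # RFC 2822
        dt = parsedate_to_datetime(value)
    return dt.astimezone(timezone.utc)
```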
Intent layer — canonical operations across APIs
# Same intent, any supported API
await liquid.execute_intent("charge_customer", {
    "customer_id": "cus_xyz",
    "amount_cents": 9999,
    "currency": "USD",
})
# Works on Stripe, Braintree, Square, Adyen — one agent mental model
Ten canonical intents ship today: charge_customer, refund_charge, create_customer, update_customer, list_orders, cancel_order, send_email, post_message, create_ticket, close_ticket.
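Conceptually, each intent resolves to a provider-specific operation at runtime. A hypothetical lookup table shows the idea (the endpoint paths and table shape here are illustrative assumptions, not Liquid's actual mapping):

```python
# Hypothetical intent -> provider operation table (paths are illustrative)
INTENT_MAP = {
    "charge_customer": {
        "stripe": ("POST", "/v1/payment_intents"),
        "square": ("POST", "/v2/payments"),
    },
}

def resolve(intent: str, provider: str) -> tuple[str, str]:
    """Look up the concrete HTTP operation behind a canonical intent."""
    return INTENT_MAP[intent][provider]
```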
Structured recovery — agents self-heal without parsing text
try:
    await liquid.fetch(adapter, "/orders")
except LiquidError as e:
    if e.recovery and e.recovery.next_action:
        # Agent dispatches the action directly — zero text parsing
        await agent.call_tool(
            e.recovery.next_action.tool,
            e.recovery.next_action.args,
        )
Every Fetcher / Executor error carries a Recovery with next_action: ToolCall, retry_safe: bool, and retry_after_seconds where applicable. 401 → store_credentials. 404/410 → repair_adapter. 429 → retry with retry_after_seconds.
Predictable cost — know before you call
est = await liquid.estimate_fetch(adapter, "/orders")
# FetchEstimate(
#     expected_items=250, expected_tokens=52_000, expected_cost_credits=1,
#     expected_latency_ms=800, confidence="high", source="empirical"
# )
if est.expected_tokens < my_budget:
    data = await liquid.fetch(adapter, "/orders")
Every tool emitted by to_tools() also carries a metadata block with cost_credits, typical_latency_ms, cached, cache_ttl_seconds, idempotent, side_effects, expected_result_size, and related_tools so agents can reason about which tool to pick.
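An agent's scaffolding can use that metadata block to pre-filter candidates. A minimal sketch, assuming tools are dicts carrying a metadata key with the fields listed above:

```python
def pick_tool(tools: list[dict], budget_credits: int):
    """Pick the lowest-latency tool that fits the credit budget, else None."""
    affordable = [t for t in tools
                  if t["metadata"]["cost_credits"] <= budget_credits]
    return min(affordable,
               key=lambda t: t["metadata"]["typical_latency_ms"],
               default=None)
```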
Ambient state — no memorization needed
tools = await liquid.to_tools(format="anthropic")
# Auto-includes: liquid_check_quota, liquid_list_adapters, liquid_health_check,
# liquid_check_rate_limit, liquid_get_adapter_info, liquid_estimate_fetch,
# liquid_aggregate, liquid_text_search, liquid_search_nl, liquid_fetch_until,
# liquid_fetch_changes_since
The agent answers "how much budget do I have left?" by calling a tool rather than tracking state in its own context window, where it is unreliable.
Response _meta — provenance and truncation signals
liquid = Liquid(..., include_meta=True)
data = await liquid.fetch(adapter, "/orders")
# {
#   "data": [...],
#   "_meta": {
#     "source": "cache", "age_seconds": 180, "fresh": True,
#     "truncated": False, "total_count": 523, "next_cursor": "...",
#     "adapter": "shopify", "endpoint": "/orders",
#     "fetched_at": "2026-04-20T10:00:00Z", "confidence": 0.93
#   }
# }
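A consumer can gate on those fields before trusting a response. An illustrative guard using only the _meta keys shown above:

```python
def is_usable(resp: dict, max_age_seconds: int = 300) -> bool:
    """Reject truncated data and stale cache hits based on the _meta block."""
    meta = resp.get("_meta", {})
    if meta.get("truncated"):
        return False
    if meta.get("source") == "cache" and meta.get("age_seconds", 0) > max_age_seconds:
        return False
    return True
```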
Measured impact
Deterministic benchmarks on realistic agent tasks (500-order, 200-ticket fixtures, mocked HTTP) — reproducible via python -m benchmarks.run:
| Task | Metric | Baseline | With Liquid | Delta |
|---|---|---|---|---|
| Find 10 orders over $100 | tokens | 75,482 | 1,519 | −98% |
| Revenue by status (aggregate) | tokens | 75,482 | 115 | −100% |
| Fetch customer (id+email only) | tokens | 424 | 12 | −97% |
| Recover from 401 | structured next_action | no | yes | — |
| Find the shipping ticket | tokens | 14,588 | 154 | −99% |
| Stripe↔PayPal consistency | field overlap | 0.11 | 1.00 | +9× |
| Skip wasted call via estimate | tokens | 14,943 | 0 | −100% |
| max_tokens=2000 budget cap | tokens | 14,943 | 1,999 | −87% |
Full methodology + per-task breakdown: benchmarks/RESULTS.md.
Install
pip install liquid-api
# Framework integrations
pip install liquid-langchain # LangChain / LangGraph
pip install liquid-crewai # CrewAI
Quick start — LangGraph agent with Shopify
from liquid import Liquid, InMemoryCache, RateLimiter
from liquid._defaults import InMemoryVault, InMemoryAdapterRegistry, CollectorSink
from liquid_langchain import LiquidToolkit
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
liquid = Liquid(
    llm=my_llm,
    vault=InMemoryVault(),
    sink=CollectorSink(),
    registry=InMemoryAdapterRegistry(),
    cache=InMemoryCache(),
    rate_limiter=RateLimiter(),
    normalize_output=True,  # cross-API canonical shapes
    include_meta=True,      # _meta block on every response
)

adapter = await liquid.get_or_create(
    "https://api.shopify.com",
    target_model={"id": "str", "total_cents": "int", "customer_email": "str"},
    credentials={"access_token": "shpat_..."},
    auto_approve=True,
)

tools = LiquidToolkit(adapter, liquid).get_tools()
agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools)

result = await agent.ainvoke({
    "messages": [("user", "Find 5 recent orders over $100 from VIP customers")],
})
The agent's tools come with rich descriptions (WHEN to use, NOT FOR what, return shape, cost), structured recovery on every error, and server-side search so it never pulls 500 orders to find 5.
Framework support
# Anthropic tool use
tools = adapter.to_tools(format="anthropic")
# OpenAI function calling
tools = adapter.to_tools(format="openai")
# MCP (Claude Desktop, Cursor)
tools = adapter.to_tools(format="mcp")
# CrewAI
from liquid_crewai import LiquidCrewToolkit
tools = LiquidCrewToolkit(adapter, liquid).get_tools()
# Opt out of metadata block on tools
tools = adapter.to_tools(format="openai", include_metadata=False)
Architecture
Setup (once per API):
  URL → DISCOVERY (MCP → OpenAPI → GraphQL → REST heuristic → Browser)
      → APISchema → AI MAPPING → AdapterConfig

Runtime (every call):
  Agent ↔ FETCH / EXECUTE / SEARCH / AGGREGATE → deterministic HTTP + transforms
  • Query DSL (server-side filter)
  • Output normalization
  • Verbosity / max_tokens / _meta
  • Structured recovery
  • Rate-limit-aware token bucket
  • Response cache (Cache-Control aware)
  • Empirical probing data (Cloud)
AI participates at setup only. Runtime is pure HTTP with transforms — no LLM per call, predictable cost, reproducible behavior. The agent UX layer on top doesn't call an LLM either (except search_nl, which caches compilations).
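The rate-limit-aware token bucket in the runtime path is a standard construction. A generic sketch (not Liquid's RateLimiter):

```python
import time

class TokenBucket:
    """Generic token bucket: refill at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Spend `cost` tokens if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```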
Discovery pipeline
| Method | Where it looks | Cost |
|---|---|---|
| MCP | /mcp | Low (native protocol) |
| OpenAPI | /openapi.json, /swagger.json, /v3/api-docs | Low |
| GraphQL | /graphql (introspection) | Low |
| REST heuristic | common paths + LLM interpretation | Medium |
| Browser | Playwright capturing network | High |
2,500+ APIs are pre-discovered and pre-mapped in the global catalog — most popular services connect with zero discovery cost.
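A naive version of that probe order, expanding only the well-known paths from the table (illustrative; the real pipeline also falls back to LLM interpretation and a browser):

```python
from urllib.parse import urljoin

# Well-known discovery paths, cheapest first (from the table above)
CANDIDATES = ["/mcp", "/openapi.json", "/swagger.json", "/v3/api-docs", "/graphql"]

def candidate_urls(base_url: str) -> list[str]:
    """Expand a base URL into discovery probe targets, cheapest first."""
    return [urljoin(base_url, path) for path in CANDIDATES]
```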
Protocols
Every component is a swappable Protocol:
from liquid.protocols import (
    Vault, LLMBackend, DataSink, KnowledgeStore,
    AdapterRegistry, CacheStore,
)
In-memory implementations ship for all of them. liquid-cloud provides PostgresVault, RedisCache, etc. for hosted deployments.
Ecosystem
| Package | Purpose |
|---|---|
| liquid-api | Core library (this repo) |
| liquid-langchain | LangChain / LangGraph integration |
| liquid-crewai | CrewAI integration |
| liquid-cli | liquid init quickstart |
| Liquid Cloud | Hosted service + global catalog + empirical probing |
Examples
- examples/langchain_agent.py — LangGraph ReAct agent
- examples/anthropic_tools.py — Claude tool-use loop
- examples/openai_agents.py — OpenAI Assistants
Comparison
| Feature | Liquid | Zapier | LangChain tool | DIY |
|---|---|---|---|---|
| API discovery | yes | no | no | no |
| Server-side search / aggregate | yes | no | no | partial |
| Cross-API output normalization | yes | partial | no | no |
| Structured recovery with next_action | yes | no | no | no |
| Intent layer (canonical operations) | yes | partial | no | no |
| Pre-flight cost estimate | yes | no | no | no |
| Self-healing on schema drift | yes | no | no | no |
| MCP + A2A + LangChain + CrewAI native | yes | no | partial | no |
| Open source | yes | no | yes | n/a |
Documentation
- Quickstart
- Architecture
- Extending — implement your own Vault / LLM / Sink
- Write operations spec
- Benchmarks — quantitative evidence for each feature
License
AGPL-3.0. Commercial license available for closed-source deployments — contact hello@ertad.com.
Contributing
Community