Liquid

Zapier for AI agents. Connect to any API on the fly.


Your AI agent needs data from Shopify. Or Stripe. Or some internal ERP. With Liquid, the agent just says what it needs — Liquid discovers the API, maps the data, and delivers it. No pre-built connectors. No adapters to write. Integrations maintain themselves.

from liquid import Liquid
from liquid._defaults import InMemoryVault, InMemoryAdapterRegistry, CollectorSink

agent = Liquid(llm=my_llm, vault=InMemoryVault(), sink=CollectorSink(),
               registry=InMemoryAdapterRegistry())

# Agent says: "I need Shopify order data shaped like this"
adapter = await agent.get_or_create(
    url="https://api.shopify.com",
    target_model={"amount": "float", "date": "datetime", "customer": "string"},
    auto_approve=True,
)

# Fetch data — mapped to agent's model, ready to use
orders = await agent.fetch(adapter, "/orders")
# [{"amount": 99.0, "date": "2024-01-15", "customer": "alice@example.com"}, ...]

The Problem

AI agents need to connect to external services. Today, each service requires a pre-built connector — custom code for endpoints, auth, pagination, and data mapping. 50 services = 50 connectors. When an API changes, the connector breaks silently.

How Liquid Works

Agent: "I need Shopify orders"
    │
    ▼
┌─ Liquid ──────────────────────────────────────────────┐
│                                                        │
│  1. Registry check  →  Already connected? Return it.  │
│                                                        │
│  2. Discovery        →  MCP / OpenAPI / GraphQL /     │
│     (AI, once)          REST probe / Browser capture   │
│                                                        │
│  3. Field mapping   →  AI maps Shopify fields to      │
│     (AI, once)          agent's data model             │
│                                                        │
│  4. Fetch data      →  Deterministic, zero LLM calls  │
│     (code, always)                                     │
│                                                        │
│  5. API changed?    →  Auto-repair: diff → re-map     │
│     (self-healing)      only broken fields             │
└────────────────────────────────────────────────────────┘
    │
    ▼
Agent gets typed data in its own model

AI runs once during setup. After that — pure code, zero LLM calls, deterministic results.
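Step 4 is worth making concrete: once the AI has produced a field mapping, every subsequent fetch is a plain dictionary transform. A minimal sketch, assuming a Shopify-style mapping — the field names and the `apply_mapping` helper are illustrative, not Liquid's actual internals:

```python
# Illustrative only: a mapping produced once by the AI (step 3),
# then applied deterministically on every fetch (step 4), zero LLM calls.
FIELD_MAP = {
    "amount": "total_price",   # agent field -> provider field
    "date": "created_at",
    "customer": "email",
}

def apply_mapping(record: dict, field_map: dict) -> dict:
    """Project a raw provider record onto the agent's target model."""
    return {target: record.get(source) for target, source in field_map.items()}

raw = {"total_price": 99.0, "created_at": "2024-01-15", "email": "alice@example.com"}
print(apply_mapping(raw, FIELD_MAP))
```

Because this step is ordinary code, fetches are fast and reproducible regardless of which LLM did the one-time setup.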

Why Liquid

Any API, no connectors — Give it a URL, it figures out the rest. No YAML configs, no connector marketplace, no waiting for someone to build an integration.

Agent-native — Designed for AI agents, not humans clicking in a GUI. Programmatic API, async-first, typed results.

Self-healing integrations — repair_adapter() diffs schemas and re-maps only broken fields. Working mappings stay untouched. Agents don't break when APIs change.

Registry — First agent connects to Shopify → second agent reuses the same integration. No duplicate work.

Learning — Corrections improve future proposals. Connect Shopify for the 51st time → mapping is instant.

Zero runtime AI — Discovery and mapping use LLM once. Fetching data is pure Python — fast, cheap, predictable.
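The self-healing idea above reduces to a schema diff: compare the stored mapping's source fields against the provider's current schema, and send only the fields that vanished back to the AI for re-mapping. A hypothetical sketch (`broken_fields` is not Liquid's API, just the underlying logic):

```python
def broken_fields(field_map: dict, current_schema: set) -> list:
    """Return agent fields whose mapped provider field no longer exists."""
    return [target for target, source in field_map.items()
            if source not in current_schema]

field_map = {"amount": "total_price", "date": "created_at", "customer": "email"}
# Provider renamed `email` -> `customer_email`; the other fields are unchanged.
schema_after_change = {"total_price", "created_at", "customer_email"}

print(broken_fields(field_map, schema_after_change))  # only "customer" needs re-mapping
```

The two intact mappings never touch the LLM again, which keeps repairs cheap and non-destructive.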

Discovery: 5 Strategies, Cheapest First

Priority  Strategy  When it works                            AI needed?
1         MCP       Service publishes an MCP server          No
2         OpenAPI   Has /openapi.json or /swagger.json       No
3         GraphQL   Has /graphql with introspection          No
4         REST      Heuristic REST API without spec          Yes (once)
5         Browser   No API at all — capture network traffic  Yes (once)

~70% of modern APIs have OpenAPI or GraphQL — Liquid doesn't even use AI for those.
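The cheapest-first ordering amounts to a simple strategy chain: try each discovery function in priority order and stop at the first success. The strategy functions below are stubs standing in for Liquid's real discoverers, just to show the control flow:

```python
def try_openapi(url: str):
    # Stub: pretend only this one service publishes /openapi.json.
    return {"strategy": "openapi"} if "stripe" in url else None

def try_graphql(url: str):
    return None  # stub: no introspection endpoint found

def try_rest_probe(url: str):
    return {"strategy": "rest"}  # AI-assisted fallback, always succeeds here

def discover(url: str):
    """Run strategies cheapest-first; return the first successful result."""
    for strategy in (try_openapi, try_graphql, try_rest_probe):
        result = strategy(url)
        if result is not None:
            return result
    raise RuntimeError(f"no discovery strategy worked for {url}")

print(discover("https://api.stripe.com")["strategy"])  # "openapi"
```

Because the spec-based strategies run first, the AI-assisted ones are only ever reached when no machine-readable spec exists.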

Installation

pip install liquid-api                # core
pip install "liquid-api[mcp]"         # + MCP server discovery
pip install "liquid-api[browser]"     # + Playwright browser discovery

Quick Example

from liquid import Liquid
from liquid._defaults import InMemoryVault, InMemoryAdapterRegistry, CollectorSink

# Setup — provide your LLM, credential store, data sink, and registry
liquid = Liquid(
    llm=my_llm_backend,           # Claude, GPT, Llama — any LLM
    vault=InMemoryVault(),         # or Postgres, AWS Secrets Manager, etc.
    sink=CollectorSink(),          # or your database, queue, webhook
    registry=InMemoryAdapterRegistry(),  # or Postgres, Redis, etc.
)

# Connect to any service — Liquid handles discovery and mapping
target_model = {"amount": "float", "currency": "string", "status": "string"}

# Connect to any service — Liquid handles discovery and mapping
adapter = await liquid.get_or_create(
    url="https://api.stripe.com",
    target_model=target_model,
    credentials={"access_token": "sk_live_..."},
    auto_approve=True,
)

# Fetch data — already mapped to your model
payments = await liquid.fetch(adapter, "/v1/charges")

# API changed? Liquid fixes it
repaired = await liquid.repair_adapter(adapter, target_model, auto_approve=True)

Extension Points

Liquid is a library, not a framework. Bring your own implementations:

Protocol         Purpose                  You provide
Vault            Credential storage       Postgres, AWS Secrets Manager, etc.
LLMBackend       AI provider              Claude, GPT, Llama, any LLM
DataSink         Where fetched data goes  Database, queue, webhook, file
AdapterRegistry  Integration storage      Postgres, Redis, file system
KnowledgeStore   Shared mapping patterns  Redis, central registry, or disabled
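Because these are plain protocols, plugging in your own backend means implementing the expected methods on an ordinary class. A sketch of a custom sink that appends mapped records to a JSONL file — the method name Liquid expects (`write` here) is an assumption; check the DataSink protocol in the source for the real signature:

```python
import json
import tempfile
from pathlib import Path

class JsonlFileSink:
    """Hypothetical DataSink: append each batch of mapped records to a JSONL file."""

    def __init__(self, path: str):
        self.path = Path(path)

    def write(self, records: list[dict]) -> None:
        # Append mode so successive fetches accumulate in one file.
        with self.path.open("a", encoding="utf-8") as f:
            for record in records:
                f.write(json.dumps(record) + "\n")

path = Path(tempfile.mkdtemp()) / "orders.jsonl"
sink = JsonlFileSink(str(path))
sink.write([{"amount": 99.0, "customer": "alice@example.com"}])
print(path.read_text(encoding="utf-8"))
```

The same shape works for a Vault or AdapterRegistry backed by Postgres or Redis: a small class satisfying the protocol, passed into the Liquid constructor.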

Liquid vs Alternatives

                   Liquid              Zapier              Airbyte               Nango             Custom code
Designed for       AI agents           Humans (GUI)        Data teams            Developers        Developers
New service        get_or_create(url)  Browse marketplace  Write YAML connector  Write TypeScript  Write adapter
When API changes   Self-heals          Breaks silently     Update connector      Update code       Debug manually
Runtime AI calls   Zero                N/A                 Zero                  Zero              N/A
Integration reuse  Registry            Per-account         Per-deployment        Per-deployment    None
License            AGPL-3.0            Proprietary         ELv2                  AGPL-3.0          Yours

Open Source + Commercial

Liquid OSS (this repo, AGPL-3.0) — the engine. Discovery, mapping, fetching, auto-repair. You run it, you own it.

Liquid Cloud (coming soon) — hosted runtime. Pre-built integrations for 100+ services, shared knowledge base, health monitoring dashboard, managed credentials. For teams that want it to just work.

Documentation

Contributing

We welcome contributions! Check out our contributing guide and browse good first issues.

License

AGPL-3.0. Commercial licenses available — contact us.
