Lean API Platform -- Token-efficient API specs for AI agents
Agent-Native API specs. Verified, compressed, ready to install.
Website · Registry · Benchmarks · Docs
Without API documentation, LLM agents hallucinate endpoints, invent parameters, and guess auth flows -- scoring just 0.399 accuracy in blind tests.
LAP fixes this. One command gives your agent a verified, agent-native API spec -- jumping accuracy to 0.860. And because LAP specs are up to 10x smaller than raw OpenAPI, you also save 35% on cost and run 29% faster.
Not minification -- a purpose-built compiler with its own grammar.
Proven in 500 blind runs across 50 APIs
LAP Lean scored 0.851 (vs 0.825 raw) while using 35% less cost and 29% less time -- same accuracy, far fewer tokens.
Full benchmark report (500 runs, 50 specs, 5 formats) · Benchmark methodology and data
Quick Start
# Search the registry for an API
npx @lap-platform/lapsh search payment
# Download a spec
npx @lap-platform/lapsh get stripe-com -o stripe.lap
# Install a pre-compiled skill (drops into ~/.claude/skills/)
npx @lap-platform/lapsh skill-install stripe-com
# Or compile your own spec
npx @lap-platform/lapsh compile api.yaml --lean
Use as an Agent Skill
Install the LAP skill so your agent can search, compile, and manage APIs automatically:
Claude Code:
cp -r skills/lap ~/.claude/skills/lap
OpenClaw: install from ClawHub or copy manually:
cp -r skills/lap ~/.openclaw/skills/lap
Once installed, agents auto-trigger the skill when working with APIs -- or invoke it directly with /lap. You can also install individual API skills for specific integrations:
npx @lap-platform/lapsh skill-install stripe-com
# Agent now knows the full Stripe API
Want to get listed? Register as a verified publisher and share your specs and skills with the registry.
# Install globally (npm or pip)
npm install -g @lap-platform/lapsh
pip install lapsh
What You Get
- 📦 Registry — browse and install 1500+ pre-compiled specs at lap.sh
- 🗜️ 5.2× median compression on OpenAPI, up to 39.6× on large specs — 35% cheaper, 29% faster (benchmarks)
- 📐 Typed contracts — enum(a|b|c), str(uuid), int=10 prevent agent hallucination
- 🔌 6 input formats — OpenAPI, GraphQL, AsyncAPI, Protobuf, Postman, Smithy
- 🎯 Zero information loss — every endpoint, param, and type constraint preserved
- 🔁 Round-trip — convert back to OpenAPI with lapsh convert
- 🤖 Skill generation — lapsh skill creates agent-ready skills from any spec
- 🔗 Integrations — LangChain, Context Hub, Python/TypeScript SDKs
How It Works
Five compression stages, each targeting a different source of token waste:
| Stage | What it does | Savings |
|---|---|---|
| Structural removal | Strip YAML scaffolding — paths:, requestBody:, schema: wrappers vanish | ~30% |
| Directive grammar | @directives replace nested structures with flat, single-line declarations | ~25% |
| Type compression | type: string, format: uuid → str(uuid) | ~10% |
| Redundancy elimination | Shared fields extracted once via @common_fields and @type | ~20% |
| Lean mode | Strip descriptions — LLMs infer meaning from well-named parameters | ~15% |
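To build intuition for the type-compression stage, here is a toy sketch of collapsing verbose OpenAPI type declarations into compact tokens like str(uuid). This is an illustration under assumptions, not the lapsh compiler; the helper name compress_type is made up for this example.

```python
# Toy sketch of the "type compression" stage: map an OpenAPI schema
# fragment to a compact LAP-style type token. NOT the lapsh
# implementation -- an illustrative approximation only.

def compress_type(schema: dict) -> str:
    """Collapse an OpenAPI schema fragment into a compact type token."""
    if "enum" in schema:
        return "enum(" + "|".join(map(str, schema["enum"])) + ")"
    base = {"string": "str", "integer": "int", "number": "num", "boolean": "bool"}
    t = base.get(schema.get("type", ""), schema.get("type", "any"))
    if "format" in schema:
        return f"{t}({schema['format']})"
    if "default" in schema:
        return f"{t}={schema['default']}"
    return t

compress_type({"type": "string", "format": "uuid"})  # -> "str(uuid)"
compress_type({"enum": ["a", "b", "c"]})             # -> "enum(a|b|c)"
compress_type({"type": "integer", "default": 10})    # -> "int=10"
```

The savings come from replacing two or three YAML key-value pairs with a single short token per field.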
Benchmarks
1,500+ specs · 5,228 endpoints · 4.37M → 423K tokens
| Format | Specs | Median | Best |
|---|---|---|---|
| OpenAPI | 30 | 5.2× | 39.6× |
| Postman | 36 | 4.1× | 24.9× |
| Protobuf | 35 | 1.5× | 60.1× |
| AsyncAPI | 31 | 1.4× | 39.1× |
| GraphQL | 30 | 1.3× | 40.9× |
Verbose formats compress most — they carry the most structural overhead. Already-concise formats like GraphQL still benefit from type deduplication.
The Ecosystem
LAP is more than a compiler:
| Component | What | Command |
|---|---|---|
| Search | Find APIs in the registry | lapsh search payment |
| Get | Download a spec by name | lapsh get stripe-com |
| Skill Install | Install a pre-compiled skill | lapsh skill-install stripe-com |
| Compiler | Any spec → .lap | lapsh compile api.yaml |
| Skill Generator | Create agent-ready skills from any spec | lapsh skill api.yaml --install |
| API Differ | Detect breaking API changes | lapsh diff old.lap new.lap |
| Round-trip | Convert LAP back to OpenAPI | lapsh convert api.lap -f openapi |
| Publish | Share specs to the registry | lapsh publish api.yaml --provider acme |
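The kind of breaking-change detection the differ performs can be sketched in a few lines: removed endpoints and narrowed value sets are flagged as breaking. This is a hypothetical illustration in the spirit of lapsh diff, not its actual logic, and the data shapes below are invented for the example.

```python
# Toy breaking-change detector: an endpoint maps to {param: allowed values}.
# Removing an endpoint, or shrinking a param's allowed values, is breaking.
# Hypothetical logic, not the lapsh differ.
def breaking_changes(old: dict, new: dict) -> list[str]:
    issues = []
    for path, params in old.items():
        if path not in new:
            issues.append(f"removed endpoint: {path}")
            continue
        for name, allowed in params.items():
            kept = new[path].get(name)
            if kept is None or not set(allowed) <= set(kept):
                issues.append(f"narrowed or removed param: {path} {name}")
    return issues

old = {"/charges": {"status": ["succeeded", "pending", "failed"]}}
new = {"/charges": {"status": ["succeeded", "pending"]}}
breaking_changes(old, new)  # -> ["narrowed or removed param: /charges status"]
```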
Claude Code users: The lap skill is included -- your agent can search and install APIs directly. See skills/lap/SKILL.md.
Supported Formats
lapsh compile api.yaml # OpenAPI 3.x / Swagger
lapsh compile schema.graphql # GraphQL SDL
lapsh compile events.yaml # AsyncAPI
lapsh compile service.proto # Protobuf / gRPC
lapsh compile collection.json # Postman v2.1
lapsh compile model.smithy # AWS Smithy
Format is auto-detected. Override with -f openapi|graphql|asyncapi|protobuf|postman|smithy.
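Auto-detection along these lines is straightforward to reason about: file extension first, content sniffing for ambiguous YAML/JSON. The sketch below is a guess at how such detection could work, not lapsh internals; the markers it sniffs for are assumptions.

```python
# Illustrative format auto-detection: extension first, then content
# sniffing for ambiguous .yaml/.json files. Hypothetical, not lapsh code.
from pathlib import Path

EXT_MAP = {
    ".graphql": "graphql",
    ".proto": "protobuf",
    ".smithy": "smithy",
}

def detect_format(path: str, text: str = "") -> str:
    ext = Path(path).suffix.lower()
    if ext in EXT_MAP:
        return EXT_MAP[ext]
    # YAML/JSON extensions are ambiguous between formats: sniff content.
    if "asyncapi" in text:
        return "asyncapi"
    if '"_postman_id"' in text or "schema.getpostman.com" in text:
        return "postman"
    return "openapi"  # default for openapi:/swagger: documents

detect_format("service.proto")                   # -> "protobuf"
detect_format("events.yaml", "asyncapi: 2.6.0")  # -> "asyncapi"
```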
Integrations
# LangChain
from lap.middleware import LAPDocLoader
docs = LAPDocLoader("stripe.lap").load()
LangChain, Context Hub, and Python/TypeScript SDKs. See integration docs.
FAQ
Why do agents hallucinate API calls?
Because they have no way to find the spec, and even if they could, it's a million tokens of YAML written for humans. Agents without specs score 0.399 accuracy -- wrong 60% of the time. They hallucinate endpoint paths, send invalid types, and miss auth. Give them a LAP spec and accuracy jumps to 0.860. The spec doesn't make the agent smarter. It makes guessing unnecessary.
How is this different from OpenAPI?
LAP doesn't replace OpenAPI — it compiles FROM it. Like TypeScript → JavaScript: you keep your OpenAPI specs, your existing tooling, everything. LAP adds a compilation step for the LLM runtime.
How is this different from MCP?
MCP defines how agents discover and invoke tools (the plumbing). LAP compresses the documentation those tools expose (the payload). They're complementary — LAP can compress MCP tool schemas.
Why not just minify the JSON?
Minification removes whitespace — that's ~10% savings. LAP performs semantic compression: flattening nested structures, deduplicating schemas, compressing type declarations, and stripping structural overhead. That's 5-40× savings. Different class of tool.
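The gap between the two approaches is easy to demonstrate on a toy spec: whitespace removal barely helps when the same schema is repeated inline, while extracting it once shrinks the document dramatically. The numbers below are illustrative, not the benchmark results.

```python
# Minification vs. semantic compression on a toy spec: ten endpoints all
# embedding the same Address schema inline. Whitespace removal saves a
# little; deduplicating the schema behind a $ref saves far more.
import json

address = {"type": "object",
           "properties": {"street": {"type": "string"},
                          "city": {"type": "string"}}}
# The same schema repeated inline under ten endpoints:
spec = {f"/endpoint{i}": {"requestBody": address} for i in range(10)}

pretty   = json.dumps(spec, indent=2)
minified = json.dumps(spec, separators=(",", ":"))  # whitespace only
deduped  = json.dumps({"$defs": {"Address": address},
                       **{p: {"requestBody": {"$ref": "#/$defs/Address"}}
                          for p in spec}}, separators=(",", ":"))

print(len(pretty), len(minified), len(deduped))  # deduped is far smaller
```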
What about prompt caching?
Use both. Compress with LAP first, then cache the compressed version. LAP reduces the first-call cost and frees context window space. Caching reduces repeated-call cost. They stack.
Will LLMs understand this format?
Yes. LAP uses conventions LLMs already know — @directive syntax, {name: type} notation, HTTP methods and paths. In blind tests, agents produce identical correct output from LAP and raw OpenAPI. The typed contracts actually reduce hallucination.
What if token costs keep dropping?
Cost is the least important argument. The core value is typed contracts: enum(succeeded|pending|failed) prevents hallucinated values regardless of token price. Plus: formal grammar (parseable by code, not just LLMs), schema diffing, and faster inference from fewer input tokens.
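A typed contract is also checkable by plain code, with no LLM involved. The sketch below parses one enum contract and rejects values outside it; the parser is a toy for this single contract shape, not the LAP grammar implementation.

```python
# Toy validator for a contract like enum(succeeded|pending|failed):
# any value outside the declared set is rejected deterministically.
# Not the LAP grammar -- an illustrative single-case parser.
import re

def validate_enum(contract: str, value: str) -> bool:
    m = re.fullmatch(r"enum\(([^)]*)\)", contract)
    if not m:
        raise ValueError(f"not an enum contract: {contract}")
    return value in m.group(1).split("|")

validate_enum("enum(succeeded|pending|failed)", "pending")    # -> True
validate_enum("enum(succeeded|pending|failed)", "completed")  # -> False
```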
Contributing
See CONTRIBUTING.md. The test suite has 1,101 tests across 190+ example specs in 6 formats.
git clone https://github.com/Lap-Platform/lap.git
cd lap
pip install -e ".[dev]"
pytest
License
Apache 2.0 — See NOTICE for attribution.
lap.sh · Built by the LAP team
File details

Details for the file lapsh-0.4.8.tar.gz.

File metadata

- Download URL: lapsh-0.4.8.tar.gz
- Size: 139.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 11528bf199e1e0af3ebe454967f3da5af033f4ffff845a4db03d8bcd11d3d210 |
| MD5 | 49b47b67efda0d081954daa81dbd7189 |
| BLAKE2b-256 | 2c54009c661cb496180dd5add9ae4cb8ea5965446b5f7c86affa4b7b4d73c2af |
Provenance

The following attestation bundles were made for lapsh-0.4.8.tar.gz:

Publisher: publish-pypi.yml on Lap-Platform/LAP

Statement:

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: lapsh-0.4.8.tar.gz
- Subject digest: 11528bf199e1e0af3ebe454967f3da5af033f4ffff845a4db03d8bcd11d3d210
- Sigstore transparency entry: 1109326982
- Permalink: Lap-Platform/LAP@be2ad2fcd6b99957222608eda216f1c674270d67
- Branch / Tag: refs/tags/v0.4.8
- Owner: https://github.com/Lap-Platform
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@be2ad2fcd6b99957222608eda216f1c674270d67
- Trigger Event: release
File details

Details for the file lapsh-0.4.8-py3-none-any.whl.

File metadata

- Download URL: lapsh-0.4.8-py3-none-any.whl
- Size: 89.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | f0176b97a15e1dedd1a59e9bd2355babf8e5e3ddc825a48912fec7b89b803105 |
| MD5 | 0331cc9933df8711a48ac225ee8a9d6f |
| BLAKE2b-256 | d42c3696f3eeaccb80004249a9904fe19f3f56dc8c2faa569638cc2a4aa9b8cc |
Provenance

The following attestation bundles were made for lapsh-0.4.8-py3-none-any.whl:

Publisher: publish-pypi.yml on Lap-Platform/LAP

Statement:

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: lapsh-0.4.8-py3-none-any.whl
- Subject digest: f0176b97a15e1dedd1a59e9bd2355babf8e5e3ddc825a48912fec7b89b803105
- Sigstore transparency entry: 1109326984
- Permalink: Lap-Platform/LAP@be2ad2fcd6b99957222608eda216f1c674270d67
- Branch / Tag: refs/tags/v0.4.8
- Owner: https://github.com/Lap-Platform
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@be2ad2fcd6b99957222608eda216f1c674270d67
- Trigger Event: release