# lambda-pipe

Compose deterministic pipelines for LLM tasks using typed combinators.
```python
from lambda_pipe import split, map, reduce
from lambda_pipe.models import openrouter

summarize = (
    split(8)
    | map("Summarize this section:\n{chunk}")
    | reduce("Synthesize:\n{all}")
)

result = await summarize.run(document, openrouter("google/gemini-3-flash-preview"))
```
Nine LLM calls (eight parallel map calls plus one reduce), done. You know the upper bound before you run it.
## Why

Most LLM frameworks let the model decide what to do next. That means you can't bound cost, you can't guarantee it finishes, and you need a big model to get good results.

lambda-pipe flips it: you write the control flow with combinators, and the LLM only handles small, bounded tasks. A recent paper showed this lets an 8B model match a 70B model.
## Install

```shell
uv add "lambda-pipe[openrouter]"
export OPENROUTER_API_KEY=your-key
```

Works with any model on OpenRouter (300+ models). Or use a provider directly:

```shell
uv add "lambda-pipe[openai]"     # OpenAI
uv add "lambda-pipe[anthropic]"  # Anthropic
uv add "lambda-pipe[google]"     # Google Gemini
```
## Combinators

There are 7. Pass a string to `map`, `reduce`, or `filter` to make an LLM call.

| Combinator | What it does | LLM? |
|---|---|---|
| `split(k)` | Chop text into k chunks | no |
| `peek(start, n)` | Grab a substring | no |
| `map(fn)` | Run fn on each chunk (parallel) | depends on fn |
| `filter(pred)` | Keep chunks where pred says "yes" | depends on pred |
| `reduce(fn)` | Combine everything into one | depends on fn |
| `concat()` | Join chunks back together | no |
| `cross()` | All pairs, cartesian product | no |

Compose them with `|`. That's the whole API.
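To make the composition model concrete, here is a minimal sketch of how pipe-style combinators can be built in plain Python. This is illustrative only, not lambda-pipe's implementation; the deterministic functions stand in for LLM calls, and the trailing underscores on `map_`/`reduce_` avoid shadowing Python builtins (the library itself exports `map` and `reduce` from its own namespace).

```python
# Sketch: typed combinators composed with `|`, assuming each stage is a
# function from a list of chunks to a list of chunks.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    fn: Callable[[list[str]], list[str]]

    def __or__(self, other: "Stage") -> "Stage":
        # Composing two stages yields a stage that runs them in sequence.
        return Stage(lambda chunks: other.fn(self.fn(chunks)))

    def run(self, text: str) -> list[str]:
        return self.fn([text])

def split(k: int) -> Stage:
    # Chop the single input string into k roughly equal chunks.
    def go(chunks: list[str]) -> list[str]:
        text = chunks[0]
        n = max(1, len(text) // k)
        return [text[i:i + n] for i in range(0, len(text), n)][:k]
    return Stage(go)

def map_(fn: Callable[[str], str]) -> Stage:
    # Apply fn to every chunk independently.
    return Stage(lambda chunks: [fn(c) for c in chunks])

def reduce_(fn: Callable[[str], str]) -> Stage:
    # Collapse all chunks into a single result.
    return Stage(lambda chunks: [fn("\n".join(chunks))])

pipeline = split(2) | map_(str.upper) | reduce_(lambda s: s)
print(pipeline.run("abcdef"))  # → ['ABC\nDEF']
```

Swapping the deterministic `fn` for an async model call is the library's job; the composition algebra stays the same.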
## Patterns

Summarize — split, summarize each chunk in parallel, synthesize:

```python
split(8) | map("Summarize:\n{chunk}") | reduce("Synthesize:\n{all}")
```

Search — split, toss irrelevant chunks, extract from what's left:

```python
(
    split(20)
    | filter("Relevant to '{query}'? yes/no\n{chunk}")
    | map("Extract answer:\n{chunk}")
    | reduce("Best answer:\n{all}")
)
```

Find contradictions — extract claims, pair them all up (free), check each pair:

```python
(
    split(10)
    | map("Extract claims:\n{chunk}")
    | cross()  # 45 pairs, zero LLM calls
    | map("Do these contradict?\n{chunk}")
    | reduce("Summarize:\n{all}")
)
```
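The 45 pairs in the comment above are pure combinatorics: `cross()` on 10 chunks yields C(10, 2) unordered pairs, so the pattern's worst-case call count can be worked out by hand before running anything:

```python
# Worked call count for the contradiction pattern (arithmetic only, no library).
from math import comb

k = 10                     # split(10) produces 10 chunks
extract_calls = k          # one map call per chunk to extract claims
pair_checks = comb(k, 2)   # cross() builds all unordered pairs for free
reduce_calls = 1           # one final synthesis call
total = extract_calls + pair_checks + reduce_calls
print(pair_checks, total)  # → 45 56
```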
## Planning

Call `.plan()` before `.run()` to get a static cost model. No API calls, just math.

For straight-line pipelines, the call count is exact. If you use model-gated branching like `filter()`, the plan is a worst-case upper bound.

```python
plan = pipeline.plan(text, model="google/gemini-3-flash-preview")
print(plan)
# Plan(
#   llm_calls=9
#   estimated_cost=$0.0009
#   recursion_depth=0
# )
```
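The arithmetic behind such a plan is simple to sketch. The function below is a hypothetical reimplementation of the counting idea, not lambda-pipe's planner: `map` and `filter` cost one call per live chunk (filter counted at its worst case, keeping every chunk), while `reduce` costs one call and collapses everything to a single chunk.

```python
# Sketch: worst-case LLM call count for a straight-line pipeline,
# given the split factor k and the ordered list of stage names.
def worst_case_calls(k: int, stages: list[str]) -> int:
    calls, chunks = 0, 1
    for stage in stages:
        if stage == "split":
            chunks = k                # restructuring only, no LLM calls
        elif stage in ("map", "filter"):
            calls += chunks           # one call per chunk (filter: upper bound)
        elif stage == "reduce":
            calls += 1                # one synthesis call
            chunks = 1                # output is a single chunk
    return calls

print(worst_case_calls(8, ["split", "map", "reduce"]))  # → 9
```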
## Recursion

Got a 200-page PDF? `.recursive()` auto-subdivides until chunks fit the model's context window.

```python
big = (
    split(8)
    | map("Summarize:\n{chunk}")
    | reduce("Synthesize:\n{all}")
).recursive(tau="auto")
```

`tau="auto"` picks the threshold from the model name. Built-in providers set it for you. For a custom callable, tag it first:

```python
from lambda_pipe import named_model

async def local_model(prompt: str) -> str:
    ...

result = await big.run(text, named_model("openai/o4-mini", local_model))
```

Or pass a token count directly.
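The subdivision idea itself fits in a few lines. The whitespace token count and halving strategy below are assumptions for illustration; lambda-pipe's actual thresholding follows the model's real tokenizer and context window.

```python
# Sketch: recursively halve text until every chunk is at most tau "tokens"
# (approximated here as whitespace-separated words).
def subdivide(text: str, tau: int) -> list[str]:
    words = text.split()
    if len(words) <= tau or len(words) <= 1:
        return [text]
    mid = len(words) // 2
    left, right = " ".join(words[:mid]), " ".join(words[mid:])
    return subdivide(left, tau) + subdivide(right, tau)

chunks = subdivide("one two three four five six seven eight", tau=3)
print(len(chunks))  # → 4  (each chunk is at most 3 words)
```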
## Examples

```shell
# summarize text
uv run python examples/summarize.py

# search a chemistry textbook
uv run python examples/search.py "What is nuclear fusion?"

# compare 5 models on the same query
uv run python examples/compare.py

# compare one-shot prompting vs a chunked pipeline
uv run python examples/pipeline_vs_one_shot.py
```
## Paper

Inspired by *The Y-Combinator for LLMs* (Roy et al., 2026). This is a small combinator runtime and planner, not a reproduction of the paper's full system.