
llama-cloud-fake

A respx-backed fake of the LlamaCloud v2 HTTP API for tests.

Enter FakeLlamaCloudServer as a context manager and it intercepts outbound calls to api.cloud.llamaindex.ai, serving deterministic canned responses for the namespaces your workflow actually exercises (files, extract, parse, pipelines, classify, agent_data, configurations, sheets, split). The real llama_cloud client drives the fake unchanged — point it at the fake by env var and run your workflow end-to-end without hitting the real API.

Install

pip install llama-cloud-fake

Test-time dependency — add it to your dev group (e.g. [dependency-groups].dev in pyproject.toml).
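With PEP 735 dependency groups, the entry might look like this (a sketch — the pytest plugins mirror the quick start below; adjust to your project):

```toml
# pyproject.toml — declare the fake as a test-only dependency
[dependency-groups]
dev = [
    "llama-cloud-fake",
    "pytest",
    "pytest-asyncio",
]
```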

Quick start

import pytest
from llama_cloud import AsyncLlamaCloud
from llama_cloud_fake import FakeLlamaCloudServer
from pydantic import BaseModel

class Receipt(BaseModel):
    merchant: str
    total: float

@pytest.fixture(autouse=True)
def _fake_env(monkeypatch):
    monkeypatch.setenv("LLAMA_CLOUD_API_KEY", "unit-test-key")
    monkeypatch.setenv("LLAMA_CLOUD_BASE_URL", FakeLlamaCloudServer.DEFAULT_BASE_URL)

@pytest.mark.asyncio
async def test_extract(tmp_path):
    sample = tmp_path / "receipt.txt"
    sample.write_text("Merchant: Lunar Bistro\nTotal: 123.45")

    with FakeLlamaCloudServer():
        client = AsyncLlamaCloud(api_key="unit-test-key")
        file = await client.files.create(file=sample, purpose="extract")
        job = await client.extract.run(
            file_input=file.id,
            configuration={
                "data_schema": Receipt.model_json_schema(),
                "tier": "cost_effective",
            },
        )
        assert job.status == "COMPLETED"
        assert "merchant" in job.extract_result

Responses are derived deterministically from the request (file hash, schema hash), so repeated calls return the same payloads. Unmatched requests fail respx assertions.
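The package's internals aren't documented here, but the idea behind request-derived determinism can be sketched in a few lines: hash the request inputs and build the fake payload from the digest, so identical requests always yield identical responses. The function name and payload shape below are illustrative, not the fake's actual implementation:

```python
import hashlib
import json

def fake_extract_payload(file_bytes: bytes, data_schema: dict) -> dict:
    """Derive a stable fake response from the request itself."""
    digest = hashlib.sha256(
        file_bytes + json.dumps(data_schema, sort_keys=True).encode()
    ).hexdigest()
    # Same file + same schema -> same job id and payload, every run.
    return {"job_id": f"job-{digest[:12]}", "status": "COMPLETED"}

a = fake_extract_payload(b"Merchant: Lunar Bistro", {"type": "object"})
b = fake_extract_payload(b"Merchant: Lunar Bistro", {"type": "object"})
assert a == b  # identical requests, identical payloads
```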

Overriding responses

Each namespace exposes stub_* methods to inject a canned result for specific requests. Later stubs take precedence; requests that match no stub fall through to the deterministic default.

from llama_cloud_fake import (
    FakeLlamaCloudServer,
    FileMatcher,
    RequestMatcher,
    SchemaMatcher,
)

with FakeLlamaCloudServer() as fake:
    fake.extract.stub_run(
        RequestMatcher(
            file=FileMatcher(filename="receipt.txt"),
            schema=SchemaMatcher(model=Receipt),
        ),
        data={"merchant": "Lunar Bistro", "total": 123.45},
        status="COMPLETED",
    )
    # ...run your workflow; the extract.run call above now returns the stub.

RequestMatcher also accepts agent_id, project_id, organization_id, and a custom predicate: Callable[[httpx.Request], bool].
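The predicate hook gives full control when the built-in matchers don't fit. A minimal sketch of a predicate in the Callable[[httpx.Request], bool] shape — demonstrated here against a stand-in object rather than a live httpx.Request, and with a hypothetical extract URL, since only the attributes it reads (method, url, headers) matter:

```python
from types import SimpleNamespace

def only_extract_posts(request) -> bool:
    """Match POSTs to the extract namespace carrying our test key."""
    return (
        request.method == "POST"
        and "/extract/" in str(request.url)
        and request.headers.get("authorization") == "Bearer unit-test-key"
    )

# Stand-in for httpx.Request, for illustration only.
req = SimpleNamespace(
    method="POST",
    url="https://api.cloud.llamaindex.ai/extract/jobs",  # hypothetical path
    headers={"authorization": "Bearer unit-test-key"},
)
assert only_extract_posts(req)
```

You would then pass such a callable to RequestMatcher as its predicate argument.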

Preloading files

Skip the client.files.create round-trip in tests that just need a file id:

with FakeLlamaCloudServer() as fake:
    file_id = fake.files.preload(path="tests/fixtures/invoice.pdf")
    # pass file_id straight into extract / parse / classify calls

Source

The source for this package isn't hosted in a public repository. It's MIT-licensed and the wheel on PyPI ships the full source — unpack it and copy whatever you find useful.

Download files


Source Distribution

llama_cloud_fake-0.1.1.tar.gz (21.9 kB)

Uploaded Source

Built Distribution


llama_cloud_fake-0.1.1-py3-none-any.whl (30.1 kB)

Uploaded Python 3

File details

Details for the file llama_cloud_fake-0.1.1.tar.gz.

File metadata

  • Download URL: llama_cloud_fake-0.1.1.tar.gz
  • Size: 21.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: uv/0.11.7 (publish; Ubuntu 24.04 "noble"; CI)

File hashes

Hashes for llama_cloud_fake-0.1.1.tar.gz

  • SHA256: a72e66c41471b6b2dadc62b746f6075c57eed4a900986e8fa0319233cd97b13a
  • MD5: e15db1aec61473ecc04d05448bf605e6
  • BLAKE2b-256: 454761756fbc1de8ec1d2c13be9da9e690d812356575a384e58c6bc16dde6837


File details

Details for the file llama_cloud_fake-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: llama_cloud_fake-0.1.1-py3-none-any.whl
  • Size: 30.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: uv/0.11.7 (publish; Ubuntu 24.04 "noble"; CI)

File hashes

Hashes for llama_cloud_fake-0.1.1-py3-none-any.whl

  • SHA256: 17622db60cfb061d02aae113d58297ecfcf263e18650c46fc1f01fb86b1f74c7
  • MD5: 0e7071d6a9dc74076a085d69a330912a
  • BLAKE2b-256: cd53eca171455b9154143dd9bbd36ea99650302923db7ab447c3cde909127fd8

