
crawl4ai-client

Lightweight async Python client for Crawl4AI Docker server.

No browser dependencies required. Just httpx + pydantic (~2MB vs ~500MB for the full crawl4ai package).

Install

pip install crawl4ai-client

Quick Start

import asyncio
from crawl4ai_client import Crawl4aiDockerClient

async def main():
    async with Crawl4aiDockerClient(
        base_url="http://localhost:11235",
        api_token="your-token",  # optional
    ) as client:
        result = await client.crawl(["https://example.com"])
        print(result.raw_markdown)

asyncio.run(main())
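
This assumes a Crawl4AI Docker server is already listening on port 11235, e.g. one started with docker run -d -p 11235:11235 unclecode/crawl4ai (image name per the upstream Crawl4AI docs; adjust the tag and port to your setup).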

Features

  • Crawl single or multiple URLs
  • Stream results as they complete
  • Get markdown via the /md endpoint
  • Screenshots and schema retrieval (see the sketch under Usage)
  • Per-URL configs for batch crawling (crawler_configs list)
  • Async context manager with automatic cleanup

Usage

Basic crawl

from crawl4ai_client import Crawl4aiDockerClient, CrawlerRunConfig, CacheMode

async with Crawl4aiDockerClient(base_url="http://localhost:11235") as client:
    result = await client.crawl(
        ["https://example.com"],
        crawler_config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
    )
    print(result.raw_markdown)

Multiple URLs with per-URL configs

from crawl4ai_client import Crawl4aiDockerClient, CrawlerRunConfig

async with Crawl4aiDockerClient(base_url="http://localhost:11235") as client:
    results = await client.crawl(
        ["https://example.com", "https://httpbin.org/html"],
        crawler_configs=[
            CrawlerRunConfig(word_count_threshold=5),
            CrawlerRunConfig(word_count_threshold=50),
        ],
    )
    for r in results:
        print(f"{r.url}: {len(r.raw_markdown)} chars")

Streaming

async with Crawl4aiDockerClient(base_url="http://localhost:11235") as client:
    async for result in client.crawl_stream(["https://example.com", "https://httpbin.org/html"]):
        print(f"Got: {result.url}")

Markdown endpoint

async with Crawl4aiDockerClient(base_url="http://localhost:11235") as client:
    md = await client.get_markdown("https://example.com", content_filter="fit")
    print(md)
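
Screenshots and schema

Screenshots and schema retrieval are listed under Features but have no example above. Here is a minimal sketch assuming hypothetical method names (screenshot and get_schema) modeled on the other calls; they are not confirmed by this description, so check the package's source for the exact API.

async with Crawl4aiDockerClient(base_url="http://localhost:11235") as client:
    # Assumed method name: fetch a screenshot of the rendered page as bytes.
    png = await client.screenshot("https://example.com")
    with open("example.png", "wb") as f:
        f.write(png)

    # Assumed method name: retrieve the server's config schema.
    schema = await client.get_schema()
    print(schema)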

Why this package?

The full crawl4ai package installs 34+ dependencies (~500MB), including Playwright, browsers, numpy, and litellm. If you're running Crawl4AI as a Docker service and only need the client, this package gives you the same Crawl4aiDockerClient with just two dependencies: httpx and pydantic.

Compatibility

This client is compatible with Crawl4AI Docker server v0.8.x+. The config classes (BrowserConfig, CrawlerRunConfig) produce the same serialized format as the full library.
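
As a sketch of that claim: a BrowserConfig can be built client-side and sent along with a crawl. The browser_config parameter below mirrors the full library's crawl() signature and is an assumption here, so verify it against this package's API.

from crawl4ai_client import BrowserConfig, Crawl4aiDockerClient, CrawlerRunConfig

async with Crawl4aiDockerClient(base_url="http://localhost:11235") as client:
    # browser_config= is assumed to match the full crawl4ai signature.
    result = await client.crawl(
        ["https://example.com"],
        browser_config=BrowserConfig(headless=True),
        crawler_config=CrawlerRunConfig(word_count_threshold=10),
    )
    print(result.raw_markdown)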

License

Apache 2.0 — based on crawl4ai by unclecode.
