🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper

Crawl4AI 0.3.0 Async Version 🕷️🤖

Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. 🆓🌐

Looking for the synchronous version? Check out README.sync.md.

Try it Now!

✨ Play around with this Open In Colab

✨ Visit our Documentation Website

✨ Check out the Demo

Features ✨

  • 🆓 Completely free and open-source
  • 🚀 Blazing fast performance, outperforming many paid services
  • 🤖 LLM-friendly output formats (JSON, cleaned HTML, markdown)
  • 🌍 Supports crawling multiple URLs simultaneously (see the sketch after this list)
  • 🎨 Extracts and returns all media tags (Images, Audio, and Video)
  • 🔗 Extracts all external and internal links
  • 📚 Extracts metadata from the page
  • 🔄 Custom hooks for authentication, headers, and page modifications before crawling
  • 🕵️ User-agent customization
  • 🖼️ Takes screenshots of the page
  • 📜 Executes multiple custom JavaScripts before crawling
  • 📊 Generates structured output without LLM using JsonCssExtractionStrategy
  • 📚 Various chunking strategies: topic-based, regex, sentence, and more
  • 🧠 Advanced extraction strategies: cosine clustering, LLM, and more
  • 🎯 CSS selector support for precise data extraction
  • 📝 Passes instructions/keywords to refine extraction
  • 🔒 Proxy support for enhanced privacy and access
  • 🔄 Session management for complex multi-page crawling scenarios
  • 🌐 Asynchronous architecture for improved performance and scalability

Installation 🛠️

Crawl4AI offers flexible installation options to suit various use cases. You can install it as a Python package or use Docker.

Using pip 🐍

Choose the installation option that best fits your needs:

Basic Installation

For basic web crawling and scraping tasks:

pip install crawl4ai

Installation with PyTorch

For advanced text clustering (includes CosineSimilarity cluster strategy):

pip install crawl4ai[torch]

Installation with Transformers

For text summarization and Hugging Face models:

pip install crawl4ai[transformer]

Installation with Synchronous Version

If you need the synchronous version using Selenium:

pip install crawl4ai[sync]

Installation with Cosine Similarity

For using the cosine similarity strategy:

pip install crawl4ai[cosine]

Full Installation

For all features:

pip install crawl4ai[all]

After installation, run the following command to install Playwright dependencies:

playwright install
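
If you only need Chromium (the browser the async crawler typically drives by default), Playwright's standard CLI lets you limit the download; this is stock Playwright behavior, not a Crawl4AI-specific flag:

playwright install chromium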

If you've installed the "torch", "transformer", or "all" options, it's recommended to run:

crawl4ai-download-models

Using Docker 🐳

# For Mac users (M1/M2)
docker build --platform linux/amd64 -t crawl4ai .
# For other users
docker build -t crawl4ai .
docker run -d -p 8000:80 crawl4ai

Using Docker Hub 🐳

docker pull unclecode/crawl4ai:latest
docker run -d -p 8000:80 unclecode/crawl4ai:latest

For more detailed installation instructions and options, please refer to our Installation Guide.

Quick Start 🚀

import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(url="https://www.nbcnews.com/business")
        print(result.markdown)

if __name__ == "__main__":
    asyncio.run(main())
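
Beyond markdown, the returned result exposes the other outputs listed under Features. A short sketch; the field names follow the CrawlResult model, and the exact dictionary keys for media and links are assumptions that may vary by release:

import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(url="https://www.nbcnews.com/business")
        if result.success:
            print(result.metadata)                                 # page metadata (title, description, ...)
            print(len(result.cleaned_html), "cleaned HTML chars")  # sanitized HTML
            print(len(result.media.get("images", [])), "images")   # assumed keys: images/videos/audios
            print(len(result.links.get("internal", [])), "internal links")  # assumed keys: internal/external

if __name__ == "__main__":
    asyncio.run(main())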

Advanced Usage 🔬

Executing JavaScript and Using CSS Selectors

import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        js_code = ["const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"]
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            js_code=js_code,
            css_selector="article.tease-card",
            bypass_cache=True
        )
        print(result.extracted_content)

if __name__ == "__main__":
    asyncio.run(main())

Using a Proxy

import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(verbose=True, proxy="http://127.0.0.1:7890") as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            bypass_cache=True
        )
        print(result.markdown)

if __name__ == "__main__":
    asyncio.run(main())
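
Since the proxy is fixed when the crawler is created, one simple rotation pattern is to construct a fresh crawler per proxy until a request succeeds. A minimal sketch, assuming the proxy endpoints below are placeholders for your own:

import asyncio
from crawl4ai import AsyncWebCrawler

async def crawl_with_proxies(url, proxies):
    # Cycle crawler instances, since each instance binds to one proxy.
    for proxy in proxies:
        async with AsyncWebCrawler(verbose=True, proxy=proxy) as crawler:
            result = await crawler.arun(url=url, bypass_cache=True)
            if result.success:
                return result
    return None

async def main():
    proxies = ["http://127.0.0.1:7890", "http://127.0.0.1:7891"]  # placeholder endpoints
    result = await crawl_with_proxies("https://www.nbcnews.com/business", proxies)
    print(result.markdown if result else "All proxies failed")

if __name__ == "__main__":
    asyncio.run(main())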

Extracting Structured Data with OpenAI

import os
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from pydantic import BaseModel, Field

class OpenAIModelFee(BaseModel):
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
    output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url='https://openai.com/api/pricing/',
            word_count_threshold=1,
            extraction_strategy=LLMExtractionStrategy(
                provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY'), 
                schema=OpenAIModelFee.schema(),
                extraction_type="schema",
                instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens. 
                Do not miss any models in the entire content. One extracted model JSON format should look like this: 
                {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}."""
            ),            
            bypass_cache=True,
        )
        print(result.extracted_content)

if __name__ == "__main__":
    asyncio.run(main())
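
Because extracted_content comes back as a JSON string (a list of records in this schema mode), you can validate it against the same Pydantic model. A sketch, with an illustrative literal standing in for result.extracted_content:

import json
from pydantic import BaseModel, Field

class OpenAIModelFee(BaseModel):
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
    output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")

# Illustrative stand-in for result.extracted_content.
raw = '[{"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}]'
fees = [OpenAIModelFee(**item) for item in json.loads(raw)]
print(fees[0].model_name, "->", fees[0].input_fee)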

Advanced Multi-Page Crawling with JavaScript Execution

Crawl4AI excels at handling complex scenarios, such as crawling multiple pages with dynamic content loaded via JavaScript. Here's an example of crawling GitHub commits across multiple pages:

import asyncio
import re
from bs4 import BeautifulSoup
from crawl4ai import AsyncWebCrawler

async def crawl_typescript_commits():
    first_commit = ""
    async def on_execution_started(page):
        nonlocal first_commit 
        try:
            while True:
                await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')
                commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')
                commit = await commit.evaluate('(element) => element.textContent')
                commit = re.sub(r'\s+', '', commit)
                if commit and commit != first_commit:
                    first_commit = commit
                    break
                await asyncio.sleep(0.5)
        except Exception as e:
            print(f"Warning: New content didn't appear after JavaScript execution: {e}")

    async with AsyncWebCrawler(verbose=True) as crawler:
        crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)

        url = "https://github.com/microsoft/TypeScript/commits/main"
        session_id = "typescript_commits_session"
        all_commits = []

        js_next_page = """
        const button = document.querySelector('a[data-testid="pagination-next-button"]');
        if (button) button.click();
        """

        for page in range(3):  # Crawl 3 pages
            result = await crawler.arun(
                url=url,
                session_id=session_id,
                css_selector="li.Box-sc-g0xbh4-0",
                js_code=js_next_page if page > 0 else None,
                bypass_cache=True,
                js_only=page > 0
            )

            assert result.success, f"Failed to crawl page {page + 1}"

            soup = BeautifulSoup(result.cleaned_html, 'html.parser')
            commits = soup.select("li")
            all_commits.extend(commits)

            print(f"Page {page + 1}: Found {len(commits)} commits")

        await crawler.crawler_strategy.kill_session(session_id)
        print(f"Successfully crawled {len(all_commits)} commits across 3 pages")

if __name__ == "__main__":
    asyncio.run(crawl_typescript_commits())

This example demonstrates Crawl4AI's ability to handle complex scenarios where content is loaded asynchronously. It crawls multiple pages of GitHub commits, executing JavaScript to load new content and using custom hooks to ensure data is loaded before proceeding.
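
The same set_hook mechanism covers the authentication and header customization listed under Features. A minimal sketch; the before_goto hook name and its signature are assumptions that may vary across releases, so check the hooks documented for your version:

import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Assumed hook: runs before navigation, used here to inject an auth header.
        async def before_goto(page):
            await page.set_extra_http_headers({"Authorization": "Bearer <your-token>"})  # placeholder token

        crawler.crawler_strategy.set_hook('before_goto', before_goto)
        result = await crawler.arun(url="https://example.com/protected", bypass_cache=True)  # placeholder URL
        print(result.success)

if __name__ == "__main__":
    asyncio.run(main())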

Using JsonCssExtractionStrategy

The JsonCssExtractionStrategy allows for precise extraction of structured data from web pages using CSS selectors.

import asyncio
import json
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def extract_news_teasers():
    schema = {
        "name": "News Teaser Extractor",
        "baseSelector": ".wide-tease-item__wrapper",
        "fields": [
            {
                "name": "category",
                "selector": ".unibrow span[data-testid='unibrow-text']",
                "type": "text",
            },
            {
                "name": "headline",
                "selector": ".wide-tease-item__headline",
                "type": "text",
            },
            {
                "name": "summary",
                "selector": ".wide-tease-item__description",
                "type": "text",
            },
            {
                "name": "time",
                "selector": "[data-testid='wide-tease-date']",
                "type": "text",
            },
            {
                "name": "image",
                "type": "nested",
                "selector": "picture.teasePicture img",
                "fields": [
                    {"name": "src", "type": "attribute", "attribute": "src"},
                    {"name": "alt", "type": "attribute", "attribute": "alt"},
                ],
            },
            {
                "name": "link",
                "selector": "a[href]",
                "type": "attribute",
                "attribute": "href",
            },
        ],
    }

    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)

    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            extraction_strategy=extraction_strategy,
            bypass_cache=True,
        )

        assert result.success, "Failed to crawl the page"

        news_teasers = json.loads(result.extracted_content)
        print(f"Successfully extracted {len(news_teasers)} news teasers")
        print(json.dumps(news_teasers[0], indent=2))

if __name__ == "__main__":
    asyncio.run(extract_news_teasers())

Speed Comparison 🚀

Crawl4AI is designed with speed as a primary focus. Our goal is to provide the fastest possible response with high-quality data extraction, minimizing abstractions between the data and the user.

We've conducted a speed comparison between Crawl4AI and Firecrawl, a paid service. The results demonstrate Crawl4AI's superior performance:

Firecrawl:
Time taken: 7.02 seconds
Content length: 42074 characters
Images found: 49

Crawl4AI (simple crawl):
Time taken: 1.60 seconds
Content length: 18238 characters
Images found: 49

Crawl4AI (with JavaScript execution):
Time taken: 4.64 seconds
Content length: 40869 characters
Images found: 89

As you can see, Crawl4AI outperforms Firecrawl significantly:

  • Simple crawl: Crawl4AI is over 4 times faster than Firecrawl.
  • With JavaScript execution: Even when executing JavaScript to load more content (raising the image count from 49 to 89), Crawl4AI is still faster than Firecrawl's simple crawl.

You can find the full comparison code in our repository at docs/examples/crawl4ai_vs_firecrawl.py.

Documentation 📚

For detailed documentation, including installation instructions, advanced features, and API reference, visit our Documentation Website.

Contributing 🤝

We welcome contributions from the open-source community. Check out our contribution guidelines for more information.

License 📄

Crawl4AI is released under the Apache 2.0 License.

Contact 📧

For questions, suggestions, or feedback, feel free to reach out.

Happy Crawling! 🕸️🚀

Star History

[Star History Chart]
