
Crawl4AI MCP Server

A Model Context Protocol server for web crawling using the Crawl4ai library.

📋 Overview

Crawl4AI MCP Server provides a set of tools and prompts for web crawling through the Model Context Protocol (MCP). It allows AI assistants to autonomously crawl websites, extract content, and save information as Markdown files.

✨ Features

  • 🕸️ Single Page Crawling: Extract content from a single webpage in Markdown format
  • 🌐 Deep Website Crawling: Crawl multiple pages of a website with configurable depth and limits
  • 🔍 Structured Data Extraction: Use CSS selectors to extract specific structured data from webpages
  • 💾 Markdown Export: Save crawled content directly as Markdown files

🚀 Installation

pip install crawl4ai-mcp-server

🛠️ Usage

Command Line

Run the server directly from the command line:

crawl4ai-mcp
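Alternatively, register the server with an MCP client that launches it over stdio. For example, a Claude Desktop configuration entry might look like the following (the `"crawl4ai"` key is an arbitrary label of our choosing, not something the package requires):

```json
{
  "mcpServers": {
    "crawl4ai": {
      "command": "crawl4ai-mcp"
    }
  }
}
```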

Python API

import asyncio
from crawl4ai_mcp import serve

# Run the server
asyncio.run(serve())

📝 Available Tools

crawl_webpage

Crawls a single webpage and returns its content as Markdown.

Parameters:

  • url (string, required): URL to crawl
  • include_images (boolean, optional): Whether to include images in the result (default: true)
  • bypass_cache (boolean, optional): Whether to bypass cache (default: false)
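A client sends these parameters as a single JSON object; only `url` is mandatory. A minimal sketch of assembling the payload (the helper name `crawl_webpage_args` is ours, not part of the package):

```python
def crawl_webpage_args(url, include_images=True, bypass_cache=False):
    """Build the argument mapping for the crawl_webpage tool.

    Defaults mirror the parameter list above: images are included
    and the cache is honored unless explicitly bypassed.
    """
    return {
        "url": url,
        "include_images": include_images,
        "bypass_cache": bypass_cache,
    }
```

With the MCP Python SDK, such a payload would be passed to `ClientSession.call_tool("crawl_webpage", arguments)`.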

crawl_website

Crawls a website starting from the given URL, with specified depth and page limit.

Parameters:

  • url (string, required): Starting URL
  • max_depth (integer, optional): Maximum crawl depth (default: 1)
  • max_pages (integer, optional): Maximum number of pages to crawl (default: 5)
  • include_images (boolean, optional): Whether to include images (default: true)
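To illustrate how `max_depth` and `max_pages` interact, here is a simplified breadth-first traversal over a pre-built link map. The real server discovers links by fetching pages, so this is only a sketch of how the two limits bound a crawl, not the server's implementation:

```python
from collections import deque

def plan_crawl(start_url, links, max_depth=1, max_pages=5):
    """Breadth-first crawl order bounded by depth and page count.

    `links` maps each URL to the URLs it links to. Defaults match
    the tool's defaults: depth 1, at most 5 pages.
    """
    visited, order = set(), []
    queue = deque([(start_url, 0)])
    while queue and len(order) < max_pages:
        url, depth = queue.popleft()
        if url in visited or depth > max_depth:
            continue
        visited.add(url)
        order.append(url)
        for nxt in links.get(url, []):
            if nxt not in visited:
                queue.append((nxt, depth + 1))
    return order
```

With `max_depth=1`, pages linked from the start URL are crawled but their own links are not followed; `max_pages` cuts the crawl off regardless of remaining depth.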

extract_structured_data

Extracts structured data from a webpage using CSS selectors.

Parameters:

  • url (string, required): URL to extract data from
  • schema (object, optional): Schema defining what to extract
  • css_selector (string, optional): CSS selector to locate specific parts of the page (default: "body")
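The README does not spell out the schema format; the shape below follows Crawl4ai's JSON/CSS extraction convention (a base selector plus named fields), so treat the exact keys as our assumption and verify them against the tool's actual input schema:

```python
# Hypothetical schema in the style of Crawl4ai's JSON/CSS extraction;
# the key names are assumed, not confirmed by this README.
article_schema = {
    "name": "articles",
    "baseSelector": "article.post",
    "fields": [
        {"name": "title", "selector": "h2", "type": "text"},
        {"name": "link", "selector": "a", "type": "attribute", "attribute": "href"},
    ],
}

arguments = {
    "url": "https://example.com/blog",
    "schema": article_schema,
    "css_selector": "main",  # narrows extraction; the tool defaults to "body"
}
```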

save_as_markdown

Crawls a webpage and saves the content as a Markdown file.

Parameters:

  • url (string, required): URL to crawl
  • filename (string, required): Filename to save the Markdown content to
  • include_images (boolean, optional): Whether to include images (default: true)
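As an illustration of the save step only (the actual tool performs the crawl and the write internally), a sketch of writing Markdown to a filename while normalizing the extension might look like this:

```python
from pathlib import Path

def save_markdown(content, filename):
    """Write Markdown content to disk, forcing a .md extension.

    Illustrative sketch: any existing extension on `filename`
    is replaced with .md before writing.
    """
    path = Path(filename)
    if path.suffix != ".md":
        path = path.with_suffix(".md")
    path.write_text(content, encoding="utf-8")
    return str(path)
```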

🔌 Available Prompts

crawl

Crawls a webpage and retrieves its content.

Arguments:

  • url (required): URL to crawl

save_page

Crawls a webpage and saves it as a Markdown file.

Arguments:

  • url (required): URL to crawl
  • filename (required): Filename to save the Markdown content to

🧩 Requirements

  • Python 3.8+
  • mcp>=1.0.0
  • crawl4ai
  • pydantic

📄 License

MIT License - see the LICENSE file for details.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request
