Crawl4AI MCP Server

A Model Context Protocol server for web crawling using the Crawl4ai library.

📋 Overview

Crawl4AI MCP Server provides a set of tools and prompts for web crawling through the Model Context Protocol (MCP). It allows AI assistants to autonomously crawl websites, extract content, and save information as Markdown files.

✨ Features

  • 🕸️ Single Page Crawling: Extract content from a single webpage in Markdown format
  • 🌐 Deep Website Crawling: Crawl multiple pages of a website with configurable depth and limits
  • 🔍 Structured Data Extraction: Use CSS selectors to extract specific structured data from webpages
  • 💾 Markdown Export: Save crawled content directly as Markdown files

🚀 Installation

pip install crawl4ai-mcp-server

🛠️ Usage

Command Line

Run the server directly from the command line:

crawl4ai-mcp

Python API

import asyncio
from crawl4ai_mcp import serve

# Run the server
asyncio.run(serve())
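
The page stops at starting the server, but a client must connect over MCP to use it. Below is a minimal client sketch using the official MCP Python SDK's stdio transport; the crawl4ai-mcp command is the console script installed above, while the session plumbing and the list_tools call are the SDK's generic client API, not anything specific to this server:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the installed console script as a subprocess speaking MCP over stdio.
    params = StdioServerParameters(command="crawl4ai-mcp")
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Ask the server what it exposes; this should list the tools below.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())

The snippets under each tool below assume a session created this way.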

📝 Available Tools

crawl_webpage

Crawls a single webpage and returns its content as Markdown.

Parameters:

  • url (string, required): URL to crawl
  • include_images (boolean, optional): Whether to include images in the result (default: true)
  • bypass_cache (boolean, optional): Whether to bypass cache (default: false)
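
As an illustration, with a session created as in the client sketch above, a call might look like this; the URL is a placeholder and the result unwrapping assumes the standard MCP text-content shape:

from mcp import ClientSession

async def crawl_one(session: ClientSession) -> str:
    # Fetch a single page as Markdown, skipping images and bypassing the cache.
    result = await session.call_tool(
        "crawl_webpage",
        {"url": "https://example.com", "include_images": False, "bypass_cache": True},
    )
    # MCP tool results arrive as content blocks; the first block holds the Markdown.
    return result.content[0].text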

crawl_website

Crawls a website starting from the given URL, up to the specified depth and page limit.

Parameters:

  • url (string, required): Starting URL
  • max_depth (integer, optional): Maximum crawl depth (default: 1)
  • max_pages (integer, optional): Maximum number of pages to crawl (default: 5)
  • include_images (boolean, optional): Whether to include images (default: true)
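
A hedged example call, again assuming a session from the client sketch above and a placeholder URL:

from mcp import ClientSession

async def crawl_site(session: ClientSession):
    # Crawl up to 10 pages, following links at most two levels deep.
    result = await session.call_tool(
        "crawl_website",
        {"url": "https://example.com", "max_depth": 2, "max_pages": 10},
    )
    return result.content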

extract_structured_data

Extracts structured data from a webpage using CSS selectors.

Parameters:

  • url (string, required): URL to extract data from
  • schema (object, optional): Schema defining what to extract
  • css_selector (string, optional): CSS selector to locate specific parts of the page (default: "body")
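
The schema format is not documented on this page; the sketch below assumes a shape modeled on Crawl4ai's JSON-CSS extraction schemas (a name, a base selector, and a list of field definitions), so treat every key and selector as hypothetical:

from mcp import ClientSession

async def extract_products(session: ClientSession):
    # Hypothetical schema: the keys below are an assumption, not documented by this package.
    schema = {
        "name": "products",
        "baseSelector": "div.product",
        "fields": [
            {"name": "title", "selector": "h2", "type": "text"},
            {"name": "price", "selector": ".price", "type": "text"},
        ],
    }
    result = await session.call_tool(
        "extract_structured_data",
        {"url": "https://example.com/shop", "schema": schema, "css_selector": "main"},
    )
    return result.content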

save_as_markdown

Crawls a webpage and saves the content as a Markdown file.

Parameters:

  • url (string, required): URL to crawl
  • filename (string, required): Filename for the saved Markdown file
  • include_images (boolean, optional): Whether to include images (default: true)
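
An example call with placeholder values; note the page does not say where the file is written, though a path resolved by the server process is the likely behavior:

from mcp import ClientSession

async def save_page_as_markdown(session: ClientSession) -> None:
    # Crawl the page and have the server write it out as example.md.
    await session.call_tool(
        "save_as_markdown",
        {"url": "https://example.com", "filename": "example.md", "include_images": True},
    )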

🔌 Available Prompts

crawl

Crawls a webpage and retrieves its content.

Arguments:

  • url (required): URL to crawl

save_page

Crawls a webpage and saves it as a Markdown file.

Arguments:

  • url (required): URL to crawl
  • filename (required): Filename for the saved Markdown file
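
Prompts are retrieved with the MCP SDK's get_prompt call rather than call_tool; a minimal sketch with placeholder arguments (MCP prompt arguments are strings):

from mcp import ClientSession

async def render_crawl_prompt(session: ClientSession):
    # Fetch the rendered "crawl" prompt from the server.
    prompt = await session.get_prompt("crawl", {"url": "https://example.com"})
    # Each message carries a role and content, ready to hand to an LLM.
    return prompt.messages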

🧩 Requirements

  • Python 3.8+
  • mcp>=1.0.0
  • crawl4ai
  • pydantic

📄 License

MIT License - see the LICENSE file for details.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request
