Evomi MCP Server
A Model Context Protocol (MCP) server for the Evomi web scraping API. Use it to give AI assistants like Claude the ability to scrape single pages, crawl entire sites, discover URLs, and extract structured data.
Features
- Single Page Scraping - Scrape any URL with automatic JavaScript detection
- Website Crawling - Multi-page crawling with depth control
- URL Discovery - Find URLs via sitemaps, CommonCrawl, or in-site crawling
- Domain Search - Find domains by searching the web
- AI-Powered Extraction - Use AI to extract structured data from pages
- Conversational Agent - Natural language interface for scraping tasks
- Config Management - Save and reuse scraping configurations
- Schema Management - Define and test extraction schemas
- Storage Configuration - Manage cloud storage for scraped data
- Scheduled Jobs - Automate scraping on a schedule
Installation
Using pip
```bash
pip install evomi-mcp
```
From Source
```bash
cd evomi-mcp
pip install -e .
```
Configuration
Set your Evomi API key as an environment variable:
```bash
export EVOMI_API_KEY="your-api-key-here"
```
Optionally, you can also set a custom base URL:
```bash
export EVOMI_BASE_URL="https://scrape.evomi.com"  # default
```
Usage with Claude Desktop
Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
```json
{
  "mcpServers": {
    "evomi": {
      "command": "evomi-mcp",
      "env": {
        "EVOMI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
Or if installed from source:
```json
{
  "mcpServers": {
    "evomi": {
      "command": "python",
      "args": ["-m", "evomi_mcp.server"],
      "env": {
        "EVOMI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
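To verify the server outside of Claude Desktop, you can connect to it with the official MCP Python SDK. A minimal smoke test, assuming the `mcp` package is installed and `EVOMI_API_KEY` is set in your environment:

```python
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the evomi-mcp executable as a stdio MCP server.
    params = StdioServerParameters(
        command="evomi-mcp",
        env={"EVOMI_API_KEY": os.environ["EVOMI_API_KEY"]},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # List the tools the server exposes (should match the tables below).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```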
Available Tools (30 Total)
Scraping Operations (6 tools)
| Tool | Description |
|---|---|
| `scrape_url` | Scrape a single URL with configurable options |
| `crawl_website` | Crawl a website to discover and scrape multiple pages |
| `map_website` | Discover URLs from a website |
| `search_domains` | Find domains by searching the web |
| `agent_request` | AI-powered conversational scraping assistant |
| `get_task_status` | Check the status of an async task |
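Long-running operations such as large crawls are meant to be polled with `get_task_status`. A sketch of that pattern via the MCP Python SDK, with `session` created as in the connection example above; note that the `task_id` argument name is an assumption, not a confirmed parameter name:

```python
from mcp import ClientSession

async def check_task(session: ClientSession, task_id: str) -> None:
    # "task_id" is an assumed argument name; inspect the tool's input schema
    # (via session.list_tools()) for the authoritative parameter names.
    result = await session.call_tool("get_task_status", {"task_id": task_id})
    for block in result.content:
        if block.type == "text":
            print(block.text)  # raw status payload as returned by the server
```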
Config Management (6 tools)
| Tool | Description |
|---|---|
| `list_configs` | List all saved scrape configurations |
| `create_config` | Create a new scrape configuration |
| `get_config` | Get a saved scrape configuration by ID |
| `update_config` | Update an existing scrape configuration |
| `delete_config` | Delete a scrape configuration |
| `generate_config` | Generate a scrape config from natural language using AI |
Schema Management (6 tools)
| Tool | Description |
|---|---|
| `list_schemas` | List all saved extraction schemas |
| `create_schema` | Create a new extraction schema |
| `get_schema` | Get a saved extraction schema by ID |
| `update_schema` | Update an existing extraction schema |
| `delete_schema` | Delete an extraction schema |
| `get_schema_status` | Get the test status of an extraction schema |
Storage Management (4 tools)
| Tool | Description |
|---|---|
| `list_storage_configs` | List all storage configurations |
| `create_storage_config` | Create a new storage configuration |
| `update_storage_config` | Update an existing storage configuration |
| `delete_storage_config` | Delete a storage configuration |
Schedule Management (7 tools)
| Tool | Description |
|---|---|
| `list_schedules` | List all scheduled scrape jobs |
| `create_schedule` | Create a new scheduled scrape job |
| `get_schedule` | Get a scheduled job by ID |
| `update_schedule` | Update an existing scheduled job |
| `delete_schedule` | Delete a scheduled job |
| `toggle_schedule` | Toggle a scheduled job active/inactive |
| `list_schedule_runs` | Get execution history for a scheduled job |
Account (1 tool)
| Tool | Description |
|---|---|
| `get_account_info` | Get account information including credit balance |
Tool Examples
Scraping
```
// Basic scrape
{"url": "https://example.com"}

// AI extraction
{"url": "https://example.com/products", "ai_enhance": true, "ai_prompt": "Extract product names and prices"}

// Browser mode with actions
{"url": "https://example.com", "mode": "browser", "js_instructions": [{"click": ".accept-cookies"}, {"wait": 1000}]}
```
Crawling
```
// Basic crawl
{"domain": "example.com", "max_urls": 50}

// With URL filter
{"domain": "example.com", "url_pattern": "/products/", "depth": 3}
```
Domain Search
```
// Find domains
{"query": "best e-commerce sites for electronics", "max_urls": 20, "region": "us-en"}
```
Config Management
```
// Create config
{"name": "Product Scraper", "config": {"mode": "browser", "output": "markdown"}}

// Generate config with AI
{"name": "Amazon Scraper", "prompt": "Scrape product title, price, and reviews from Amazon"}
```
Scheduling
```
// Create daily schedule
{"name": "Daily Prices", "config_id": "cfg_abc123", "interval_minutes": 1440, "start_time": "09:00"}
```
Pricing & Credits
All operations consume credits based on:
- Base cost: 1 credit per request
- Browser mode: 5x multiplier
- Residential proxy: 2x multiplier
- AI enhancement: +30 credits
- Screenshot/PDF: +1 credit each
Credit information is returned in each response.
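As a worked example, a browser-mode request over a residential proxy with AI enhancement would cost 1 × 5 × 2 + 30 = 40 credits, assuming the multipliers compose multiplicatively and the flat fees stack on top (the list above does not spell out the order of operations). A small estimator under that assumption:

```python
def estimate_credits(browser: bool = False, residential: bool = False,
                     ai_enhance: bool = False, captures: int = 0) -> int:
    # Assumption: multipliers apply to the base cost; flat fees are added after.
    cost = 1              # base cost: 1 credit per request
    if browser:
        cost *= 5         # browser mode: 5x multiplier
    if residential:
        cost *= 2         # residential proxy: 2x multiplier
    if ai_enhance:
        cost += 30        # AI enhancement: +30 credits
    cost += captures      # screenshot/PDF: +1 credit each
    return cost

print(estimate_credits(browser=True, residential=True, ai_enhance=True))  # 40
```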
Development
Setup
```bash
cd evomi-mcp
pip install -e ".[dev]"
```
Running the Server Directly
```bash
evomi-mcp
# or
python -m evomi_mcp.server
```
License
MIT License - see LICENSE file for details.
File details
Details for the file evomi_mcp-1.0.0.tar.gz.
File metadata
- Download URL: evomi_mcp-1.0.0.tar.gz
- Upload date:
- Size: 11.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 228f24fce187098c6314dffb6571616bc7a2468ce498e92d1cc474156a25936d |
| MD5 | a990053f9ce4a8293d9af3f5d5b36681 |
| BLAKE2b-256 | 66dfc8e49fb2a41a7a0899a7b29065e5018ac15715b3339e227469b026ad75d7 |
File details
Details for the file evomi_mcp-1.0.0-py3-none-any.whl.
File metadata
- Download URL: evomi_mcp-1.0.0-py3-none-any.whl
- Upload date:
- Size: 13.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0333f51c7d84391fccd3f90af7ea3d0f5b8b7605f091bbd70fddc09ee85a9baf |
| MD5 | b99915f4e71bdf11e971af2ce1f40131 |
| BLAKE2b-256 | db345d048ee7f042452832a647915ade7b296ce0522275b0fc335e9b0dcb2977 |