WET - Web Extended Toolkit MCP Server

Open-source MCP server for web search, content extraction, deep crawling, academic research, library docs, and multimodal analysis, with an embedded SearXNG.
Features
- Web Search - Search via embedded SearXNG (metasearch: Google, Bing, DuckDuckGo, Brave)
- Academic Research - Search Google Scholar, Semantic Scholar, arXiv, PubMed, CrossRef, BASE
- Library Docs - Auto-discover and index documentation with FTS5 hybrid search
- Content Extract - Extract clean content (Markdown/Text)
- Deep Crawl - Crawl multiple pages from a root URL with depth control
- Site Map - Discover website URL structure
- Media - List and download images, videos, audio files
- Anti-bot - Stealth mode bypasses Cloudflare, Medium, LinkedIn, Twitter
- Local Cache - TTL-based caching for all web operations
- Docs Sync - Sync indexed docs across machines via rclone
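The local cache above behaves like a small SQLite table keyed by URL with an expiry timestamp. A minimal sketch of the idea (the schema and class name are illustrative, not the server's actual implementation):

```python
import sqlite3
import time


class TTLCache:
    """Minimal TTL cache over SQLite, illustrating the caching idea."""

    def __init__(self, path=":memory:", ttl=3600):
        self.ttl = ttl
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache (url TEXT PRIMARY KEY, body TEXT, expires REAL)"
        )

    def get(self, url):
        row = self.db.execute(
            "SELECT body, expires FROM cache WHERE url = ?", (url,)
        ).fetchone()
        if row is None or row[1] < time.time():
            return None  # missing or expired entry counts as a cache miss
        return row[0]

    def put(self, url, body):
        self.db.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
            (url, body, time.time() + self.ttl),
        )
```

Expired rows are simply treated as misses and overwritten on the next fetch.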
Quick Start
Prerequisites
- Python 3.13 (required -- Python 3.14+ is not supported due to SearXNG incompatibility)
Warning: You must specify `--python 3.13` when using `uvx`. Without it, `uvx` may pick Python 3.14+, which causes SearXNG search to fail silently.
On first run, the server automatically installs SearXNG, Playwright chromium, and starts the embedded search engine.
Option 1: Minimal uvx (Recommended)
```jsonc
{
  "mcpServers": {
    "wet": {
      "command": "uvx",
      "args": ["--python", "3.13", "wet-mcp@latest"]
      // No API keys needed -- local Qwen3-Embedding-0.6B + Qwen3-Reranker-0.6B (ONNX, CPU)
      // First run downloads a ~570MB model, cached for subsequent runs
    }
  }
}
```
Option 2: Minimal Docker
```jsonc
{
  "mcpServers": {
    "wet": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "--name", "mcp-wet",
        "-v", "wet-data:/data",
        "n24q02m/wet-mcp:latest"
      ]
      // Volume persists cached web pages, indexed docs, and downloads
      // Same built-in local embedding + reranking as uvx
    }
  }
}
```
Option 3: Full uvx
```jsonc
{
  "mcpServers": {
    "wet": {
      "command": "uvx",
      "args": ["--python", "3.13", "wet-mcp@latest"],
      "env": {
        "API_KEYS": "GOOGLE_API_KEY:AIza...", // cloud embedding (Gemini > OpenAI > Mistral > Cohere) + media analysis
        // Reranking: auto local Qwen3-Reranker-0.6B; or set RERANK_MODEL=cohere/rerank-v3.5 for cloud reranking
        "GITHUB_TOKEN": "ghp_...", // higher rate limits for docs discovery
        "SYNC_ENABLED": "true", // enable docs sync
        "SYNC_REMOTE": "gdrive", // rclone remote name
        "SYNC_INTERVAL": "300", // auto-sync every 5 min (0 = manual)
        "RCLONE_CONFIG_GDRIVE_TYPE": "drive",
        "RCLONE_CONFIG_GDRIVE_TOKEN": "<base64>" // from: uvx --python 3.13 wet-mcp setup-sync drive
      }
    }
  }
}
```
Option 4: Full Docker
```jsonc
{
  "mcpServers": {
    "wet": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "--name", "mcp-wet",
        "-v", "wet-data:/data",
        "-e", "API_KEYS",
        "-e", "GITHUB_TOKEN",
        "-e", "SYNC_ENABLED",
        "-e", "SYNC_REMOTE",
        "-e", "SYNC_INTERVAL",
        "-e", "RCLONE_CONFIG_GDRIVE_TYPE",
        "-e", "RCLONE_CONFIG_GDRIVE_TOKEN",
        "n24q02m/wet-mcp:latest"
      ],
      "env": {
        "API_KEYS": "GOOGLE_API_KEY:AIza...",
        "GITHUB_TOKEN": "ghp_...",
        "SYNC_ENABLED": "true",
        "SYNC_REMOTE": "gdrive",
        "SYNC_INTERVAL": "300",
        "RCLONE_CONFIG_GDRIVE_TYPE": "drive",
        "RCLONE_CONFIG_GDRIVE_TOKEN": "<base64>"
      }
      // Same auto-detection: cloud embedding from API_KEYS, auto local reranking
    }
  }
}
```
Sync setup (one-time)
```shell
# Google Drive
uvx --python 3.13 wet-mcp setup-sync drive

# Other providers (any rclone remote type)
uvx --python 3.13 wet-mcp setup-sync dropbox
uvx --python 3.13 wet-mcp setup-sync onedrive
uvx --python 3.13 wet-mcp setup-sync s3
```
This opens a browser for OAuth and prints the env vars (`RCLONE_CONFIG_*`) to set. Both raw JSON and base64 tokens are supported.
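Since both raw JSON and base64 tokens are accepted, the base64 form is presumably just the token JSON encoded once. A sketch of the round trip (the token fields shown are illustrative, not the exact rclone token schema):

```python
import base64
import json

# Illustrative token; a real rclone OAuth token has provider-specific fields
token = {"access_token": "ya29.example", "token_type": "Bearer"}

raw = json.dumps(token)                               # raw JSON form
encoded = base64.b64encode(raw.encode()).decode()     # base64 form
# Either string could be supplied as RCLONE_CONFIG_GDRIVE_TOKEN
assert json.loads(base64.b64decode(encoded)) == token  # round-trips cleanly
```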
Tools
| Tool | Actions | Description |
|---|---|---|
| `search` | `search`, `research`, `docs` | Web search, academic research, library documentation |
| `extract` | `extract`, `crawl`, `map` | Content extraction, deep crawling, site mapping |
| `media` | `list`, `download`, `analyze` | Media discovery, download & analysis |
| `config` | `status`, `set`, `cache_clear`, `docs_reindex` | Server configuration and cache management |
| `help` | - | Full documentation for any tool |
Usage Examples
```jsonc
// search tool
{"action": "search", "query": "python web scraping", "max_results": 10}
{"action": "research", "query": "transformer attention mechanism"}
{"action": "docs", "query": "how to create routes", "library": "fastapi"}
{"action": "docs", "query": "dependency injection", "library": "spring-boot", "language": "java"}

// extract tool
{"action": "extract", "urls": ["https://example.com"]}
{"action": "crawl", "urls": ["https://docs.python.org"], "depth": 2}
{"action": "map", "urls": ["https://example.com"]}

// media tool
{"action": "list", "url": "https://github.com/python/cpython"}
{"action": "download", "media_urls": ["https://example.com/image.png"]}
```
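Each example above is the `arguments` object of an MCP `tools/call` request. A hedged sketch of the JSON-RPC 2.0 envelope a client would send over stdio (the helper function is illustrative; real MCP clients handle framing and request ids for you):

```python
import json


def tools_call(tool: str, arguments: dict, request_id: int = 1) -> dict:
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }


msg = tools_call(
    "search",
    {"action": "search", "query": "python web scraping", "max_results": 10},
)
print(json.dumps(msg))
```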
Configuration
| Variable | Default | Description |
|---|---|---|
| `WET_AUTO_SEARXNG` | `true` | Auto-start embedded SearXNG subprocess |
| `WET_SEARXNG_PORT` | `41592` | SearXNG port (optional) |
| `SEARXNG_URL` | `http://localhost:41592` | External SearXNG URL (optional, when auto-start disabled) |
| `SEARXNG_TIMEOUT` | `30` | SearXNG request timeout in seconds (optional) |
| `API_KEYS` | - | LLM API keys (optional, format: `ENV_VAR:key,...`) |
| `LLM_MODELS` | `gemini/gemini-3-flash-preview` | LiteLLM model for media analysis (optional) |
| `EMBEDDING_BACKEND` | (auto-detect) | `litellm` (cloud API) or `local` (Qwen3). Auto: `API_KEYS` -> `litellm`, else `local` (always available) |
| `EMBEDDING_MODEL` | (auto-detect) | LiteLLM embedding model (optional) |
| `EMBEDDING_DIMS` | `0` (auto=768) | Embedding dimensions (optional) |
| `RERANK_ENABLED` | `true` | Enable reranking after search |
| `RERANK_BACKEND` | (auto-detect) | `litellm` or `local`. Auto: Cohere key in `API_KEYS` -> `litellm`, else `local` |
| `RERANK_MODEL` | (auto-detect) | LiteLLM rerank model (auto: `cohere/rerank-v3.5` if Cohere key in `API_KEYS`) |
| `RERANK_TOP_N` | `10` | Return top N results after reranking |
| `CACHE_DIR` | `~/.wet-mcp` | Data directory for cache DB, docs DB, downloads (optional) |
| `DOCS_DB_PATH` | `~/.wet-mcp/docs.db` | Docs database location (optional) |
| `DOWNLOAD_DIR` | `~/.wet-mcp/downloads` | Media download directory (optional) |
| `TOOL_TIMEOUT` | `120` | Tool execution timeout in seconds, `0` = no timeout (optional) |
| `WET_CACHE` | `true` | Enable/disable web cache (optional) |
| `GITHUB_TOKEN` | - | GitHub personal access token for library discovery (optional, increases rate limit from 60 to 5000 req/hr) |
| `SYNC_ENABLED` | `false` | Enable rclone sync |
| `SYNC_REMOTE` | - | rclone remote name (required when sync enabled) |
| `SYNC_FOLDER` | `wet-mcp` | Remote folder name (optional) |
| `SYNC_INTERVAL` | `0` | Auto-sync interval in seconds, `0` = manual (optional) |
| `LOG_LEVEL` | `INFO` | Logging level (optional) |
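As an example of combining these, pointing the server at an already-running SearXNG instead of the embedded one might look like this (hostname and port are placeholders):

```shell
WET_AUTO_SEARXNG=false
SEARXNG_URL=http://searxng.internal:8080
SEARXNG_TIMEOUT=30
```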
Embedding & Reranking
Both embedding and reranking are always available — local models are built-in and require no configuration.
- Embedding: Defaults to local Qwen3-Embedding-0.6B. Set `API_KEYS` to upgrade to cloud (Gemini > OpenAI > Mistral > Cohere), with automatic local fallback if the cloud call fails.
- Reranking: Defaults to local Qwen3-Reranker-0.6B. If `COHERE_API_KEY` is present in `API_KEYS`, auto-upgrades to cloud `cohere/rerank-v3.5`.
- GPU auto-detection: If a GPU is available (CUDA/DirectML) and `llama-cpp-python` is installed, GGUF models (~480MB) are used automatically instead of ONNX (~570MB) for better performance.
- All embeddings are stored at 768 dims (default), so switching providers never breaks the vector table.
- Override with `EMBEDDING_BACKEND=local` to force local models even when API keys are set.
`API_KEYS` supports multiple providers in a single string:

```shell
API_KEYS=GOOGLE_API_KEY:AIza...,OPENAI_API_KEY:sk-...,COHERE_API_KEY:co-...
```
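A sketch of how that string could be parsed and the embedding provider chosen by the documented priority (Gemini > OpenAI > Mistral > Cohere); the helper names are illustrative, not the server's internals:

```python
# Priority order as documented: Gemini > OpenAI > Mistral > Cohere
PRIORITY = ["GOOGLE_API_KEY", "OPENAI_API_KEY", "MISTRAL_API_KEY", "COHERE_API_KEY"]


def parse_api_keys(value: str) -> dict:
    """Parse 'ENV_VAR:key,ENV_VAR:key' into a dict."""
    keys = {}
    for pair in value.split(","):
        if ":" in pair:
            name, key = pair.split(":", 1)  # split on the first colon only
            keys[name.strip()] = key.strip()
    return keys


def pick_embedding_provider(keys: dict):
    """Return the highest-priority provider with a key, else None (local fallback)."""
    for name in PRIORITY:
        if name in keys:
            return name
    return None
```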
LLM Configuration (Optional)
For media analysis, configure API keys:
```shell
API_KEYS=GOOGLE_API_KEY:AIza...
LLM_MODELS=gemini/gemini-3-flash-preview
```
Architecture
```
┌─────────────────────────────────────────────────────────┐
│                      MCP Client                         │
│              (Claude, Cursor, Windsurf)                 │
└─────────────────────┬───────────────────────────────────┘
                      │ MCP Protocol
                      v
┌─────────────────────────────────────────────────────────┐
│                   WET MCP Server                        │
│  ┌──────────┐ ┌──────────┐ ┌───────┐ ┌────────┐         │
│  │  search  │ │ extract  │ │ media │ │ config │         │
│  │ (search, │ │(extract, │ │(list, │ │(status,│         │
│  │ research,│ │  crawl,  │ │downld,│ │  set,  │         │
│  │   docs)  │ │   map)   │ │analyz)│ │ cache) │         │
│  └──┬───┬───┘ └────┬─────┘ └──┬────┘ └────────┘         │
│     │   │          │          │        + help tool      │
│     v   v          v          v                         │
│  ┌──────┐ ┌──────┐ ┌──────────┐ ┌──────────┐            │
│  │SearX │ │DocsDB│ │ Crawl4AI │ │ Reranker │            │
│  │NG    │ │FTS5+ │ │(Playwrgt)│ │(LiteLLM/ │            │
│  │      │ │sqlite│ │          │ │  Qwen3   │            │
│  │      │ │-vec  │ │          │ │  local)  │            │
│  └──────┘ └──────┘ └──────────┘ └──────────┘            │
│                                                         │
│  ┌──────────────────────────────────────────────────┐   │
│  │ WebCache (SQLite, TTL)  │  rclone sync (docs)    │   │
│  └──────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘
```
Build from Source
```shell
git clone https://github.com/n24q02m/wet-mcp
cd wet-mcp

# Setup (requires mise: https://mise.jdx.dev/)
mise run setup

# Run
uv run wet-mcp
```
Docker Build
```shell
docker build -t n24q02m/wet-mcp:latest .
```
Requirements: Python 3.13 (not 3.14+)
Contributing
See CONTRIBUTING.md
License
MIT - See LICENSE
File details
Details for the file wet_mcp-2.6.1.tar.gz.
File metadata
- Download URL: wet_mcp-2.6.1.tar.gz
- Upload date:
- Size: 89.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b57a86440437292670082a3eda0aa96365ec1f9577ff99e06e7a79b3fc03155f` |
| MD5 | `d06f79bcc27a91453ba7f9a900f41de3` |
| BLAKE2b-256 | `8ec4a1134b9d1a6cd1b954450cfaaf2bf1ab5e10b9faf24a775008a28b2fe1d2` |
File details
Details for the file wet_mcp-2.6.1-py3-none-any.whl.
File metadata
- Download URL: wet_mcp-2.6.1-py3-none-any.whl
- Upload date:
- Size: 101.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `5789fd538091863c7d413241d3721642cb4bed60840209a26092ea9bcd2d4f5f` |
| MD5 | `391b9dea33cf7e4a3b7044ef86d21885` |
| BLAKE2b-256 | `88b078a40c44652d9452a4b58a59d3047afc0551e8725100d9f730e4260b9779` |