# WET - Web Extended Toolkit MCP Server
mcp-name: io.github.n24q02m/wet-mcp
Open-source MCP Server for web search, content extraction, library docs & multimodal analysis.
## Features
- Web Search - Search via embedded SearXNG (metasearch: Google, Bing, DuckDuckGo, Brave) with search filters (time range, language, include/exclude domains)
- Search Reranking - Semantic reranking for better relevance (Jina AI, Cohere, or local Qwen3)
- Query Expansion - LLM-powered query expansion for broader coverage
- Find Similar - Discover pages similar to a given URL
- Snippet Enrichment - LLM-powered enrichment of search result snippets
- Academic Research - Search Google Scholar, Semantic Scholar, arXiv, PubMed, CrossRef, BASE
- Library Docs - Auto-discover and index documentation with FTS5 hybrid search, HyDE-enhanced retrieval, and version-specific docs discovery
- Content Extract - Extract clean content (Markdown/Text) with structured data extraction (LLM + JSON Schema)
- Batch Processing - Extract up to 50 URLs in one call with per-domain rate limiting
- Local File Conversion - Convert local files (PDF, DOCX, XLSX, CSV, HTML, EPUB, PPTX, etc.) to Markdown
- Deep Crawl - Crawl multiple pages from a root URL with depth control
- Site Map - Discover website URL structure
- Media - List and download images, videos, audio files
- Anti-bot - Stealth mode bypasses Cloudflare, Medium, LinkedIn, Twitter
- Local Cache - TTL-based caching for all web operations
- Docs Sync - Sync indexed docs across machines via rclone
## Quick Start

### Prerequisites
- Python 3.13 (required -- Python 3.14+ is not supported due to SearXNG incompatibility)
> **Warning:** You must specify `--python 3.13` when using `uvx`. Without it, `uvx` may pick Python 3.14+, which causes SearXNG search to fail silently.

On first run, the server automatically installs SearXNG and Playwright Chromium, then starts the embedded search engine.
The recommended way to run this server is via `uvx`:

```bash
uvx --python 3.13 wet-mcp@latest
```

Alternatively, you can use `pipx run --python python3.13 wet-mcp`.
### Option 1: uvx (Recommended)

```jsonc
{
  "mcpServers": {
    "wet": {
      "command": "uvx",
      "args": ["--python", "3.13", "wet-mcp@latest"],
      "env": {
        // -- optional: LiteLLM Proxy (production, self-hosted gateway)
        // "LITELLM_PROXY_URL": "http://10.0.0.20:4000",
        // "LITELLM_PROXY_KEY": "sk-your-virtual-key",
        // -- optional: cloud embedding + reranking + media analysis
        // -- Jina AI (recommended): single key for both embedding and reranking
        "API_KEYS": "JINA_AI_API_KEY:jina_...",
        // -- or use other providers (Gemini, OpenAI, Cohere):
        // "API_KEYS": "GOOGLE_API_KEY:AIza...,COHERE_API_KEY:co-...",
        // -- without API_KEYS, uses built-in local Qwen3-Embedding-0.6B + Qwen3-Reranker-0.6B (ONNX, CPU)
        // -- first run downloads ~570MB model, cached for subsequent runs
        // -- optional: restrict local file conversion to specific directories
        // "CONVERT_ALLOWED_DIRS": "/home/user/docs,/tmp/uploads",
        // -- optional: higher rate limits for docs discovery (60 -> 5000 req/hr)
        "GITHUB_TOKEN": "ghp_...",
        // -- optional: sync indexed docs across machines via rclone
        // -- on first sync, a browser opens for OAuth (auto, no manual setup)
        "SYNC_ENABLED": "true", // optional, default: false
        "SYNC_INTERVAL": "300" // optional, auto-sync every 5min (0 = manual only)
        // "SYNC_REMOTE": "gdrive", // optional, default: gdrive
        // "SYNC_PROVIDER": "drive", // optional, default: drive (Google Drive)
      }
    }
  }
}
```
### Option 2: Docker

```jsonc
{
  "mcpServers": {
    "wet": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "--name", "mcp-wet",
        "-v", "wet-data:/data",  // persists cached web pages, indexed docs, and downloads
        "-e", "LITELLM_PROXY_URL", // optional: pass-through from env below
        "-e", "LITELLM_PROXY_KEY", // optional: pass-through from env below
        "-e", "API_KEYS",          // optional: pass-through from env below
        "-e", "GITHUB_TOKEN",      // optional: pass-through from env below
        "-e", "SYNC_ENABLED",      // optional: pass-through from env below
        "-e", "SYNC_INTERVAL",     // optional: pass-through from env below
        "n24q02m/wet-mcp:latest"
      ],
      "env": {
        // -- optional: LiteLLM Proxy (production, self-hosted gateway)
        // "LITELLM_PROXY_URL": "http://10.0.0.20:4000",
        // "LITELLM_PROXY_KEY": "sk-your-virtual-key",
        // -- optional: cloud embedding + reranking + media analysis
        // -- Jina AI (recommended): single key for both embedding and reranking
        "API_KEYS": "JINA_AI_API_KEY:jina_...",
        // -- or: "API_KEYS": "GOOGLE_API_KEY:AIza...,COHERE_API_KEY:co-...",
        // -- optional: higher rate limits for docs discovery (60 -> 5000 req/hr)
        // -- auto-detected from `gh auth token` if GitHub CLI is installed
        // "GITHUB_TOKEN": "ghp_...",
        // -- optional: sync indexed docs across machines via rclone
        "SYNC_ENABLED": "true", // optional, default: false
        "SYNC_INTERVAL": "300" // optional, auto-sync every 5min (0 = manual only)
      }
    }
  }
}
```
### Pre-install (optional)

Pre-download all dependencies before adding to your MCP client config. This avoids slow first-run startup:

```bash
# Pre-download SearXNG, Playwright, embedding model (~570MB), and reranker model (~570MB)
uvx --python 3.13 wet-mcp warmup

# With cloud embedding (validates API key, skips local download if cloud works)
API_KEYS="GOOGLE_API_KEY:AIza..." uvx --python 3.13 wet-mcp warmup
```
### Sync setup

Sync is fully automatic. Just set `SYNC_ENABLED=true` and the server handles everything:
- First sync: rclone is auto-downloaded and a browser opens for OAuth authentication
- Token saved: the OAuth token is stored locally at `~/.wet-mcp/tokens/` (600 permissions)
- Subsequent runs: the token is loaded automatically; no manual steps needed
For non-Google Drive providers, set `SYNC_PROVIDER` and `SYNC_REMOTE`:

```jsonc
{
  "SYNC_ENABLED": "true",
  "SYNC_PROVIDER": "dropbox", // rclone provider type
  "SYNC_REMOTE": "dropbox"    // rclone remote name
}
```
Advanced: you can also run `uvx --python 3.13 wet-mcp setup-sync drive` to pre-authenticate before first use, but this is optional.
## Tools

| Tool | Actions | Description |
|---|---|---|
| `search` | `search`, `research`, `docs`, `similar` | Web search (with filters, reranking, expand/enrich flags), academic research, library docs (HyDE), find similar |
| `extract` | `extract`, `batch`, `crawl`, `map`, `convert`, `extract_structured` | Content extraction, batch processing (up to 50 URLs), deep crawling, site mapping, local file conversion, structured data extraction (JSON Schema) |
| `media` | `list`, `download`, `analyze` | Media discovery, download, and analysis |
| `config` | `status`, `set`, `cache_clear`, `docs_reindex` | Server configuration and cache management |
| `help` | - | Full documentation for any tool |
## Usage Examples

```jsonc
// search tool
{"action": "search", "query": "python web scraping", "max_results": 10}
{"action": "search", "query": "rust async", "time_range": "month", "language": "en", "include_domains": ["docs.rs"]}
{"action": "research", "query": "transformer attention mechanism"}
{"action": "docs", "query": "how to create routes", "library": "fastapi"}
{"action": "docs", "query": "dependency injection", "library": "spring-boot", "language": "java"}
{"action": "similar", "query": "https://fastapi.tiangolo.com/tutorial/first-steps/"}
{"action": "search", "query": "python async patterns", "expand": true}
{"action": "search", "query": "kubernetes networking", "enrich": true}

// extract tool
{"action": "extract", "urls": ["https://example.com"]}
{"action": "batch", "urls": ["https://a.com", "https://b.com", "https://c.com"]}
{"action": "extract_structured", "urls": ["https://example.com/product"], "schema": {"type": "object", "properties": {"name": {"type": "string"}, "price": {"type": "number"}}}}
{"action": "convert", "paths": ["/path/to/document.pdf"]}
{"action": "crawl", "urls": ["https://docs.python.org"], "depth": 2}
{"action": "map", "urls": ["https://example.com"]}

// media tool
{"action": "list", "url": "https://github.com/python/cpython"}
{"action": "download", "media_urls": ["https://example.com/image.png"]}
```
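Since the `batch` action accepts at most 50 URLs per call, a client working with a larger list has to split it first. A minimal sketch of that client-side chunking (the 50-URL limit comes from the docs above; `chunk_urls` and `batch_requests` are hypothetical helpers, not part of wet-mcp):

```python
# Hypothetical client-side helpers: split a large URL list into
# payloads that respect wet-mcp's 50-URLs-per-call batch limit.
BATCH_LIMIT = 50

def chunk_urls(urls: list[str], limit: int = BATCH_LIMIT) -> list[list[str]]:
    """Return consecutive slices of `urls`, each at most `limit` long."""
    return [urls[i:i + limit] for i in range(0, len(urls), limit)]

def batch_requests(urls: list[str]) -> list[dict]:
    """Build one `extract` tool payload (action: batch) per chunk."""
    return [{"action": "batch", "urls": chunk} for chunk in chunk_urls(urls)]

if __name__ == "__main__":
    payloads = batch_requests([f"https://example.com/{i}" for i in range(120)])
    print(len(payloads))  # 120 URLs split as 50 + 50 + 20
```

Each resulting payload can then be sent as a separate `extract` tool call.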
## Configuration

| Variable | Default | Description |
|---|---|---|
| `WET_AUTO_SEARXNG` | `true` | Auto-start embedded SearXNG subprocess |
| `WET_SEARXNG_PORT` | `41592` | SearXNG port (optional) |
| `SEARXNG_URL` | `http://localhost:41592` | External SearXNG URL (optional, when auto-start is disabled) |
| `SEARXNG_TIMEOUT` | `30` | SearXNG request timeout in seconds (optional) |
| `LITELLM_PROXY_URL` | - | LiteLLM Proxy URL (e.g. `http://10.0.0.20:4000`). Enables proxy mode |
| `LITELLM_PROXY_KEY` | - | LiteLLM Proxy virtual key (e.g. `sk-...`) |
| `API_KEYS` | - | LLM API keys for SDK mode (format: `ENV_VAR:key,...`) |
| `LLM_MODELS` | `gemini/gemini-3-flash-preview` | LiteLLM model for media analysis (optional) |
| `EMBEDDING_BACKEND` | (auto-detect) | `litellm` (cloud API) or `local` (Qwen3). Auto: `API_KEYS` -> `litellm`, else `local` (always available) |
| `EMBEDDING_MODEL` | (auto-detect) | LiteLLM embedding model (optional) |
| `EMBEDDING_DIMS` | `0` (auto=768) | Embedding dimensions (optional) |
| `RERANK_ENABLED` | `true` | Enable reranking after search |
| `RERANK_BACKEND` | (auto-detect) | `litellm` or `local`. Auto: Cohere key in `API_KEYS` -> `litellm`, else `local` |
| `RERANK_MODEL` | (auto-detect) | LiteLLM rerank model (auto: `cohere/rerank-multilingual-v3.0` if Cohere key in `API_KEYS`) |
| `RERANK_TOP_N` | `10` | Return top N results after reranking |
| `CONVERT_MAX_FILE_SIZE` | `104857600` | Max file size for local file conversion in bytes (default 100MB) |
| `CONVERT_ALLOWED_DIRS` | (empty) | Comma-separated absolute paths to restrict local file conversion (empty = allow all) |
| `CACHE_DIR` | `~/.wet-mcp` | Data directory for cache DB, docs DB, downloads (optional) |
| `DOCS_DB_PATH` | `~/.wet-mcp/docs.db` | Docs database location (optional) |
| `DOWNLOAD_DIR` | `~/.wet-mcp/downloads` | Media download directory (optional) |
| `TOOL_TIMEOUT` | `120` | Tool execution timeout in seconds, 0 = no timeout (optional) |
| `WET_CACHE` | `true` | Enable/disable web cache (optional) |
| `GITHUB_TOKEN` | - | GitHub personal access token for library discovery (optional, increases rate limit from 60 to 5000 req/hr). Auto-detected from `gh auth token` if the GitHub CLI is installed and authenticated |
| `SYNC_ENABLED` | `false` | Enable rclone sync |
| `SYNC_PROVIDER` | `drive` | rclone provider type (`drive`, `dropbox`, `s3`, etc.) |
| `SYNC_REMOTE` | `gdrive` | rclone remote name |
| `SYNC_FOLDER` | `wet-mcp` | Remote folder name |
| `SYNC_INTERVAL` | `300` | Auto-sync interval in seconds (0 = manual) |
| `LOG_LEVEL` | `INFO` | Logging level |
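The SearXNG-related variables above combine roughly as follows. This is an illustrative sketch of the documented behavior, not wet-mcp's actual code; the function name `resolve_searxng_url` is assumed:

```python
def resolve_searxng_url(env: dict[str, str]) -> tuple[str, bool]:
    """Sketch: combine WET_AUTO_SEARXNG, WET_SEARXNG_PORT, and SEARXNG_URL.

    Returns (url, embedded), where `embedded` indicates whether the server
    would manage its own SearXNG subprocess.
    """
    auto = env.get("WET_AUTO_SEARXNG", "true").lower() == "true"
    port = env.get("WET_SEARXNG_PORT", "41592")
    if auto:
        # Embedded mode: the subprocess listens on localhost at the chosen port.
        return f"http://localhost:{port}", True
    # External mode: SEARXNG_URL must point at an already-running instance.
    return env.get("SEARXNG_URL", f"http://localhost:{port}"), False
```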
## Embedding & Reranking
Both embedding and reranking are always available — local models are built-in and require no configuration.
- Jina AI (recommended): a single `JINA_AI_API_KEY` enables both embedding (`jina-embeddings-v5-text-small`) and reranking (`jina-reranker-v3`). This is the highest-priority cloud provider.
- Embedding priority: Jina AI > Gemini > OpenAI > Cohere. Defaults to local Qwen3-Embedding-0.6B when no API keys are set, with automatic local fallback if cloud fails.
- Reranking priority: Jina AI > Cohere. Defaults to local Qwen3-Reranker-0.6B when no API keys are set.
- GPU auto-detection: if a GPU is available (CUDA/DirectML) and `llama-cpp-python` is installed, GGUF models (~480MB) are used automatically instead of ONNX (~570MB) for better performance.
- All embeddings are stored at 768 dims (default), so switching providers never breaks the vector table.
- Override with `EMBEDDING_BACKEND=local` to force local models even when API keys are set.
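The priority rules above amount to a simple first-match selection. A sketch of that logic, assuming the key names from the `API_KEYS` examples (Gemini keys arrive as `GOOGLE_API_KEY`); this illustrates the documented order, not wet-mcp's internals:

```python
# Documented provider order: Jina AI > Gemini > OpenAI > Cohere (embedding),
# Jina AI > Cohere (reranking); fall back to the built-in local models.
EMBED_PRIORITY = ["JINA_AI_API_KEY", "GOOGLE_API_KEY", "OPENAI_API_KEY", "COHERE_API_KEY"]
RERANK_PRIORITY = ["JINA_AI_API_KEY", "COHERE_API_KEY"]

def pick_provider(configured_keys: set[str], priority: list[str]) -> str:
    """Return the first configured cloud provider key, else 'local'."""
    for name in priority:
        if name in configured_keys:
            return name
    return "local"  # Qwen3 ONNX/GGUF models, always available
```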
`API_KEYS` supports multiple providers in a single string:

```bash
API_KEYS=JINA_AI_API_KEY:jina_...,GOOGLE_API_KEY:AIza...,OPENAI_API_KEY:sk-...,COHERE_API_KEY:co-...
```
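That `ENV_VAR:key,...` format splits on commas between entries and on the first colon within each entry (keys themselves may contain colons). A sketch of a conforming parser, for illustration only:

```python
def parse_api_keys(value: str) -> dict[str, str]:
    """Parse the documented ENV_VAR:key,... format into a name -> key dict."""
    result: dict[str, str] = {}
    for pair in value.split(","):
        pair = pair.strip()
        if not pair:
            continue
        # Split only on the first colon so key values may contain colons.
        name, _, key = pair.partition(":")
        result[name] = key
    return result
```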
## LLM Configuration (3-Mode Architecture)
LLM access (for media analysis) supports 3 modes, resolved by priority:
| Priority | Mode | Config | Use case |
|---|---|---|---|
| 1 | Proxy | `LITELLM_PROXY_URL` + `LITELLM_PROXY_KEY` | Production (OCI VM, self-hosted gateway) |
| 2 | SDK | `API_KEYS` | Dev/local with direct API access |
| 3 | Local | Nothing needed | Offline; embedding/rerank only (no LLM) |
No cross-mode fallback — if proxy is configured but unreachable, calls fail (no silent fallback to direct API).
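The resolution order above can be expressed as a short first-match function. A sketch of the documented priority, not wet-mcp's actual implementation:

```python
def resolve_llm_mode(env: dict[str, str]) -> str:
    """Sketch of the documented priority: proxy > SDK > local.

    Note there is no cross-mode fallback: a configured-but-unreachable
    proxy still resolves to "proxy", and failures surface at call time.
    """
    if env.get("LITELLM_PROXY_URL") and env.get("LITELLM_PROXY_KEY"):
        return "proxy"
    if env.get("API_KEYS"):
        return "sdk"
    return "local"
```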
## SearXNG Configuration (2-Mode)
Web search is powered by SearXNG, a privacy-respecting metasearch engine.
| Mode | Config | Description |
|---|---|---|
| Embedded (default) | `WET_AUTO_SEARXNG=true` | Auto-installs and manages SearXNG as a subprocess. Zero config needed. |
| External | `WET_AUTO_SEARXNG=false` + `SEARXNG_URL=http://host:port` | Connects to a pre-existing SearXNG instance (e.g. Docker container, shared server). |
Embedded mode is best for local development and single-user deployments. On first run, wet-mcp automatically downloads and configures SearXNG.
External mode is recommended when:
- Running in Docker (use a separate SearXNG container)
- Sharing a SearXNG instance across multiple services
- SearXNG is already deployed on your infrastructure
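One way to set up external mode is to run SearXNG in its own container and point wet-mcp at it. This sketch assumes the official `searxng/searxng` Docker image and port 8080 (both are illustrative choices, not prescribed by wet-mcp):

```shell
# Run a standalone SearXNG instance (official image) on port 8080
docker run -d --name searxng -p 8080:8080 searxng/searxng:latest

# Point wet-mcp at it instead of the embedded subprocess
export WET_AUTO_SEARXNG=false
export SEARXNG_URL=http://localhost:8080
```

In the Docker setup from Quick Start, the equivalent would be setting these two variables in the `env` block and passing them through with `-e`.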
## Architecture

```
┌─────────────────────────────────────────────────────────┐
│                      MCP Client                         │
│              (Claude, Cursor, Windsurf)                 │
└─────────────────────┬───────────────────────────────────┘
                      │ MCP Protocol
                      v
┌─────────────────────────────────────────────────────────┐
│                   WET MCP Server                        │
│  ┌──────────┐ ┌──────────┐ ┌───────┐ ┌────────┐         │
│  │  search  │ │ extract  │ │ media │ │ config │         │
│  │ (search, │ │(extract, │ │(list, │ │(status,│         │
│  │ research,│ │  crawl,  │ │downld,│ │  set,  │         │
│  │  docs)   │ │   map)   │ │analyz)│ │ cache) │         │
│  └──┬───┬───┘ └────┬─────┘ └──┬────┘ └────────┘         │
│     │   │          │          │       + help tool       │
│     v   v          v          v                         │
│  ┌──────┐ ┌──────┐ ┌──────────┐ ┌──────────┐            │
│  │SearX │ │DocsDB│ │ Crawl4AI │ │ Reranker │            │
│  │NG    │ │FTS5+ │ │(Playwrgt)│ │(LiteLLM/ │            │
│  │      │ │sqlite│ │          │ │  Qwen3   │            │
│  │      │ │-vec  │ │          │ │  local)  │            │
│  └──────┘ └──────┘ └──────────┘ └──────────┘            │
│                                                         │
│  ┌──────────────────────────────────────────────────┐   │
│  │  WebCache (SQLite, TTL) │ rclone sync (docs)     │   │
│  └──────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘
```
## Build from Source

```bash
git clone https://github.com/n24q02m/wet-mcp
cd wet-mcp

# Setup (requires mise: https://mise.jdx.dev/)
mise run setup

# Run
uv run wet-mcp
```

### Docker Build

```bash
docker build -t n24q02m/wet-mcp:latest .
```

Requirements: Python 3.13 (not 3.14+)
## Also by n24q02m

| Server | Description | Install |
|---|---|---|
| better-notion-mcp | Notion API for AI agents | `npx -y @n24q02m/better-notion-mcp@latest` |
| mnemo-mcp | Persistent AI memory with hybrid search | `uvx mnemo-mcp@latest` |
| better-email-mcp | Email (IMAP/SMTP) for AI agents | `npx -y @n24q02m/better-email-mcp@latest` |
| better-godot-mcp | Godot Engine for AI agents | `npx -y @n24q02m/better-godot-mcp@latest` |
| better-telegram-mcp | Telegram Bot API + MTProto for AI agents | `uvx --python 3.13 better-telegram-mcp@latest` |
## Related Projects
- modalcom-ai-workers — GPU-accelerated AI workers on Modal.com (embedding, reranking)
- qwen3-embed — Local embedding/reranking library used by wet-mcp
## Contributing

See CONTRIBUTING.md

## License

MIT - See LICENSE