gwanjong-mcp

Stateful Pipeline MCP server for AI social agents — engage developer communities with minimal token usage.
Engage developer communities authentically. Comment to connect, post to promote.
Quick Start
```bash
# 1. Install
pip install "gwanjong-mcp[all]"

# 2. Configure at least one platform
mkdir -p ~/.gwanjong
cat > ~/.gwanjong/.env << 'EOF'
DEVTO_API_KEY=your_key_here
EOF

# 3. Verify setup
gwanjong-mcp  # starts MCP server (use with Claude Code, Cursor, etc.)
```
With Claude Code:
```bash
claude mcp add gwanjong-mcp -- gwanjong-mcp
claude
> "Find interesting MCP discussions and leave a helpful comment"
```
Autonomous mode (no LLM client needed):
```bash
pip install "gwanjong-mcp[all,autonomous]"
gwanjong-daemon --topics "MCP,LLM" --dry-run --max-cycles 1
```
See .env.example for all configuration options.
Philosophy
Two modes, one goal: building genuine presence in developer communities.
| Mode | Action | Goal |
|---|---|---|
| Comment | Reply to others' posts | Earn reputation through helpful, authentic engagement. No self-promotion. |
| Post | Publish original content | Share your projects, write-ups, and announcements. This is where promotion lives. |
Comments are for giving value — answering questions, sharing insights, joining discussions. The community notices when someone is genuinely helpful. Posts are for showing your work — project launches, technical deep dives, lessons learned.
Why This Exists
Typical MCP servers expose CRUD tools and let the LLM orchestrate everything. Leaving a single comment requires 9+ tool calls, 9+ LLM round trips, and the full tool description list resent every time.
Traditional MCP (14 tools, 9+ round trips):

```
LLM → list → LLM → trending → LLM → search → LLM → analyze → LLM → get_post
    → LLM → get_comments → LLM → preview → LLM → write → LLM
```

gwanjong-mcp (5 tools, 3 round trips):

```
LLM → scout → LLM → draft → LLM (generates content) → strike → done
```
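The savings above can be put in rough numbers. A back-of-envelope sketch, assuming each tool description costs about 150 prompt tokens (an illustrative figure, not a measurement):

```python
# Estimate prompt-token overhead per action: every LLM round trip
# resends the full tool-description list in the system prompt.
# The per-tool token count is an assumption for illustration.

def prompt_overhead(round_trips: int, tools: int, tokens_per_tool: int = 150) -> int:
    """Tokens spent just re-sending tool descriptions across a pipeline."""
    return round_trips * tools * tokens_per_tool

traditional = prompt_overhead(round_trips=9, tools=14)  # 9 * 14 * 150 = 18900
pipeline = prompt_overhead(round_trips=3, tools=5)      # 3 * 5 * 150 = 2250

print(traditional, pipeline, round(traditional / pipeline, 1))  # → 18900 2250 8.4
```

Under these assumptions, fewer tools times fewer round trips compounds to roughly an 8x reduction in description overhead alone, before counting relayed data.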
Design Principles
- Minimal tools — 5 total. Tool descriptions are included in every system prompt, so fewer = cheaper + more accurate.
- Server-side state — Scout results are cached on the server. The LLM doesn't relay data between tools.
- Server as cerebellum — Fetching, filtering, scoring, and analysis happen inside the server. The LLM only handles judgment and content generation.
- Compressed returns — Never dump 20 raw posts. The server scores and returns the top N as summaries.
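The "compressed returns" principle can be sketched in a few lines. The field names (`relevance`, `comments`) mirror the scout output shown later; the scoring formula here is a hypothetical stand-in, not the server's actual ranking:

```python
# Score candidates server-side and return only short summaries of the
# top N, instead of dumping raw posts into the LLM context.
# The scoring formula is an illustrative placeholder.

def top_opportunities(posts: list[dict], n: int = 3) -> list[dict]:
    scored = sorted(
        posts,
        key=lambda p: p["relevance"] * (1 + p["comments"] / 100),
        reverse=True,
    )
    return [
        {"id": f"opp_{i}", "title": p["title"][:80], "relevance": p["relevance"]}
        for i, p in enumerate(scored[:n])
    ]

posts = [
    {"title": "Best MCP servers?", "relevance": 0.91, "comments": 42},
    {"title": "Intro to LLMs", "relevance": 0.40, "comments": 5},
]
print(top_opportunities(posts, n=1))  # → [{'id': 'opp_0', 'title': 'Best MCP servers?', 'relevance': 0.91}]
```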
MCP Tools
| Tool | Role | What happens inside |
|---|---|---|
| `gwanjong_setup` | Onboarding | Check platform status → guide API key setup → save + test connection |
| `gwanjong_scout` | Reconnaissance | Trending + search + analyze + score → return top N opportunities |
| `gwanjong_draft` | Context gathering | Fetch target post + comment tree + tone analysis → return context summary |
| `gwanjong_strike` | Execution | Post comment/article/cross-post → return result URL |
| `_status` | Pipeline state | Show state fields + available/blocked tools (auto-generated by mcp-pipeline) |
Pipeline Flow
```
scout(topic, platforms)
  │ stores → opportunities
  │ Server internally: fetch trending + search + score + filter
  │ Returns: top N scored opportunities (~200 tokens)
  ▼
draft(opportunity_id)
  │ requires → opportunities
  │ stores → contexts
  │ Server internally: fetch post + comments + analyze tone
  │ Returns: context summary + suggested approach (~300 tokens)
  ▼
LLM generates content based on context
  ▼
strike(opportunity_id, action, content)
  │ requires → contexts
  │ Server internally: write via platform API + record history
  │ Returns: { url, status }
```
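The requires/stores gating above can be illustrated with a toy state holder. mcp-pipeline provides this declaratively; this standalone sketch only shows the rule that a tool stays blocked until its required state exists:

```python
# Toy model of stores/requires tool chaining: each tool declares what
# state it needs and what it writes; a tool whose requirements are
# missing is blocked. Purely illustrative, not the mcp-pipeline API.

class PipelineState:
    def __init__(self):
        self.store: dict[str, object] = {}

    def run(self, tool: str, requires: list[str], stores: str, result: object):
        missing = [k for k in requires if k not in self.store]
        if missing:
            raise RuntimeError(f"{tool} blocked: missing {missing}")
        self.store[stores] = result
        return result

state = PipelineState()
state.run("scout", requires=[], stores="opportunities", result=["opp_0"])
state.run("draft", requires=["opportunities"], stores="contexts", result={"opp_0": "..."})
# strike is now unblocked because "contexts" exists;
# calling draft before scout would have raised RuntimeError.
```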
Example: Commenting (Engagement)
User: "Find interesting MCP discussions and join in"
[1] scout(topic="MCP server", platforms=["devto", "reddit"])
→ Server scans trending + search across platforms → scores → returns top 3
```json
{
  "opportunities": [
    {"id": "opp_0", "platform": "devto",
     "title": "Best MCP servers for productivity?",
     "relevance": 0.91, "comments": 42,
     "reason": "Active discussion, directly relevant"}
  ]
}
```
[2] draft(opportunity_id="opp_0")
→ Server fetches full post + comment tree + tone
```json
{
  "title": "Best MCP servers for productivity?",
  "body_summary": "...",
  "top_comments": ["...", "..."],
  "tone": "technical, recommendation-seeking",
  "suggested_approach": "Share genuine experience, no self-promo"
}
```
→ LLM crafts a helpful, authentic reply
[3] strike(opportunity_id="opp_0", action="comment", content="...")
→ {"url": "https://dev.to/.../comment/...", "status": "posted"}
Example: Posting (Promotion)
User: "Write a post about mcp-pipeline on Dev.to"
[1] scout(topic="MCP token optimization", platforms=["devto"])
→ Find what's trending to inform angle and timing
[2] draft(opportunity_id="opp_0")
→ Gather context on existing coverage
[3] LLM writes an original article about the project
[4] strike(opportunity_id="opp_0", action="post", content="...")
→ {"url": "https://dev.to/sonaiengine/...", "status": "posted"}
Supported Platforms
| Platform | Protocol | Auth |
|---|---|---|
| Dev.to | REST API (httpx) | API Key |
| Bluesky | AT Protocol | App Password |
| Twitter/X | OAuth 1.0a (tweepy) | API Key + Token |
| Reddit | OAuth2 (asyncpraw) | Client ID + Secret |
Only platforms with configured API keys are activated. Others are silently skipped.
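That activation rule can be sketched as a simple check over the environment variables listed below. The variable names match the Environment Variables section; `active_platforms` itself is a hypothetical helper, not the package's API:

```python
# Activate only platforms whose required env vars are all set;
# silently skip the rest. Illustrative sketch.
import os

REQUIRED = {
    "devto": ["DEVTO_API_KEY"],
    "bluesky": ["BLUESKY_HANDLE", "BLUESKY_APP_PASSWORD"],
    "twitter": ["TWITTER_API_KEY", "TWITTER_API_SECRET",
                "TWITTER_ACCESS_TOKEN", "TWITTER_ACCESS_SECRET"],
    "reddit": ["REDDIT_CLIENT_ID", "REDDIT_CLIENT_SECRET",
               "REDDIT_USERNAME", "REDDIT_PASSWORD"],
}

def active_platforms(env=os.environ) -> list[str]:
    return [name for name, keys in REQUIRED.items()
            if all(env.get(k) for k in keys)]

print(active_platforms({"DEVTO_API_KEY": "abc"}))  # → ['devto']
```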
Install
```bash
# All platforms
pip install "gwanjong-mcp[all]"

# Specific platforms
pip install "gwanjong-mcp[devto]"
pip install "gwanjong-mcp[bluesky]"
pip install "gwanjong-mcp[twitter]"
pip install "gwanjong-mcp[reddit]"

# Development
git clone https://github.com/SonAIengine/gwanjong-mcp.git
cd gwanjong-mcp
pip install -e ".[all,dev]"
```
Environment Variables
Copy .env.example to .env and fill in the platforms you use:
```bash
# Dev.to — https://dev.to/settings/extensions
DEVTO_API_KEY=

# Bluesky — https://bsky.app/settings → App Passwords
BLUESKY_HANDLE=your.handle.bsky.social
BLUESKY_APP_PASSWORD=

# Twitter/X — https://developer.x.com/en/portal/dashboard
TWITTER_API_KEY=
TWITTER_API_SECRET=
TWITTER_ACCESS_TOKEN=
TWITTER_ACCESS_SECRET=

# Reddit — https://www.reddit.com/prefs/apps
REDDIT_CLIENT_ID=
REDDIT_CLIENT_SECRET=
REDDIT_USERNAME=
REDDIT_PASSWORD=
```
Claude Code Integration
```bash
# Register MCP server
claude mcp add gwanjong-mcp -- gwanjong-mcp

# Use with the gwanjong agent (~/.claude/agents/gwanjong.md)
claude agent gwanjong
> "Find interesting AI agent discussions and leave helpful comments"
> "Write a Dev.to post about mcp-pipeline"
```
Approval Workflow
Autonomous mode can stop before posting and enqueue generated content for review.
```bash
# Queue content instead of posting immediately
gwanjong-daemon --require-approval --max-cycles 1

# Review pending items
gwanjong-approval list
gwanjong-approval show 1

# Approve and execute strike immediately
gwanjong-approval approve 1

# Reject without posting
gwanjong-approval reject 2
```
If you run the dashboard, pending approvals are also visible and actionable from the UI:
```bash
gwanjong-dashboard
# open http://localhost:8585
```
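A toy model of the approval queue, assuming a simple pending/approved lifecycle (the real queue lives behind the `gwanjong-approval` CLI; all names here are hypothetical):

```python
# Drafted strikes are enqueued instead of posted, then approved or
# rejected later. Illustrative sketch of the lifecycle only.
from dataclasses import dataclass, field

@dataclass
class Pending:
    item_id: int
    action: str   # "comment" or "post"
    content: str
    status: str = "pending"

@dataclass
class ApprovalQueue:
    items: list = field(default_factory=list)

    def enqueue(self, action: str, content: str) -> int:
        self.items.append(Pending(len(self.items) + 1, action, content))
        return self.items[-1].item_id

    def approve(self, item_id: int) -> Pending:
        item = self.items[item_id - 1]
        item.status = "approved"  # the real daemon would execute strike here
        return item

    def reject(self, item_id: int) -> Pending:
        item = self.items[item_id - 1]
        item.status = "rejected"
        return item

q = ApprovalQueue()
q.enqueue("comment", "Great write-up!")
print(q.approve(1).status)  # → approved
```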
Architecture
```
gwanjong-mcp/
├── pyproject.toml
├── run.py                # Direct execution entry point
└── gwanjong_mcp/
    ├── __init__.py
    ├── __main__.py       # python -m gwanjong_mcp
    ├── server.py         # PipelineMCP + 5 tools + GwanjongState
    ├── setup.py          # Platform onboarding (guide/save/test)
    └── pipeline.py       # scout/draft/strike pipeline logic
```
Dependency Structure
```
┌─────────────────────────────────────────────────┐
│ Claude Agent (gwanjong.md)                      │
│ Persona · Content generation · Final judgment   │
└──────────────┬──────────────────────────────────┘
               │ 5 tools
┌──────────────▼──────────────────────────────────┐
│ gwanjong-mcp (this project)                     │
│ scout/draft/strike pipeline logic               │
│                                                 │
│ Dependencies:                                   │
│  ├── mcp-pipeline — Stateful MCP framework      │
│  ├── devhub — Multi-platform social API         │
│  └── graph-tool-call — Content search engine    │
└─────────────────────────────────────────────────┘
```
| Package | Role | Install |
|---|---|---|
| devhub-social | Unified async client for Dev.to, Bluesky, Twitter, Reddit | `pip install "devhub-social[all]"` |
| mcp-pipeline | Type-safe state + declarative stores/requires tool chaining | `pip install mcp-pipeline` |
| graph-tool-call | BM25 + graph expansion + wRRF content scoring | `pip install graph-tool-call` |
Development
```bash
./.venv/bin/python -m pytest -q                  # Default test suite (integration excluded)
./.venv/bin/python -m pytest -m integration -q   # Playwright/network integration tests
./.venv/bin/python -m mypy gwanjong_mcp/         # Type check
./.venv/bin/python -m ruff check gwanjong_mcp/   # Lint
./.venv/bin/python run.py                        # Local server
```
License