Macrocosmos MCP
MCP server for integrating Macrocosmos SN13 social media data into Claude Desktop and Cursor.
Official Macrocosmos Model Context Protocol (MCP) server that enables interaction with X (Twitter) and Reddit, powered by Data Universe (SN13) on Bittensor. This server allows MCP clients like Claude Desktop, Cursor, Windsurf, OpenAI Agents and others to fetch real-time social media data.
Quickstart with Claude Desktop
- Get your API key from Macrocosmos. There is a free tier with $5 of credits to start.
- Install `uv` (Python package manager) with `curl -LsSf https://astral.sh/uv/install.sh | sh`, or see the `uv` repo for additional install methods.
- Go to Claude > Settings > Developer > Edit Config > `claude_desktop_config.json` and include the following:
```json
{
  "mcpServers": {
    "macrocosmos": {
      "command": "uvx",
      "args": ["macrocosmos-mcp"],
      "env": {
        "MC_API": "<insert-your-api-key-here>"
      }
    }
  }
}
```
Available Tools
1. query_on_demand_data - Real-time Social Media Queries
Fetch real-time data from X (Twitter) and Reddit. Best for quick queries up to 1000 results.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `source` | string | REQUIRED. Platform: 'X' or 'REDDIT' (case-sensitive) |
| `usernames` | list | Up to 5 usernames. For X: @ is optional. Not available for Reddit |
| `keywords` | list | Up to 5 keywords. For Reddit: first item is the subreddit (e.g., 'r/MachineLearning') |
| `start_date` | string | ISO format (e.g., '2024-01-01T00:00:00Z'). Defaults to 24h ago |
| `end_date` | string | ISO format. Defaults to now |
| `limit` | int | Max results, 1-1000. Default: 10 |
| `keyword_mode` | string | 'any' (default) or 'all' |
Example prompts:
- "What has @elonmusk been posting about today?"
- "Get me the latest posts from r/bittensor about dTAO"
- "Fetch 50 tweets about #AI from the last week"
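For reference, the r/bittensor prompt above might translate into a `query_on_demand_data` tool call with arguments along these lines (a sketch assembled from the parameter table; the exact values are illustrative):

```json
{
  "source": "REDDIT",
  "keywords": ["r/bittensor", "dTAO"],
  "start_date": "2024-01-01T00:00:00Z",
  "limit": 50,
  "keyword_mode": "all"
}
```

Note that for Reddit the first `keywords` entry is the subreddit, and `usernames` is omitted because it is not available for Reddit.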
2. create_gravity_task - Large-Scale Data Collection
Create a Gravity task for collecting large datasets over 7 days. Use this when you need more than 1000 results.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `tasks` | list | REQUIRED. List of task objects (see below) |
| `name` | string | Optional name for the task |
| `email` | string | Email for notification when complete |
Task object structure:
```json
{
  "platform": "x",        // 'x' or 'reddit'
  "topic": "#Bittensor",  // For X: MUST start with '#' or '$'
  "keyword": "dTAO"       // Optional: filter within topic
}
```
Important: For X (Twitter), topics MUST start with # or $ (e.g., #ai, $BTC). Plain keywords are rejected.
Example prompts:
- "Create a gravity task to collect #Bittensor tweets for the next 7 days"
- "Start collecting data from r/MachineLearning about neural networks"
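Combining the two example prompts, a single `create_gravity_task` call might look like the following sketch (the `name` and `email` values are illustrative, and it assumes the Reddit `topic` field takes the subreddit name):

```json
{
  "name": "ai-research-collection",
  "email": "you@example.com",
  "tasks": [
    { "platform": "x", "topic": "#AI" },
    { "platform": "reddit", "topic": "r/MachineLearning", "keyword": "neural networks" }
  ]
}
```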
3. get_gravity_task_status - Check Collection Progress
Monitor your Gravity task and see how much data has been collected.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `gravity_task_id` | string | REQUIRED. The task ID from create_gravity_task |
| `include_crawlers` | bool | Include detailed stats. Default: True |
Returns: Task status, crawler IDs, records_collected, bytes_collected
Example prompts:
- "Check the status of my Bittensor data collection task"
- "How many records have been collected so far?"
4. build_dataset - Build & Download Dataset
Build a dataset from collected data before the 7-day completion.
Warning: This will STOP the crawler and de-register it from the network.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `crawler_id` | string | REQUIRED. Get from get_gravity_task_status |
| `max_rows` | int | Max rows to include. Default: 10000 |
| `email` | string | Email for notification when ready |
Example prompts:
- "Build a dataset from my Bittensor crawler with 5000 rows"
- "I have enough data, build the dataset now"
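The first prompt above would correspond to arguments like the following (the crawler ID is a placeholder obtained from `get_gravity_task_status`, and the email is illustrative):

```json
{
  "crawler_id": "<crawler-id-from-get_gravity_task_status>",
  "max_rows": 5000,
  "email": "you@example.com"
}
```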
5. get_dataset_status - Check Build Progress & Download
Check dataset build progress and get download links when ready.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `dataset_id` | string | REQUIRED. The dataset ID from build_dataset |
Returns: Build status (10 steps), and when complete: download URLs for Parquet files
Example prompts:
- "Is my dataset ready to download?"
- "Get the download link for my Bittensor dataset"
6. cancel_gravity_task - Stop Data Collection
Cancel a running Gravity task.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `gravity_task_id` | string | REQUIRED. The task ID to cancel |
7. cancel_dataset - Cancel Build or Purge Dataset
Cancel a dataset build or purge a completed dataset.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `dataset_id` | string | REQUIRED. The dataset ID to cancel/purge |
Example Workflows
Quick Query (On-Demand)
User: "What's the sentiment about $TAO on Twitter today?"
→ Uses query_on_demand_data to fetch recent tweets
→ Returns up to 1000 results instantly
Large Dataset Collection (Gravity)
User: "I need to collect a week's worth of #AI tweets for analysis"
1. create_gravity_task → Returns gravity_task_id
2. get_gravity_task_status → Monitor progress, get crawler_ids
3. build_dataset → When ready, build the dataset
4. get_dataset_status → Get download URL for Parquet file
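The four steps chain their outputs: the `gravity_task_id` from step 1 feeds step 2, the `crawler_ids` from step 2 feed step 3, and the `dataset_id` from step 3 feeds step 4. As an illustrative trace (all IDs are placeholders):

```json
[
  { "tool": "create_gravity_task",      "args": { "tasks": [{ "platform": "x", "topic": "#AI" }] } },
  { "tool": "get_gravity_task_status",  "args": { "gravity_task_id": "<task-id>" } },
  { "tool": "build_dataset",            "args": { "crawler_id": "<crawler-id>" } },
  { "tool": "get_dataset_status",       "args": { "dataset_id": "<dataset-id>" } }
]
```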
Example Prompts
On-Demand Queries
- "What has the president of the U.S. been saying over the past week on X?"
- "Fetch me information about what people are posting on r/politics today."
- "Please analyze posts from @elonmusk for the last week."
- "Get me 100 tweets about #Bittensor and analyze the sentiment"
Large-Scale Collection
- "Create a gravity task to collect data about #AI from Twitter and r/MachineLearning from Reddit"
- "Start a 7-day collection of $BTC tweets with keyword 'ETF'"
- "Check how many records my gravity task has collected"
- "Build a dataset with 10,000 rows from my crawler"
MIT License. Made with love by the Macrocosmos team.
File details
Details for the file `macrocosmos_mcp-0.2.1.tar.gz`.
File metadata
- Download URL: macrocosmos_mcp-0.2.1.tar.gz
- Size: 11.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.17
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `30976740edb86f17efc9ab64beed1959712f37a554f646d771c36a3b707fc79b` |
| MD5 | `bd0c52fe75ad1c433ca0ec44770515bd` |
| BLAKE2b-256 | `633a11961972d169f25a46c870e85261a90c8a785ee118113f566513e3aa0173` |
File details
Details for the file `macrocosmos_mcp-0.2.1-py3-none-any.whl`.
File metadata
- Download URL: macrocosmos_mcp-0.2.1-py3-none-any.whl
- Size: 13.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.17
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b34972a4627b477aac6a82831d813d79ce446ece603e0585daa388fa24053404` |
| MD5 | `418b9a40a64a456869d5dcf33692b31e` |
| BLAKE2b-256 | `927c6803f7cb12c436d5ffa284905d047c1077415baf785532158577a15cb8fa` |