YouTube MCP Server
FastMCP server template with mcp-refcache and Langfuse tracing
A production-ready MCP (Model Context Protocol) server for YouTube integration with intelligent caching via mcp-refcache. Search videos, retrieve transcripts, analyze channels, monitor live streams, and more - all optimized for AI agents with smart caching to minimize API quota usage.
Version: 0.0.0 (Experimental First Release)
Features
Search & Discovery
- Video Search - Find videos by keywords with metadata (title, description, views, etc.)
- Channel Search - Discover channels by query
- Live Stream Search - Find currently broadcasting live videos
Metadata & Analytics
- Video Details - Complete metadata including statistics (views, likes, comments)
- Channel Info - Detailed channel statistics and subscriber counts
- Live Status - Check if a video is currently streaming with viewer counts
Transcript Management
- Full Transcripts - Download complete video transcripts with timestamps
- Transcript Previews - Get summarized transcript snippets for quick context
- Chunked Access - Navigate large transcripts in manageable pieces
- Multi-Language Support - List and retrieve transcripts in available languages
Engagement & Live Chat
- Video Comments - Fetch top comments with engagement metrics
- Live Chat Monitoring - Real-time access to live stream chat messages
- Live Chat Pagination - Efficient polling for new chat messages
Performance & Caching
- Intelligent Multi-Tier Caching - Optimized for different data volatility:
  - youtube.content - Permanent caching for immutable content (transcripts)
  - youtube.api - 24h cache for general API data (video/channel metadata)
  - youtube.comments - 5m cache for rapidly changing comment data
  - youtube.search - 6h cache for search results
  - Live streaming - 30s-5m cache for real-time data
- Reference-Based Results - Large datasets returned as references to minimize context usage
- Preview Generation - Automatic previews for transcript and large data
- Smart Quota Management - Caching reduces API quota usage by ~75%
Prerequisites
- Python 3.12+
- uv (recommended) or pip
- YouTube Data API v3 Key - obtain one from the Google Cloud Console (see below)
Getting Your YouTube API Key
1. Go to the Google Cloud Console
2. Create a new project or select an existing one
3. Enable the YouTube Data API v3:
   - Navigate to "APIs & Services" > "Library"
   - Search for "YouTube Data API v3"
   - Click "Enable"
4. Create credentials:
   - Go to "APIs & Services" > "Credentials"
   - Click "Create Credentials" > "API Key"
   - Copy your API key
5. (Optional) Restrict your API key:
   - Click on the key to edit
   - Under "API restrictions", select "Restrict key"
   - Choose "YouTube Data API v3"
   - Save
Default Quota: 10,000 units/day (~100 searches or ~10,000 metadata requests)
Quick Start
Installation (Local)
# Clone the repository
git clone https://github.com/l4b4r4b4b4/yt-mcp
cd yt-mcp
# Install dependencies
uv sync
# Set your API key
export YOUTUBE_API_KEY="your-api-key-here"
# Run the server (stdio mode for Claude Desktop)
uv run yt-mcp stdio
Installation (Docker)
# Clone the repository
git clone https://github.com/l4b4r4b4b4/yt-mcp
cd yt-mcp
# Set your API key in .env file
echo "YOUTUBE_API_KEY=your-api-key-here" > .env
# Build and run with docker-compose
docker compose up
The server will be available at http://localhost:8000 in HTTP mode.
Configuration
Environment Variables
Set your YouTube API key via environment variable:
export YOUTUBE_API_KEY="your-youtube-api-key"
Or add to your shell profile (~/.zshrc, ~/.bashrc):
echo 'export YOUTUBE_API_KEY="your-key"' >> ~/.zshrc
Optional Langfuse Tracing:
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_SECRET_KEY="sk-lf-..."
export LANGFUSE_HOST="https://cloud.langfuse.com"
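As a sketch of how these variables are consumed (the actual server's configuration code may differ; the `load_config` name is illustrative):

```python
import os

def load_config() -> dict:
    """Read the server's environment variables.

    YOUTUBE_API_KEY is mandatory; the Langfuse keys are optional and
    tracing stays disabled when they are absent.
    """
    api_key = os.environ.get("YOUTUBE_API_KEY")
    if not api_key:
        raise RuntimeError(
            "YOUTUBE_API_KEY is required; see 'Getting Your YouTube API Key'"
        )
    return {
        "youtube_api_key": api_key,
        "langfuse_public_key": os.environ.get("LANGFUSE_PUBLIC_KEY"),
        "langfuse_secret_key": os.environ.get("LANGFUSE_SECRET_KEY"),
        # Langfuse cloud host is the documented default
        "langfuse_host": os.environ.get("LANGFUSE_HOST", "https://cloud.langfuse.com"),
    }
```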
Using with Claude Desktop
Add to your claude_desktop_config.json:
{
"mcpServers": {
"youtube": {
"command": "uv",
"args": ["--directory", "/path/to/yt-mcp", "run", "yt-mcp", "stdio"],
"env": {
"YOUTUBE_API_KEY": "your-api-key-here"
}
}
}
}
Using with Zed
Add to your Zed settings (.zed/settings.json or global settings):
{
"context_servers": {
"youtube-mcp": {
"command": {
"path": "uv",
"args": ["--directory", "/path/to/yt-mcp", "run", "yt-mcp", "stdio"],
"env": {
"YOUTUBE_API_KEY": "your-api-key-here"
}
}
}
}
}
Using with Docker
Production (docker-compose)
# Create .env file with your API key
echo "YOUTUBE_API_KEY=your-api-key" > .env
# Run production server
docker compose up
# Run in background
docker compose up -d
# View logs
docker compose logs -f
# Stop server
docker compose down
Development (with hot reload)
# Run development server with code volume mount
docker compose --profile dev up
Direct Docker Run
# Build the image
docker build -f docker/Dockerfile -t yt-mcp:latest .
# Run the container
docker run -p 8000:8000 \
-e YOUTUBE_API_KEY="your-api-key" \
yt-mcp:latest
# With Langfuse tracing
docker run -p 8000:8000 \
-e YOUTUBE_API_KEY="your-api-key" \
-e LANGFUSE_PUBLIC_KEY="pk-lf-..." \
-e LANGFUSE_SECRET_KEY="sk-lf-..." \
yt-mcp:latest
Available Tools
Search Tools
search_videos(query: str, max_results: int = 5)
Search for YouTube videos matching a query.
Parameters:
- query (string, required) - Search term (e.g., "NixOS tutorials", "vimjoyer nix")
- max_results (integer, optional) - Number of results, 1-50, default: 5
Returns:
[
{
"video_id": "abc123",
"title": "Video Title",
"description": "Video description...",
"url": "https://www.youtube.com/watch?v=abc123",
"thumbnail": "https://i.ytimg.com/vi/abc123/default.jpg",
"channel_title": "Channel Name",
"published_at": "2024-01-15T10:30:00Z"
}
]
Caching: 6 hours (youtube.search namespace)
Quota Cost: 100 units per request
Example:
Search for videos about "Nix flakes tutorial"
search_channels(query: str, max_results: int = 5)
Search for YouTube channels matching a query.
Parameters:
- query (string, required) - Channel search term
- max_results (integer, optional) - Number of results, 1-50, default: 5
Returns:
[
{
"channel_id": "UCxyz123",
"title": "Channel Name",
"description": "Channel description...",
"url": "https://www.youtube.com/channel/UCxyz123",
"thumbnail": "https://yt3.ggpht.com/...",
"published_at": "2020-05-10T08:00:00Z"
}
]
Caching: 6 hours (youtube.search namespace)
Quota Cost: 100 units per request
search_live_videos(query: str, max_results: int = 5)
Search for currently live YouTube videos.
Parameters:
- query (string, required) - Search query (e.g., "gaming live", "news live")
- max_results (integer, optional) - Number of results, 1-50, default: 5
Returns:
[
{
"video_id": "live123",
"title": "Live Stream Title",
"description": "Stream description...",
"url": "https://www.youtube.com/watch?v=live123",
"thumbnail": "https://i.ytimg.com/vi/live123/default.jpg",
"channel_title": "Streamer Name",
"published_at": "2024-01-20T15:00:00Z"
}
]
Caching: 6 hours (youtube.search namespace)
Quota Cost: 100 units per request
Metadata & Status Tools
get_video_details(video_id: str)
Get detailed information about a specific video.
Parameters:
- video_id (string, required) - YouTube video ID (e.g., "dQw4w9WgXcQ")
Returns:
{
"video_id": "abc123",
"title": "Video Title",
"description": "Full description...",
"url": "https://www.youtube.com/watch?v=abc123",
"thumbnail": "https://i.ytimg.com/vi/abc123/maxresdefault.jpg",
"channel_title": "Channel Name",
"published_at": "2024-01-15T10:30:00Z",
"view_count": "150000",
"like_count": "5000",
"comment_count": "300",
"duration": "PT15M30S",
"tags": ["nix", "linux", "tutorial"]
}
Caching: 24 hours (youtube.api namespace)
Quota Cost: 1 unit per request
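The `duration` field is an ISO 8601 duration string (`PT15M30S` = 15 minutes 30 seconds). A small helper like the following, not part of the server itself, can convert it to seconds when post-processing results:

```python
import re

def iso8601_duration_to_seconds(duration: str) -> int:
    """Convert a YouTube ISO 8601 duration like 'PT15M30S' to total seconds.

    Handles days/hours/minutes/seconds, which covers video durations;
    week components are not supported.
    """
    match = re.fullmatch(r"P(?:(\d+)D)?T?(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", duration)
    if not match:
        raise ValueError(f"Unrecognized duration: {duration}")
    days, hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return days * 86400 + hours * 3600 + minutes * 60 + seconds
```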
get_channel_info(channel_id: str)
Get detailed information about a YouTube channel.
Parameters:
- channel_id (string, required) - YouTube channel ID (e.g., "UCuAXFkgsw1L7xaCfnd5JJOw")
Returns:
{
"channel_id": "UCxyz123",
"title": "Channel Name",
"description": "Channel description...",
"url": "https://www.youtube.com/channel/UCxyz123",
"thumbnail": "https://yt3.ggpht.com/...",
"subscriber_count": "50000",
"video_count": "200",
"view_count": "5000000",
"published_at": "2020-05-10T08:00:00Z"
}
Caching: 24 hours (youtube.api namespace)
Quota Cost: 1 unit per request
is_live(video_id: str)
Check if a YouTube video is currently live streaming.
Parameters:
- video_id (string, required) - YouTube video ID to check
Returns:
{
"video_id": "live123",
"is_live": true,
"viewer_count": 1234,
"scheduled_start_time": "2024-01-20T15:00:00Z",
"actual_start_time": "2024-01-20T15:02:00Z",
"active_live_chat_id": "Cg0KC2xpdmUxMjM..."
}
Caching: 30 seconds (youtube.api namespace)
Quota Cost: 1 unit per request
Note: Use this to check status before accessing live chat.
Transcript Tools
list_available_transcripts(video_id: str)
List all available transcript languages for a video.
Parameters:
- video_id (string, required) - YouTube video ID
Returns:
{
"video_id": "abc123",
"available_languages": ["en", "es", "fr", "de"],
"transcript_info": [
{
"language": "en",
"language_code": "en",
"is_generated": false,
"is_translatable": true
},
{
"language": "es",
"language_code": "es",
"is_generated": true,
"is_translatable": false
}
]
}
Caching: Permanent (youtube.content namespace)
Quota Cost: 0 (uses youtube-transcript-api, not YouTube Data API)
Note: Always check this first before requesting transcripts.
get_video_transcript_preview(video_id: str, language: str = "en", max_chars: int = 2000)
Get a preview of the video transcript (first N characters).
Parameters:
- video_id (string, required) - YouTube video ID
- language (string, optional) - Language code (default: "en")
- max_chars (integer, optional) - Maximum characters to return (default: 2000)
Returns:
{
"video_id": "abc123",
"language": "en",
"preview": "First 2000 characters of transcript...",
"total_length": 50000,
"is_truncated": true
}
Caching: Permanent (youtube.content namespace)
Quota Cost: 0
Note: Use this for quick context before fetching full transcript.
get_full_transcript(video_id: str, language: str = "en")
Get the complete transcript of a video with timestamps.
Parameters:
- video_id (string, required) - YouTube video ID
- language (string, optional) - Language code (default: "en")
Returns:
{
"video_id": "abc123",
"language": "en",
"transcript": [
{
"text": "Hello everyone, welcome to this tutorial...",
"start": 0.0,
"duration": 3.5
},
{
"text": "Today we're going to learn about...",
"start": 3.5,
"duration": 4.2
}
],
"full_text": "Hello everyone, welcome to this tutorial. Today we're going to learn about..."
}
Caching: Permanent (youtube.content namespace)
Quota Cost: 0
Note: Large transcripts return a RefCache reference. Use get_cached_result to paginate or retrieve full data.
get_transcript_chunk(video_id: str, start_index: int = 0, chunk_size: int = 50, language: str = "en")
Get a specific chunk of transcript entries (for pagination).
Parameters:
- video_id (string, required) - YouTube video ID
- start_index (integer, optional) - Starting entry index, 0-based (default: 0)
- chunk_size (integer, optional) - Number of entries to return (default: 50)
- language (string, optional) - Language code (default: "en")
Returns:
{
"video_id": "abc123",
"language": "en",
"start_index": 0,
"chunk_size": 50,
"entries": [
{"text": "...", "start": 0.0, "duration": 3.5}
],
"total_entries": 250,
"has_more": true
}
Caching: Permanent (youtube.content namespace)
Quota Cost: 0
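A client can walk a long transcript with this tool by advancing `start_index` until `has_more` is false. A minimal sketch, where `fetch_chunk` stands in for the actual `get_transcript_chunk` call and `fake_fetch` fabricates responses shaped like the return value above:

```python
def iter_transcript_chunks(fetch_chunk, chunk_size=50):
    """Yield every transcript entry by paging through chunks.

    `fetch_chunk(start_index=..., chunk_size=...)` stands in for a
    get_transcript_chunk tool call returning the structure shown above.
    """
    start = 0
    while True:
        page = fetch_chunk(start_index=start, chunk_size=chunk_size)
        yield from page["entries"]
        if not page["has_more"]:
            break
        start += chunk_size

# Demonstration stub: 120 fake entries served in chunks
def fake_fetch(start_index, chunk_size):
    total = 120
    entries = [
        {"text": f"entry {i}", "start": float(i), "duration": 1.0}
        for i in range(start_index, min(start_index + chunk_size, total))
    ]
    return {
        "entries": entries,
        "total_entries": total,
        "has_more": start_index + chunk_size < total,
    }

all_entries = list(iter_transcript_chunks(fake_fetch, chunk_size=50))
```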
Engagement & Live Chat Tools
get_video_comments(video_id: str, max_results: int = 20)
Get top comments from a video with engagement metrics.
Parameters:
- video_id (string, required) - YouTube video ID
- max_results (integer, optional) - Number of comments, 1-100 (default: 20)
Returns:
{
"video_id": "abc123",
"comments": [
{
"author": "Username",
"text": "Great video! This really helped me understand...",
"like_count": 42,
"published_at": "2024-01-20T15:30:00Z",
"reply_count": 3
}
],
"total_returned": 20
}
Caching: 5 minutes (youtube.comments namespace)
Quota Cost: 1 unit per request
Note: Returns empty list if comments are disabled (not an error). Only top-level comments, no replies.
get_live_chat_id(video_id: str)
Get the live chat ID for a currently streaming video.
Parameters:
- video_id (string, required) - YouTube video ID of the live stream
Returns:
{
"video_id": "live123",
"live_chat_id": "Cg0KC2xpdmUxMjM...",
"is_live": true
}
Caching: 5 minutes (youtube.api namespace)
Quota Cost: 1 unit per request
Note: Use is_live first to verify video is streaming. Chat ID remains constant during stream.
get_live_chat_messages(video_id: str, max_results: int = 200, page_token: str | None = None)
Get recent live chat messages from a streaming video with pagination.
Parameters:
- video_id (string, required) - YouTube video ID of the live stream
- max_results (integer, optional) - Maximum messages to return, 1-2000 (default: 200)
- page_token (string, optional) - Pagination token from previous call (None for first call)
Returns:
{
"video_id": "live123",
"messages": [
{
"author": "ViewerName",
"text": "Great stream!",
"published_at": "2024-01-20T16:45:30Z",
"author_channel_id": "UCxyz..."
}
],
"total_returned": 50,
"next_page_token": "GgkKBxIFMTIzNDU",
"polling_interval_millis": 30000
}
Caching: 30 seconds (youtube.comments namespace)
Quota Cost: 1 unit per request
Polling Pattern:
1. First call: no page_token → get latest messages + next_page_token
2. Store next_page_token
3. Wait 30-60 seconds (respect polling_interval_millis)
4. Subsequent calls: pass page_token → get only NEW messages
5. Repeat steps 2-4 for continuous monitoring
Note: MCP is request/response (not true streaming). Agent must manually poll this tool repeatedly to see new messages.
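The polling pattern above can be sketched as a loop. Here `get_messages` is a stand-in for the `get_live_chat_messages` tool call, and `handle` is whatever the agent does with each new message:

```python
import time

def monitor_live_chat(get_messages, handle, polls=3):
    """Follow the documented polling pattern for a fixed number of polls.

    `get_messages(page_token=...)` stands in for the get_live_chat_messages
    tool; `handle` is called once per newly seen message.
    """
    token = None  # step 1: first call carries no page_token
    for _ in range(polls):
        batch = get_messages(page_token=token)
        for msg in batch["messages"]:
            handle(msg)
        token = batch["next_page_token"]  # step 2: store token for next call
        # step 3: respect the server-suggested polling interval
        time.sleep(batch["polling_interval_millis"] / 1000)
```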
Cache Management Tools
get_cached_result(ref_id: str, page: int | None = None, page_size: int | None = None, max_size: int | None = None)
Retrieve and paginate through cached results.
Parameters:
- ref_id (string, required) - Reference ID from cached tool (e.g., from large transcript)
- page (integer, optional) - Page number, 1-indexed
- page_size (integer, optional) - Items per page, 1-100
- max_size (integer, optional) - Maximum preview size in tokens
Returns:
{
"ref_id": "youtube.content:transcript_abc123_en",
"preview": [...],
"total_items": 250,
"page": 2,
"total_pages": 5
}
Note: Use this when a tool returns a ref_id instead of full data (for large results).
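A typical client-side pattern is to keep requesting pages until `total_pages` is reached. A sketch, where `get_cached_result` is a stand-in for the MCP tool of the same name and only the fields shown in the example return value are assumed:

```python
def collect_all_pages(get_cached_result, ref_id, page_size=50):
    """Gather every item behind a RefCache reference, page by page."""
    items = []
    page = 1
    while True:
        result = get_cached_result(ref_id, page=page, page_size=page_size)
        items.extend(result["preview"])
        if page >= result["total_pages"]:
            break
        page += 1
    return items
```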
Example Use Cases
Finding a Specific Video
Goal: Find Vimjoyer's video about Nix garbage collection that keeps only the last N generations
Workflow:
1. Search: "Search for videos by Vimjoyer about Nix garbage collection generations"
   → Returns list of videos with IDs
2. Preview: "Get transcript preview for video abc123"
   → Returns first 2000 characters to check relevance
3. Analyze: "Get full transcript for video abc123 and find the section about keeping last N generations"
   → Returns complete transcript with timestamps
4. Extract: AI analyzes transcript and returns relevant section with timestamp
Channel Analysis
Goal: Analyze a channel's recent content and engagement
Workflow:
1. Search: "Find the NixOS channel"
   → Returns channel ID
2. Info: "Get channel info for UC[channel-id]"
   → Returns subscriber count, video count, total views
3. Videos: "Search for recent videos from NixOS channel"
   → Returns latest video list
4. Engagement: "Get comments for video abc123"
   → Returns top comments with like counts
Live Stream Monitoring
Goal: Monitor a live stream and track chat activity
Workflow:
1. Find: "Search for live videos about Python programming"
   → Returns currently live streams
2. Check: "Is video live123 currently streaming?"
   → Confirms live status and viewer count
3. Connect: "Get live chat ID for video live123"
   → Returns chat ID needed for messages
4. Monitor: "Get live chat messages for video live123"
   → Returns recent messages + next_page_token
5. Poll: "Get live chat messages for video live123 with page_token=XYZ"
   → Returns only new messages since last call
6. Repeat: Wait 30-60 seconds, then repeat step 5
Transcript Analysis Across Languages
Goal: Find and compare transcripts in multiple languages
Workflow:
1. Search: "Search for videos about 'machine learning basics'"
   → Returns video IDs
2. Check: "List available transcripts for video abc123"
   → Returns ["en", "es", "fr", "de", "auto-generated"]
3. Compare: "Get transcript preview for abc123 in English"
   → Preview English version
4. Compare: "Get transcript preview for abc123 in Spanish"
   → Preview Spanish version
5. Analyze: AI compares content across languages
Caching Strategy
The server uses a multi-tier caching architecture (four namespaces across five policy tiers) optimized for different data volatility levels:
Tier 1: Search Results (6 hours)
- Namespace: youtube.search
- TTL: 6 hours (21,600 seconds)
- Size: 300 entries
- Use: Video search, channel search, live video search
- Rationale: Search rankings change throughout the day; 6h balances freshness with quota savings
Tier 2: API Metadata (24 hours)
- Namespace: youtube.api
- TTL: 24 hours (86,400 seconds)
- Size: 1000 entries
- Use: Video details, channel info
- Rationale: Video stats change daily but not hourly; 24h cache reduces quota by 24x
Tier 3: Comments & Engagement (5 minutes)
- Namespace: youtube.comments
- TTL: 5 minutes (300 seconds)
- Size: 500 entries
- Use: Video comments
- Rationale: Comments can change rapidly on viral videos; 5m balances real-time with quota
Tier 4: Immutable Content (Permanent)
- Namespace: youtube.content
- TTL: Permanent (no expiration)
- Size: 5000 entries
- Use: Video transcripts (all transcript tools)
- Rationale: Transcripts never change once published; permanent cache eliminates redundant fetches
Tier 5: Live Streaming (30 seconds - 5 minutes)
- Namespaces: youtube.api (live status), youtube.comments (chat messages)
- TTL: 30 seconds (live status/chat), 5 minutes (chat ID)
- Use: Live stream status, chat messages, chat IDs
- Rationale: Real-time data needs frequent updates but excessive polling wastes quota
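The namespace policies above can be summarized in one table. This dict is purely illustrative (it is not mcp-refcache's actual configuration API), with TTLs and sizes taken from the tiers listed above:

```python
# Cache policy per namespace, as described in the tiers above.
# ttl_seconds=None means permanent (no expiration); sizes are max entries.
CACHE_POLICIES = {
    "youtube.search":   {"ttl_seconds": 6 * 3600,  "max_entries": 300},
    "youtube.api":      {"ttl_seconds": 24 * 3600, "max_entries": 1000},
    "youtube.comments": {"ttl_seconds": 5 * 60,    "max_entries": 500},
    "youtube.content":  {"ttl_seconds": None,      "max_entries": 5000},
}
```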
RefCache Integration
Large results (transcripts, long comment lists) are automatically handled by RefCache:
- Small Results (≤2048 tokens): Returned inline directly to agent
- Large Results (>2048 tokens): Cached with ref_id + preview returned
- Pagination: Use get_cached_result(ref_id, page=N) to access specific pages
- Sample Previews: Large lists show representative samples in preview
Benefits:
- Minimizes context window pollution for agents
- Enables efficient pagination without re-fetching
- Preserves full data for detailed analysis when needed
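The inline-vs-reference decision can be sketched as follows. The ~4-characters-per-token estimate, the `package_result` helper, and the ref-id scheme are assumptions for illustration, not mcp-refcache's real API:

```python
import json

def package_result(data, cache, threshold_tokens=2048):
    """Return data inline if small, else cache it and return ref + preview.

    `cache` is a plain dict standing in for a RefCache namespace; token
    count is roughly estimated as serialized length / 4.
    """
    serialized = json.dumps(data)
    est_tokens = len(serialized) // 4  # rough ~4 chars/token assumption
    if est_tokens <= threshold_tokens:
        return {"data": data}  # small result: returned inline
    ref_id = f"youtube.content:item_{len(cache)}"  # hypothetical ref-id scheme
    cache[ref_id] = data  # full data preserved for later retrieval
    preview = data[:3] if isinstance(data, list) else serialized[:500]
    return {"ref_id": ref_id, "preview": preview}
```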
API Quota Management
Understanding Quotas
YouTube Data API v3 has daily quotas measured in "units":
- Default Quota: 10,000 units/day (free tier)
- Search Operation: 100 units each
- Metadata Operation: 1 unit each (video details, channel info, comments)
- Live Chat Messages: 1 unit per request
- Transcript Operations: 0 units (uses youtube-transcript-api, not YouTube Data API)
Quota Calculation Examples
Without Caching:
- 100 video searches = 10,000 units = entire daily quota
- 10,000 video detail requests = 10,000 units = entire daily quota
With Caching (6h TTL for search, 24h for metadata):
- Same 100 searches (6h cache) = 400 units/day (~75% savings)
- Same 10,000 metadata requests (24h cache) = ~420 units/day (~96% savings)
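Under the simplifying assumption that all requests hit the same cached key, the search arithmetic above works out like this (a sketch, not the server's accounting code):

```python
SEARCH_COST = 100   # units per search request
DAILY_QUOTA = 10_000

def daily_api_cost(requests_per_day, unit_cost, cache_ttl_hours):
    """Upper-bound daily quota cost when every request hits one cached key.

    Without a cache every request is billed; with a TTL cache at most
    24 / ttl refreshes per key reach the API per day (a simplification).
    """
    if cache_ttl_hours is None:
        return requests_per_day * unit_cost
    refreshes = min(requests_per_day, int(24 / cache_ttl_hours))
    return refreshes * unit_cost
```
With no cache, 100 searches of one query consume the entire 10,000-unit quota; with the 6-hour search cache, the same query is fetched at most four times a day.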
Best Practices
- Use transcript tools first - They cost 0 quota
- Search broadly, then get details - Search costs 100x more than metadata
- Cache effectively - Let the built-in caching do its job
- Batch operations - Group related requests in single session
- Monitor usage - Server returns quota errors with clear messages
Increasing Quota
If you need higher quota:
- Go to Google Cloud Console
- Navigate to your project → APIs & Services → YouTube Data API v3
- Click "Quotas" tab
- Request quota increase (requires billing account, but API is still free)
- Typical increases: 50,000 to 1,000,000 units/day
Docker Details
Image Sizes
- Base Image: 290MB (ghcr.io/l4b4r4b4b4/fastmcp-base:latest)
  - Python 3.12-slim + uv + dependencies
  - Shared across all FastMCP projects
- Production Image: 229MB (ghcr.io/l4b4r4b4b4/yt-mcp:latest)
  - Base + application code
  - Optimized for size and startup speed
Container Features
- Non-root user: Runs as appuser for security
- Health checks: Built-in health endpoint at /health
- Environment config: All settings via environment variables
- Multi-arch: Supports amd64 and arm64 (M1/M2 Macs)
- Streamable HTTP: Uses HTTP transport (recommended for Docker/remote)
Docker Compose Configuration
The docker-compose.yml includes three profiles:
- Production (default): docker compose up
  - Port 8000 exposed
  - Optimized production image
  - Auto-restart on failure
- Development: docker compose --profile dev up
  - Port 8000 exposed
  - Volume mount for hot reload
  - Development dependencies included
- Build: docker compose --profile build up base
  - Builds base image for publishing
  - Only used for releases
Troubleshooting
"Invalid API Key" Error
Symptoms:
Error: API key not valid. Please pass a valid API key.
Solutions:
- Verify key is set: echo $YOUTUBE_API_KEY
- Check for typos or extra spaces in key
- Verify key has YouTube Data API v3 enabled in Google Cloud Console
- Make sure key restrictions (if any) allow YouTube Data API v3
"Quota Exceeded" Error
Symptoms:
Error: The request cannot be completed because you have exceeded your quota.
Solutions:
- Wait until quota resets (midnight Pacific Time)
- Enable billing in Google Cloud Console for higher quota
- Use caching effectively (it's automatic, but check get_cached_result for large operations)
- Use transcript tools (0 quota cost) instead of search when possible
- Request quota increase from Google Cloud Console
"No Transcript Available" Error
Symptoms:
Error: No transcript found for this video
Solutions:
- Use list_available_transcripts first to check availability
- Try auto-generated transcripts: often available even without manual captions
- Some videos genuinely don't have transcripts (creator didn't enable)
- Check if video is age-restricted or private
"Comments Disabled" (Empty Result)
Symptoms:
{"video_id": "abc123", "comments": [], "total_returned": 0}
This is NOT an error - the video has comments disabled by the creator. The tool returns an empty list as expected behavior.
Docker: "Cannot connect to server"
Symptoms:
Error: Failed to connect to localhost:8000
Solutions:
- Verify container is running: docker compose ps
- Check container logs: docker compose logs -f
- Ensure port 8000 is not in use: lsof -i :8000 (macOS/Linux)
- Verify API key is set in .env file or docker-compose environment
- Check health: curl http://localhost:8000/health
Docker: "Rate limiting" or slow responses
Symptoms:
- Slow API responses
- Timeout errors
Solutions:
- YouTube API has rate limits - this is normal behavior
- Caching will improve performance after first requests
- For local development, use stdio mode instead of HTTP: uv run yt-mcp stdio
- Verify Docker has sufficient resources (memory, CPU)
Development
Setup Development Environment
# Using Nix (recommended)
nix develop
# Or install dependencies manually with uv
uv sync
Running Tests
# Run all tests
uv run pytest
# With coverage report
uv run pytest --cov
# Run specific test file
uv run pytest tests/test_server.py
# Watch mode (requires pytest-watch)
uv run ptw
Current Test Status: 178 tests passing, 76% code coverage
Linting and Formatting
# Check and fix linting issues
uv run ruff check . --fix
# Format code
uv run ruff format .
# Type checking
uv run mypy app
Project Structure
yt-mcp/
├── app/
│   ├── __init__.py
│   ├── __main__.py        # CLI entry point
│   ├── server.py          # Main MCP server with all tools
│   ├── tools/
│   │   ├── __init__.py
│   │   ├── youtube.py     # YouTube API integration
│   │   └── ...            # Other tool modules
│   ├── tracing.py         # Langfuse tracing integration
│   └── prompts.py         # MCP prompts
├── tests/
│   ├── conftest.py        # Pytest configuration
│   ├── test_server.py     # Server tests
│   └── test_youtube.py    # YouTube tool tests
├── docker/
│   ├── Dockerfile         # Production image
│   ├── Dockerfile.base    # Base image with dependencies
│   └── Dockerfile.dev     # Development image
├── .agent/                # Development notes and planning
├── pyproject.toml         # Dependencies and configuration
├── docker-compose.yml     # Container orchestration
├── flake.nix              # Nix development environment
└── README.md              # This file
Version 0.0.0 Release Notes
This is the first experimental release of the YouTube MCP server. It's published to test both the implementation and the release workflow.
What Works
- ✅ All 16 YouTube tools implemented and tested
- ✅ Comprehensive test suite (178 tests, 76% coverage)
- ✅ Multi-tier caching with RefCache integration
- ✅ Docker support (production + development)
- ✅ Langfuse tracing for observability
- ✅ Claude Desktop and Zed integration
Known Limitations
- This is version 0.0.0 - expect issues
- Limited real-world validation (this tests the release process)
- Documentation may have gaps or inaccuracies
- Docker images published but not battle-tested
Next Steps
- 0.0.1: Bug fixes and improvements from 0.0.0 feedback
- 0.0.x: Continued iteration and refinement
- 0.1.0: After 5-10 patch releases and proven stability
- 1.0.0: Production-ready after 6+ months of 0.x usage
We encourage feedback! Open issues on GitHub with any problems or suggestions.
Environment Variables Reference
| Variable | Description | Required | Default |
|---|---|---|---|
| YOUTUBE_API_KEY | YouTube Data API v3 key | Yes | None |
| LANGFUSE_PUBLIC_KEY | Langfuse tracing public key | No | None |
| LANGFUSE_SECRET_KEY | Langfuse tracing secret key | No | None |
| LANGFUSE_HOST | Langfuse host URL | No | https://cloud.langfuse.com |
| FASTMCP_PORT | Server port (HTTP mode) | No | 8000 |
| FASTMCP_HOST | Server host (HTTP mode) | No | 0.0.0.0 |
Contributing
See CONTRIBUTING.md for development guidelines and how to submit pull requests.
License
MIT License - see LICENSE for details.
Related Projects
- mcp-refcache - Reference-based caching for MCP servers
- FastMCP - High-performance MCP server framework
- Model Context Protocol - Official MCP specification
- YouTube Data API v3 - YouTube API documentation
- youtube-transcript-api - Transcript library
Acknowledgments
- Built on FastMCP and mcp-refcache libraries
- Uses Google's YouTube Data API v3
- Uses youtube-transcript-api for quota-free transcript access
- Langfuse for observability and tracing
- Docker for containerization
Questions or Issues? Open an issue on GitHub