
openai-tts-mcp-server MCP server

An MCP server integrating with OpenAI TTS (text-to-speech) API

Components

Resources

The server implements a simple note storage system with:

  • Custom note:// URI scheme for accessing individual notes
  • Each note resource has a name, a description, and a text/plain MIME type
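As an illustration of the note:// scheme (not the server's actual code), resolving such a URI to a note's content can be sketched with the standard library; the notes dict and read_note helper here are hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical in-memory note store standing in for the server's state.
notes: dict[str, str] = {"shopping": "milk, eggs"}

def read_note(uri: str) -> str:
    """Resolve a note:// URI to its text/plain content."""
    parsed = urlparse(uri)
    if parsed.scheme != "note":
        raise ValueError(f"Unsupported URI scheme: {parsed.scheme}")
    # For note://shopping, the note name lands in the netloc component.
    name = parsed.netloc or parsed.path.lstrip("/")
    if name not in notes:
        raise KeyError(f"Note not found: {name}")
    return notes[name]
```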

Prompts

The server provides a single prompt:

  • summarize-notes: Creates summaries of all stored notes
    • Optional "style" argument to control detail level (brief/detailed)
    • Generates prompt combining all current notes with style preference
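A minimal sketch of how such a prompt text might be assembled — the summarize_notes_prompt helper, its exact wording, and the sample notes are illustrative assumptions, not the server's actual implementation:

```python
# Hypothetical stored notes.
notes = {"ideas": "Try the new TTS voice.", "todo": "Publish the next release."}

def summarize_notes_prompt(style: str = "brief") -> str:
    """Combine all current notes into one prompt, honoring the style argument."""
    detail = "Give extensive details." if style == "detailed" else "Be concise."
    body = "\n".join(f"- {name}: {content}" for name, content in notes.items())
    return f"Summarize the following notes. {detail}\n{body}"
```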

Tools

The server implements one tool:

  • add-note: Adds a new note to the server
    • Takes "name" and "content" as required string arguments
    • Updates server state and notifies clients of resource changes
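In outline, the tool's effect on server state can be sketched as follows; the notes dict is a stand-in for the server's actual storage, and the client notification is only indicated by a comment:

```python
# Hypothetical server-side note store.
notes: dict[str, str] = {}

def add_note(name: str, content: str) -> None:
    """Store a note under the given name, overwriting any existing one."""
    notes[name] = content
    # A real MCP server would now notify connected clients that the
    # resource list changed, so they can re-read note:// resources.

add_note("greeting", "hello")
```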

Configuration

[TODO: Add configuration details specific to your implementation]

Quickstart

Install

Claude Desktop

On macOS: ~/Library/Application\ Support/Claude/claude_desktop_config.json

On Windows: %APPDATA%/Claude/claude_desktop_config.json

Development/Unpublished Servers Configuration

```
"mcpServers": {
  "openai-tts-mcp-server": {
    "command": "uv",
    "args": [
      "--directory",
      "/Users/tomek/workspace/openai-tts-mcp-server",
      "run",
      "openai-tts-mcp-server"
    ]
  }
}
```
Published Servers Configuration

```
"mcpServers": {
  "openai-tts-mcp-server": {
    "command": "uvx",
    "args": [
      "openai-tts-mcp-server"
    ]
  }
}
```

Development

Building and Publishing

To prepare the package for distribution:

  1. Sync dependencies and update the lockfile:

     uv sync

  2. Build package distributions:

     uv build

     This creates source and wheel distributions in the dist/ directory.

  3. Publish to PyPI:

     uv publish

Note: You'll need to set PyPI credentials via environment variables or command flags:

  • Token: --token or UV_PUBLISH_TOKEN
  • Or username/password: --username/UV_PUBLISH_USERNAME and --password/UV_PUBLISH_PASSWORD

Debugging

Since MCP servers run over stdio, debugging can be challenging. For the best debugging experience, we strongly recommend using the MCP Inspector.

You can launch the MCP Inspector via npm with this command:

npx @modelcontextprotocol/inspector uv --directory /Users/tomek/workspace/openai-tts-mcp-server run openai-tts-mcp-server

Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.
