
apcore-mcp

Automatic MCP Server & OpenAI Tools Bridge for apcore.

apcore-mcp turns any apcore-based project into an MCP Server and OpenAI tool provider — with zero code changes to your existing project.

┌──────────────────┐
│  django-apcore   │  ← your existing apcore project (unchanged)
│  flask-apcore    │
│  ...             │
└────────┬─────────┘
         │  extensions directory
         ▼
┌──────────────────┐
│    apcore-mcp    │  ← just install & point to extensions dir
└───┬──────────┬───┘
    │          │
    ▼          ▼
  MCP       OpenAI
 Server      Tools

Design Philosophy

  • Zero intrusion — your apcore project needs no code changes, no imports, no dependencies on apcore-mcp
  • Zero configuration — point to an extensions directory, everything is auto-discovered
  • Pure adapter — apcore-mcp reads from the apcore Registry; it never modifies your modules
  • Works with any xxx-apcore project — if it uses the apcore Module Registry, apcore-mcp can serve it

Installation

Install apcore-mcp alongside your existing apcore project:

pip install apcore-mcp

That's it. Your existing project requires no changes.

Requires Python 3.10+ and apcore >= 0.5.0.

Quick Start

Zero-code approach (CLI)

If you already have an apcore-based project with an extensions directory, just run:

apcore-mcp --extensions-dir /path/to/your/extensions

All modules are auto-discovered and exposed as MCP tools. No code needed.
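Conceptually, discovery just walks the extensions directory and treats each subdirectory as a module candidate. A minimal sketch of that idea (the function name `discover_module_dirs` is hypothetical; the real Registry does more, such as loading manifests and input schemas):

```python
from pathlib import Path

def discover_module_dirs(extensions_dir: str) -> list[str]:
    """Illustrative sketch: list candidate module directories,
    skipping hidden/private entries and plain files."""
    root = Path(extensions_dir)
    return sorted(
        p.name for p in root.iterdir()
        if p.is_dir() and not p.name.startswith((".", "_"))
    )
```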

Programmatic approach (Python API)

For tighter integration, or when you need module filtering or OpenAI tool export:

from apcore import Registry
from apcore_mcp import serve, to_openai_tools

registry = Registry(extensions_dir="./extensions")
registry.discover()

# Launch as MCP Server
serve(registry)

# Or export as OpenAI tools
tools = to_openai_tools(registry)

Integration with Existing Projects

Typical apcore project structure

your-project/
├── extensions/          ← modules live here
│   ├── image_resize/
│   ├── text_translate/
│   └── ...
├── your_app.py          ← your existing code (untouched)
└── ...

Adding MCP support

No changes to your project. Just run apcore-mcp alongside it:

# Install (one time)
pip install apcore-mcp

# Run
apcore-mcp --extensions-dir ./extensions

Your existing application continues to work exactly as before. apcore-mcp operates as a separate process that reads from the same extensions directory.

Adding OpenAI tools support

For OpenAI integration, a thin script is needed — but still no changes to your existing modules:

from apcore import Registry
from apcore_mcp import to_openai_tools

registry = Registry(extensions_dir="./extensions")
registry.discover()

tools = to_openai_tools(registry)
# Use with openai.chat.completions.create(tools=tools)

MCP Client Configuration

Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "apcore": {
      "command": "apcore-mcp",
      "args": ["--extensions-dir", "/path/to/your/extensions"]
    }
  }
}

Claude Code

Add to .mcp.json in your project root:

{
  "mcpServers": {
    "apcore": {
      "command": "apcore-mcp",
      "args": ["--extensions-dir", "./extensions"]
    }
  }
}

Cursor

Add to .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "apcore": {
      "command": "apcore-mcp",
      "args": ["--extensions-dir", "./extensions"]
    }
  }
}

Remote HTTP access

apcore-mcp --extensions-dir ./extensions \
    --transport streamable-http \
    --host 0.0.0.0 \
    --port 9000

Connect any MCP client to http://your-host:9000/mcp.

CLI Reference

apcore-mcp --extensions-dir PATH [OPTIONS]
Option            Default          Description
--extensions-dir  (required)       Path to apcore extensions directory
--transport       stdio            Transport: stdio, streamable-http, or sse
--host            127.0.0.1        Host for HTTP-based transports
--port            8000             Port for HTTP-based transports (1-65535)
--name            apcore-mcp       MCP server name (max 255 chars)
--version         package version  MCP server version string
--log-level       INFO             Logging: DEBUG, INFO, WARNING, ERROR

Exit codes: 0 normal, 1 invalid arguments, 2 startup failure.

Python API Reference

serve()

from apcore_mcp import serve

serve(
    registry_or_executor,        # Registry or Executor
    transport="stdio",           # "stdio" | "streamable-http" | "sse"
    host="127.0.0.1",           # host for HTTP transports
    port=8000,                   # port for HTTP transports
    name="apcore-mcp",          # server name
    version=None,                # defaults to package version
    on_startup=None,             # callback before transport starts
    on_shutdown=None,            # callback after transport completes
    tags=None,                   # filter modules by tags
    prefix=None,                 # filter modules by ID prefix
    log_level=None,              # logging level ("DEBUG", "INFO", etc.)
    validate_inputs=False,       # validate inputs against schemas
    metrics_collector=None,      # MetricsCollector for /metrics endpoint
)

Accepts either a Registry or Executor. When a Registry is passed, an Executor is created automatically.

/metrics Prometheus Endpoint

When metrics_collector is provided to serve(), a /metrics HTTP endpoint is exposed that returns metrics in Prometheus text exposition format.

  • Available on HTTP-based transports only (streamable-http, sse). Not available with stdio transport.
  • Returns Prometheus text format with Content-Type text/plain; version=0.0.4; charset=utf-8.
  • Returns 404 when no metrics_collector is configured.

from apcore.observability import MetricsCollector
from apcore_mcp import serve

collector = MetricsCollector()
serve(registry, transport="streamable-http", metrics_collector=collector)
# GET http://127.0.0.1:8000/metrics -> Prometheus text format

to_openai_tools()

from apcore_mcp import to_openai_tools

tools = to_openai_tools(
    registry_or_executor,       # Registry or Executor
    embed_annotations=False,    # append annotation hints to descriptions
    strict=False,               # OpenAI Structured Outputs strict mode
    tags=None,                  # filter by tags, e.g. ["image"]
    prefix=None,                # filter by module ID prefix, e.g. "image"
)

Returns a list of dicts directly usable with the OpenAI API:

import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Resize the image to 512x512"}],
    tools=tools,
)

Strict mode (strict=True): sets additionalProperties: false, makes all properties required (optional ones become nullable), removes defaults.
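The strict-mode rewrite can be pictured as a small schema transformation. The sketch below is illustrative only (it is not the actual apcore-mcp implementation, and `to_strict_schema` is a hypothetical name), showing the three rules on a plain JSON Schema dict:

```python
import copy

def to_strict_schema(schema: dict) -> dict:
    """Illustrative sketch of the strict-mode rules:
    additionalProperties: false, all properties required
    (optional ones become nullable), defaults removed."""
    s = copy.deepcopy(schema)
    props = s.get("properties", {})
    required = set(s.get("required", []))
    for name, prop in props.items():
        prop.pop("default", None)                 # strict mode drops defaults
        if name not in required and "type" in prop:
            prop["type"] = [prop["type"], "null"]  # optional -> nullable
    s["required"] = list(props)                   # every property required
    s["additionalProperties"] = False
    return s
```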

Annotation embedding (embed_annotations=True): appends [Annotations: read_only, idempotent] to descriptions.

Filtering: tags=["image"] or prefix="text" to expose a subset of modules.

Using with an Executor

If you need custom middleware, ACL, or execution configuration:

from apcore import Registry, Executor

registry = Registry(extensions_dir="./extensions")
registry.discover()
executor = Executor(registry)

serve(executor)
tools = to_openai_tools(executor)

Features

  • Auto-discovery — all modules in the extensions directory are found and exposed automatically
  • Three transports — stdio (default, for desktop clients), Streamable HTTP, and SSE
  • Annotation mapping — apcore annotations (readonly, destructive, idempotent) map to MCP ToolAnnotations
  • Schema conversion — JSON Schema $ref/$defs inlining, strict mode for OpenAI Structured Outputs
  • Error sanitization — ACL errors and internal errors are sanitized; stack traces are never leaked
  • Dynamic registration — modules registered/unregistered at runtime are reflected immediately
  • Dual output — same registry powers both MCP Server and OpenAI tool definitions

How It Works

Mapping: apcore to MCP

apcore                   MCP
module_id                Tool name
description              Tool description
input_schema             inputSchema
annotations.readonly     ToolAnnotations.readOnlyHint
annotations.destructive  ToolAnnotations.destructiveHint
annotations.idempotent   ToolAnnotations.idempotentHint
annotations.open_world   ToolAnnotations.openWorldHint
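The annotation mapping in the table above amounts to a field rename. A minimal sketch (illustrative; the real AnnotationMapper produces MCP ToolAnnotations objects rather than a plain dict):

```python
def to_mcp_annotations(annotations: dict) -> dict:
    """Illustrative sketch: rename apcore annotation fields to their
    MCP ToolAnnotations hint names, passing values through unchanged."""
    field_map = {
        "readonly": "readOnlyHint",
        "destructive": "destructiveHint",
        "idempotent": "idempotentHint",
        "open_world": "openWorldHint",
    }
    return {mcp: annotations[ap] for ap, mcp in field_map.items() if ap in annotations}
```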

Mapping: apcore to OpenAI Tools

apcore                     OpenAI
module_id (image.resize)   name (image-resize)
description                description
input_schema               parameters

Module IDs with dots are normalized to dashes for OpenAI compatibility (bijective mapping).
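The round trip can be sketched as a pair of string replacements. Note this naive version is only bijective when module IDs contain no dashes; the function names are hypothetical and the real IDNormalizer presumably guards that invariant:

```python
def to_openai_name(module_id: str) -> str:
    # OpenAI tool names do not allow dots; dashes are permitted.
    return module_id.replace(".", "-")

def to_module_id(tool_name: str) -> str:
    # Inverse mapping, valid when module IDs contain no dashes.
    return tool_name.replace("-", ".")
```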

Architecture

Your apcore project (unchanged)
    │
    │  extensions directory
    ▼
apcore-mcp (separate process / library call)
    │
    ├── MCP Server path
    │     SchemaConverter + AnnotationMapper
    │       → MCPServerFactory → ExecutionRouter → TransportManager
    │
    └── OpenAI Tools path
          SchemaConverter + AnnotationMapper + IDNormalizer
            → OpenAIConverter → list[dict]

Development

git clone https://github.com/aipartnerup/apcore-mcp-python.git
cd apcore-mcp-python
pip install -e ".[dev]"
pytest                           # 260 tests
pytest --cov                     # with coverage report

Project Structure

src/apcore_mcp/
├── __init__.py              # Public API: serve(), to_openai_tools()
├── __main__.py              # CLI entry point
├── adapters/
│   ├── schema.py            # JSON Schema conversion ($ref inlining)
│   ├── annotations.py       # Annotation mapping (apcore → MCP/OpenAI)
│   ├── errors.py            # Error sanitization
│   └── id_normalizer.py     # Module ID normalization (dot ↔ dash)
├── converters/
│   └── openai.py            # OpenAI tool definition converter
└── server/
    ├── factory.py           # MCP Server creation and tool building
    ├── router.py            # Tool call → Executor routing
    ├── transport.py         # Transport management (stdio/HTTP/SSE)
    └── listener.py          # Dynamic module registration listener

License

Apache-2.0
