MCP Integration for Next Gen UI Agent

Next Gen UI MCP Server Library

This module is part of the Next Gen UI Agent project.

This package wraps the Next Gen UI Agent in a Model Context Protocol (MCP) tool using the official Python MCP SDK.

Since MCP adoption is strong and there is an appetite to use the protocol for agentic AI as well, we also deliver the UI Agent this way. The most common way of utilising MCP tools is to hand them to an LLM, which decides when to execute them and with which parameters. That approach makes little sense for the Next Gen UI Agent: you want to call it at a specific moment, after the structured backend data for the response has been gathered, and you don't want an LLM to relay the prompt and JSON content, as that can introduce unnecessary errors. It is more natural and reliable to invoke this MCP tool directly, with explicit parameters, as part of your main application logic, which also saves LLM tokens and cost.
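A minimal sketch of such a direct invocation with the official MCP Python SDK client follows. It assumes a server started with --transport streamable-http --port 8000 and configured with its own inference provider (the default MCP sampling mode would additionally need a sampling-capable client); the input_data item shape (id/data keys) is an illustrative assumption, not a documented schema.

  import asyncio

  from mcp import ClientSession
  from mcp.client.streamable_http import streamablehttp_client


  async def main() -> None:
      # Connect to a server started with:
      #   python -m next_gen_ui_mcp --transport streamable-http --port 8000
      async with streamablehttp_client("http://127.0.0.1:8000/mcp") as (read, write, _):
          async with ClientSession(read, write) as session:
              await session.initialize()
              # Invoked deliberately from application logic, after the backend
              # data has been gathered; no LLM decides the call or arguments.
              result = await session.call_tool(
                  "generate_ui",
                  {
                      "user_prompt": "Show me details of Toy Story",
                      # The id/data keys are an assumption for illustration.
                      "input_data": [{"id": "movies", "data": '{"title": "Toy Story"}'}],
                  },
              )
              print(result.content)


  asyncio.run(main())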

Provides

  • __main__.py to run the MCP server as a standalone server
  • NextGenUIMCPAgent to embed the UI Agent MCP server in your Python code

Installation

pip install -U next_gen_ui_mcp

Depending on your use case, you may need additional packages for an inference provider or component-system renderers. More about this in the next sections.

Usage

Running the standalone server:

  # Run with MCP sampling (default - leverages client's LLM)
  python -m next_gen_ui_mcp

  # Run with LlamaStack inference
  python -m next_gen_ui_mcp --provider llamastack --model llama3.2-3b --llama-url http://localhost:5001

  # Run with LangChain OpenAI inference
  python -m next_gen_ui_mcp --provider langchain --model gpt-3.5-turbo

  # Run with LangChain via Ollama (local)
  python -m next_gen_ui_mcp --provider langchain --model llama3.2 --base-url http://localhost:11434/v1 --api-key ollama

  # Run with MCP sampling and custom max tokens
  python -m next_gen_ui_mcp --sampling-max-tokens 4096

  # Run with SSE transport (for web clients)
  python -m next_gen_ui_mcp --transport sse --host 127.0.0.1 --port 8000

  # Run with streamable-http transport
  python -m next_gen_ui_mcp --transport streamable-http --host 127.0.0.1 --port 8000

  # Run with rhds component system
  python -m next_gen_ui_mcp --component-system rhds

  # Run with rhds component system via SSE transport
  python -m next_gen_ui_mcp --transport sse --component-system rhds --port 8000

As the examples above show, you can configure either the llamastack or the langchain provider. To do so, you have to add the necessary dependencies to your Python environment, otherwise the application will complain that they are missing.

Similarly, pluggable component systems such as rhds also require certain imports, next_gen_ui_rhds_renderer in this particular case.

If you are running this from inside our Next Gen UI Agent GitHub repo, the Pants build system can satisfy all dependencies for you. In that case you can run the commands as follows:

  # Run with MCP sampling (default - leverages client's LLM)
  pants run libs/next_gen_ui_mcp/server_example.py:extended

  # Run with streamable-http transport and Red Hat Design System component system for rendering
  pants run libs/next_gen_ui_mcp/server_example.py:extended --run-args="--transport streamable-http --component-system rhds"

Testing with MCP Client:

As part of the GitHub repository we also provide an example client. This example client implementation uses the MCP SDK client libraries and Ollama as the inference provider for MCP sampling.

You can run it via this command:

pants --concurrent run libs/next_gen_ui_mcp/mcp_client_example.py

The --concurrent parameter is only there to allow calling it while the server is running via pants run; by default, Pants restricts parallel invocations.

Using the Next Gen UI MCP Agent through Llama Stack

The Llama Stack documentation for tools nicely shows how to register an MCP server, and it also shows the code below for invoking a tool directly:

  result = client.tool_runtime.invoke_tool(tool_name="generate_ui", kwargs=input_data)
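Before such a direct call, the server can also be registered as a toolgroup. A hedged sketch based on the Llama Stack tools documentation follows; the toolgroup_id and the endpoint URI are illustrative assumptions for a server started with --transport sse --port 8000:

  from llama_stack_client import LlamaStackClient

  client = LlamaStackClient(base_url="http://localhost:5001")

  # Register the running MCP server as a toolgroup; the toolgroup_id is a
  # hypothetical name and the endpoint URI assumes --transport sse --port 8000.
  client.toolgroups.register(
      toolgroup_id="mcp::next_gen_ui",
      provider_id="model-context-protocol",
      mcp_endpoint={"uri": "http://localhost:8000/sse"},
  )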

Available MCP Tools

generate_ui

The main tool that wraps the entire Next Gen UI Agent functionality.

This single tool handles:

  • Component selection based on user prompt and data
  • Data transformation to match selected components
  • Design system rendering to produce final UI

Parameters:

  • user_prompt (str): the user's prompt whose response we want to enrich with UI components
  • input_data (List[Dict]): a list of input data to render within the UI components

Returns:

  • List of rendered UI components ready for display
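As a minimal sketch of consuming that return value, assuming an already-initialized MCP ClientSession (see the client sketch earlier), the rendered components can be read from the tool result's content blocks:

  from mcp import ClientSession
  from mcp.types import TextContent


  async def render_components(session: ClientSession, arguments: dict) -> list[str]:
      # Tool output arrives as MCP content blocks; each rendered UI
      # component is carried in a TextContent item.
      result = await session.call_tool("generate_ui", arguments)
      return [b.text for b in result.content if isinstance(b, TextContent)]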

Available MCP Resources

system://info

Returns system information about the Next Gen UI Agent including:

  • Agent name
  • Component system being used
  • Available capabilities
  • Description
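A minimal sketch of reading this resource with the MCP Python SDK, again assuming an initialized ClientSession:

  from mcp import ClientSession
  from pydantic import AnyUrl


  async def show_system_info(session: ClientSession) -> None:
      # Resources are addressed by URI; system://info returns agent metadata.
      result = await session.read_resource(AnyUrl("system://info"))
      for content in result.contents:
          print(content)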


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution


next_gen_ui_mcp-0.2.2-py3-none-any.whl (9.9 kB, uploaded for Python 3)

File details

Details for the file next_gen_ui_mcp-0.2.2-py3-none-any.whl.

Hashes for next_gen_ui_mcp-0.2.2-py3-none-any.whl

  Algorithm    Hash digest
  SHA256       d47845a8d60f7f41f539145763545fd364658d05e6519e4a99dcc860b9ff1ce8
  MD5          685eb32373144d3f10ff159b2329b057
  BLAKE2b-256  c5377b9e165ec9b0744ad149fe36699590175bca336d619f33729e01b521856c

