MCP Integration for Next Gen UI Agent

Project description

Next Gen UI MCP Server

This package wraps the Next Gen UI Agent in a Model Context Protocol (MCP) tool using the standard MCP SDK. Given how strong MCP adoption is these days, and the appetite to use the protocol for agentic AI as well, we wanted to offer this way of consuming our agent too. The most common way of utilising MCP tools is to hand them to an LLM, which chooses a tool and executes it with certain parameters. That approach does not suit the Next Gen UI Agent: you want to call it at a specific moment, after gathering the data for a response, and you do not want an LLM to relay the prompt and JSON content, as that may introduce unnecessary errors into the content. It is more natural and reliable to invoke this MCP tool directly, with explicit parameters, as part of your main application logic.

Installation

pip install -U next_gen_ui_mcp

Depending on your use case you may need additional packages for an inference provider or for component-system renderers. More about this in the next sections.

Usage

Running the standalone server:

  # Run with MCP sampling (default - leverages client's LLM)
  python -m next_gen_ui_mcp

  # Run with LlamaStack inference
  python -m next_gen_ui_mcp --provider llamastack --model llama3.2-3b --llama-url http://localhost:5001

  # Run with LangChain OpenAI inference
  python -m next_gen_ui_mcp --provider langchain --model gpt-3.5-turbo

  # Run with LangChain via Ollama (local)
  python -m next_gen_ui_mcp --provider langchain --model llama3.2 --base-url http://localhost:11434/v1 --api-key ollama

  # Run with MCP sampling and custom max tokens
  python -m next_gen_ui_mcp --sampling-max-tokens 4096

  # Run with SSE transport (for web clients)
  python -m next_gen_ui_mcp --transport sse --host 127.0.0.1 --port 8000

  # Run with streamable-http transport
  python -m next_gen_ui_mcp --transport streamable-http --host 127.0.0.1 --port 8000

  # Run with the rhds component system
  python -m next_gen_ui_mcp --component-system rhds

  # Run with rhds component system via SSE transport
  python -m next_gen_ui_mcp --transport sse --component-system rhds --port 8000

As the above examples show, you can configure a llamastack or langchain inference provider. To do so you have to add the necessary dependencies to your Python environment, otherwise the application will complain that they are missing.

Similarly, pluggable component systems such as rhds require additional packages, next_gen_ui_rhds_renderer in this particular case.

If you are running this from inside our Next Gen UI Agent GitHub repository, the Pants build system can satisfy all dependencies for you. In that case you can run the commands as follows:

  # Run with MCP sampling (default - leverages client's LLM)
  pants run libs/next_gen_ui_mcp/server_example.py:extended

  # Run with SSE transport and Red Hat Design System component system for rendering
  pants run libs/next_gen_ui_mcp/server_example.py:extended --run-args="--transport sse --component-system rhds"

Testing with MCP Client:

As part of the GitHub repository we also provide an example client. It uses the MCP SDK client libraries and Ollama as the inference provider for MCP sampling.

You can run it via this command:

pants --concurrent run libs/next_gen_ui_mcp/mcp_client_example.py

The --concurrent flag is only needed so you can invoke the client while the server is already running via pants run; by default Pants restricts parallel invocations.

Using NextGenUI MCP Agent through Llama Stack

The Llama Stack documentation for tools nicely shows how to register an MCP server, and also includes code for invoking a tool directly:

result = client.tool_runtime.invoke_tool(tool_name="generate_ui", kwargs=input_data)

Available MCP Tools

generate_ui

The main tool that wraps the entire Next Gen UI Agent functionality. This single tool handles:

  • Component selection based on user prompt and data
  • Data transformation to match selected components
  • Design system rendering to produce final UI

Parameters:

  • user_prompt (str): the user's prompt that the UI components should enrich
  • input_data (List[Dict]): List of input data to render within the UI components

Returns:

  • List of rendered UI components ready for display
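As a pure-Python illustration, the arguments for a generate_ui call might be assembled like this; the "id"/"data" keys inside each input-data item are an assumed convention, not part of the documented signature.

```python
import json

# The user's prompt that the UI components should enrich.
user_prompt = "Show me details of the movie Toy Story"

# List of input data items gathered by your application logic beforehand;
# the "id"/"data" keys are a hypothetical shape for illustration only.
input_data = [
    {"id": "movies", "data": json.dumps({"title": "Toy Story", "year": 1995})}
]

arguments = {"user_prompt": user_prompt, "input_data": input_data}
```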

Available MCP Resources

system://info

Returns system information about the Next Gen UI Agent including:

  • Agent name
  • Component system being used
  • Available capabilities
  • Description

Download files


Source Distributions

No source distribution files are available for this release.

Built Distribution

next_gen_ui_mcp-0.2.1-py3-none-any.whl (9.4 kB, Python 3)

File details

Hashes for next_gen_ui_mcp-0.2.1-py3-none-any.whl:

  • SHA256: c0fc1c920c443520a38299bcd772289cdce3445e8e991e0f22f66d628e3f6bae
  • MD5: efddbfbf69c693ae08f0d6d9926e3ae2
  • BLAKE2b-256: 7477bd9e29c131ab6d3fd4ccb93fb99b5658ccefaed07d5cd4bd8e5d89b43cc5

