LangGraph integration for Next Gen UI Agent



This module is part of the Next Gen UI Agent project.


Support for LangGraph framework.

Provides

  • NextGenUILangGraphAgent - takes all tool messages from the provided LangGraph state and processes their data into UI components.
    • The tool name is used as the InputData.type for the UI Agent, so distinct configurations (input data transformations, defined UI components - dynamic or hand-built, etc.) can be applied based on it.
    • AgentOutputState provides access to the objects from all UI Agent processing phases, including a list of data processing errors.
    • If component_system is not None, the design_system_handler phase is also executed, which adds to the output state:
      • the renditions and complete ui_blocks lists
      • Tool messages with the results, both for successfully processed data and for processing errors, plus one AI message containing info about those tool calls.
      • Tool messages representing successful processing contain the rendered output (frontend code for the chosen component_system). You can use the output_messages_with_ui_blocks agent configuration to get the complete UIBlock there instead.
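Once the graph has been invoked, these fields can be read directly from the result. A minimal sketch of accessing them, using a stand-in dict rather than a real invocation; the renditions and ui_blocks key names match the example output further below:

```python
# Sketch: reading AgentOutputState fields from an invocation result.
# The "renditions" and "ui_blocks" keys match the example output in this
# README; the response here is a stand-in dict, not a real agent call.
ngui_response = {
    "renditions": [{"content": "{...rendered component...}"}],
    "ui_blocks": [{"id": "abc", "rendering": {"component_system": "json"}}],
}

first_rendition = ngui_response["renditions"][0]["content"]
first_block = ngui_response["ui_blocks"][0]
```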

Installation

pip install -U next_gen_ui_langgraph

Example

Get NextGenUI agent

Use the NextGenUILangGraphAgent class and pass your model to get a standard LangGraph agent.

from next_gen_ui_langgraph import NextGenUILangGraphAgent
from langchain_openai import ChatOpenAI

llm_settings = {
    "model": "llama3.2",
    "base_url": "http://localhost:11434/v1",
    "api_key": "ollama",
    "temperature": 0,
}
model = ChatOpenAI(**llm_settings)

ngui_agent = NextGenUILangGraphAgent(model).build_graph()
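The compiled graph can then be invoked like any LangGraph graph. A minimal async wrapper as a sketch only; it assumes a reachable LLM endpoint and a state that already contains the tool messages to process:

```python
import asyncio


async def render_ui(ngui_agent, state: dict) -> dict:
    """Invoke the Next Gen UI graph on a LangGraph state (sketch).

    `state` is expected to carry the assistant's messages, including the
    ToolMessages whose data should be turned into UI components.
    """
    cfg = {"configurable": {"component_system": "json"}}
    return await ngui_agent.ainvoke(state, cfg)
```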

Integrate NextGenUI agent in your assistant workflow

This complete example shows how a movies ReAct agent gets data about a movie, and how its response is then passed to the Next Gen UI Agent.

import asyncio
import json
import os

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from next_gen_ui_langgraph import NextGenUILangGraphAgent

movie_toy_story = [
    {
        "movie": {
            "languages": ["English"],
            "year": 1995,
            "imdbId": "0114709",
            "runtime": 81,
            "imdbRating": 8.3,
            "movieId": "1",
            "countries": ["USA"],
            "imdbVotes": 591836,
            "title": "Toy Story",
            "url": "https://themoviedb.org/movie/862",
            "revenue": 373554033,
            "tmdbId": "862",
            "plot": "A cowboy doll is profoundly threatened and jealous when a new spaceman figure supplants him as top toy in a boy's room.",
            "posterUrl": "https://image.tmdb.org/t/p/w440_and_h660_face/uXDfjJbdP4ijW5hWSBrPrlKpxab.jpg",
            "released": "2022-11-02",
            "trailerUrl": "https://www.youtube.com/watch?v=v-PjgYDrg70",
            "budget": 30000000,
            "actors": ["Jim Varney", "Tim Allen", "Tom Hanks", "Don Rickles"],
        },
    }
]

if not os.environ.get("OPENAI_API_KEY"):
    # getpass.getpass("Enter your OpenAI API key: ")
    os.environ["OPENAI_API_KEY"] = "ollama"

llm = ChatOpenAI(model="llama3.2", base_url="http://localhost:11434/v1")


# Movies Agent
# Search movie tool
def search_movie(title: str):
    """Call to find a movie.

    Args:
        title: Movie title, e.g. 'Toy Story'
    """
    if "toy story" in title.lower():
        print(f"Returning JSON payload of '{title}' movie")
        return json.dumps(movie_toy_story, default=str)
    return None


movies_agent = create_react_agent(
    model=llm,
    tools=[search_movie],
    prompt="You are a useful movies assistant that answers user questions",
)

# Next Gen UI Agent - Build it as Standard LangGraph agent
ngui_agent = NextGenUILangGraphAgent(model=llm).build_graph()
component_system = "json"
# component_system = "rhds"  # use "rhds" if you have the next_gen_ui_rhds_renderer package installed
ngui_cfg = {"configurable": {"component_system": component_system}}


def run() -> None:
    # Run Movies Agent to get raw movie data and answer
    prompt = "Play Toy Story movie trailer"
    # prompt = "Show me the poster of Toy Story"
    # prompt = "Tell me details about Toy Story, including poster"
    movies_response = movies_agent.invoke(
        {"messages": [{"role": "user", "content": prompt}]}
    )
    print("===Movies Text Answer===", movies_response["messages"][-1].content)

    # Run NGUI Agent to get UI component as JSON for client-side rendering
    ngui_response = asyncio.run(
        # Run Next Gen UI Agent. Pass movies agent response directly.
        ngui_agent.ainvoke(movies_response, ngui_cfg),
    )

    print(f"===Next Gen UI {component_system} Rendition===", ngui_response["renditions"][0].content)
    print("===Next Gen UI Block===", ngui_response["ui_blocks"][0])


if __name__ == "__main__":
    run()

Running this assistant with the user question "Play Toy Story movie trailer" returns this output:

===Movies Text Answer===
 Here's the answer to the original user question:

[Intro music plays]

Narrator (in a deep, dramatic voice): "In a world where toys come to life..."

[Scene: Andy's room, toys are scattered all over the floor. Woody, a pull-string cowboy toy, is centered on a shelf.]

Narrator: "One toy stands tall."

[Scene: Close-up of Woody's face]

===Next Gen UI json Rendition===
{
    'component': 'video-player',
    'id': 'call_zomga3r3',
    'title': 'Toy Story Trailer',
    'video': 'https://www.youtube.com/embed/v-PjgYDrg70',
    'video_img': 'https://img.youtube.com/vi/v-PjgYDrg70/maxresdefault.jpg'
}

===Next Gen UI Block===
{
    "id": "iuhekjge",
    "rendering": {
        "component_system": "json",
        "mime_type": "application/json",
        "content": "{'component':'videoplayer', ...}"
    },
    "configuration": {
        ...
    }
}
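On the client side, a "json" rendition can drive rendering by dispatching on its component field. A hypothetical sketch using the video-player payload printed above; the HTML template and the render helper are illustrative, not part of the library:

```python
# Hypothetical client-side dispatch on a "json" component_system rendition.
# The payload mirrors the sample output above.
rendition = {
    "component": "video-player",
    "title": "Toy Story Trailer",
    "video": "https://www.youtube.com/embed/v-PjgYDrg70",
}


def render(rendition: dict) -> str:
    """Turn a rendition payload into an HTML snippet (illustrative)."""
    if rendition["component"] == "video-player":
        return (
            f'<iframe src="{rendition["video"]}" '
            f'title="{rendition["title"]}"></iframe>'
        )
    raise ValueError(f"Unsupported component: {rendition['component']}")


html = render(rendition)
```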
