
A Streamlit UI for LLM chat applications with persistence and chat history

Project description

UI4AI


A simple, lightweight, and plug-and-play Streamlit-based UI for LLM chatbot applications with ChatGPT-style features.


Features

  • Plug in your own generate_response function (full responses) or generate_response_stream (streaming)
  • Built-in sidebar history and session management
  • Welcome screen with optional suggestion chips when starting a new chat
  • Custom avatars for user and assistant messages
  • Optional: title generation, token counting, max history control
  • Editable conversation titles and persistent sessions (survives refresh/restart)
  • Optional conversation search

Installation

pip install UI4AI

Minimal usage (frontend only, no API key)

from UI4AI import run_chat

def generate_response(messages):
    return f"You said: {messages[-1]['content']}"

run_chat(generate_response=generate_response)

Run: streamlit run app.py

For OpenAI: use run_chat_openai() (requires API key).


Basic usage (customize with parameters)

Use run_chat() with your own generate_response:

from UI4AI import run_chat
from openai import OpenAI

client = OpenAI(api_key="<YOUR_API_KEY>")

def generate_response(messages):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        temperature=0.7,
    )
    return response.choices[0].message.content or ""

run_chat(
    generate_response=generate_response,
    page_title="My Chatbot",
    header_title="My Chatbot",
)

Or use run_chat_openai() and override any parameter:

run_chat_openai(
    page_title="My Bot",
    system_prompt="You are a helpful assistant.",
    model="gpt-4o",
)

Run the app: streamlit run app.py


Examples

File | Description
examples/minimal_example.py | Frontend only, no API key
examples/simple_example.py | Echo bot with more responses
examples/openai_example.py | OpenAI (requires API key)
examples/base_example.py | Full template with all parameters documented

Streaming (typewriter effect)

Pass a generator that yields string chunks to get a ChatGPT-like streaming response:

from UI4AI import run_chat
from openai import OpenAI

client = OpenAI(api_key="<YOUR_API_KEY>")

def generate_response_stream(messages):
    stream = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        stream=True,
    )
    for chunk in stream:
        if chunk.choices[0].delta.content:
            yield chunk.choices[0].delta.content

run_chat(
    generate_response_stream=generate_response_stream,
    page_title="Streaming Chat",
)

Welcome screen and avatars

run_chat(
    generate_response=my_response_fn,
    welcome_title="Welcome",
    welcome_message="How can I help you today?",
    suggestions=[
        "Explain quantum computing in simple terms",
        "Write a short poem",
        "Help me debug this code",
    ],
    user_avatar="🧑",
    assistant_avatar="🤖",
)

run_chat_openai parameters

Parameter | Type | Default | Description
api_key | str or None | None | OpenAI API key (or use the OPENAI_API_KEY env var / sidebar prompt)
model | str | "gpt-4o-mini" | Model name
temperature | float | 0.7 | Sampling temperature
stream | bool | False | Use streaming responses
**kwargs | — | — | Any run_chat parameter (e.g. page_title, system_prompt)
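The api_key fallback described above can be pictured with a small helper. This is a sketch of the documented lookup order only (resolve_api_key is a hypothetical name, not part of UI4AI's API): an explicit argument wins, otherwise the OPENAI_API_KEY environment variable is used.

```python
import os

def resolve_api_key(explicit_key=None):
    # Hypothetical helper mirroring the documented precedence:
    # an explicit api_key argument wins; otherwise fall back to
    # the OPENAI_API_KEY environment variable (None if unset).
    return explicit_key or os.environ.get("OPENAI_API_KEY")
```

With this precedence, calling run_chat_openai() without an api_key can still pick up a key exported in your shell before launching Streamlit.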

run_chat parameter reference

Parameter | Type | Default | Description
generate_response | Callable[[List[Dict]], str] or None | None | Function that takes messages and returns the full response text.
generate_response_stream | Callable[[List[Dict]], Iterator[str]] or None | None | Generator that yields response chunks (used for streaming; takes priority over generate_response).
generate_title | Callable[[str], str] or None | None | Generates a conversation title from the first user message.
count_tokens | Callable[[List[Dict]], int] or None | None | Returns the token count for a conversation (enables token display and max_history_tokens).
page_title | str | "AI Chat" | Browser tab title.
header_title | str | "UI4AI" | Sidebar header title.
byline_text | str | "Powered by Kethan Dosapati" | Byline under the header.
layout | str | "wide" | Streamlit layout: "wide" or "centered".
new_conversation_label | str | "➕ New Chat" | Label for the new-conversation button.
chat_placeholder | str | "Ask me anything..." | Placeholder for the chat input.
spinner_text | str | "Thinking..." | Text shown while generating (non-streaming).
max_history_tokens | int or None | None | Max tokens to keep in context (requires count_tokens).
show_edit_options | bool | True | Show edit/delete in the conversation menu.
primary_color | str | "#4f8bf9" | Primary UI color.
hover_color | str | "#f0f2f6" | Hover color.
date_grouping | bool | True | Group conversations by date in the sidebar.
show_token_count | bool | True | Show the token count per conversation.
max_title_length | int | 25 | Max length of a conversation title in the sidebar.
storage_path | str or None | None | Custom path for the conversation JSON file.
system_prompt | str or None | None | System message added to each conversation.
enable_search | bool | False | Enable conversation search in the sidebar.
user_avatar | str or None | None | Avatar for the user (emoji, :material/icon_name:, or image URL).
assistant_avatar | str or None | None | Avatar for the assistant.
welcome_title | str | "Welcome" | Title shown when there are no messages.
welcome_message | str | "How can I help you today?" | Message shown on the welcome screen.
suggestions | List[str] or None | None | Optional suggestion chips on the welcome screen.
setup_callback | Callable or None | None | Called after set_page_config (e.g. for an API key prompt). Use when you need st.* calls before the main UI.
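The generate_title and count_tokens hooks above only need plain functions with the signatures shown. A minimal sketch using naive heuristics (word truncation and a rough four-characters-per-token estimate) rather than a real tokenizer:

```python
from typing import Dict, List

def generate_title(first_message: str) -> str:
    # Naive heuristic: take the first six words, capped at 25 characters
    return " ".join(first_message.split()[:6])[:25]

def count_tokens(messages: List[Dict]) -> int:
    # Rough estimate: ~4 characters per token across all message contents
    chars = sum(len(m["content"]) for m in messages)
    return max(1, chars // 4)
```

Pass both to run_chat (e.g. run_chat(generate_response=..., generate_title=generate_title, count_tokens=count_tokens, max_history_tokens=3000)). For accurate counts you could swap the heuristic for a real tokenizer such as tiktoken.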

Additional features

  • Title generation — Automatically generates a conversation title from the first message when generate_title is provided.
  • Token counting — Displays total token count per conversation when count_tokens is provided.
  • Max history — Use max_history_tokens with count_tokens to limit context length (older messages are truncated).
  • Editable titles — Rename conversations from the sidebar menu.
  • Persistent sessions — Conversations are saved to a JSON file and persist across refreshes and restarts.
  • Sidebar history — Switch between past conversations in the sidebar.
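The max-history behavior can be pictured with a short sketch. This is a conceptual illustration of budget-based truncation, not UI4AI's actual implementation; it assumes a count_tokens callable like the ones described above:

```python
def truncate_history(messages, count_tokens, max_tokens):
    # Conceptual sketch: drop the oldest non-system messages until the
    # conversation fits within the token budget.
    kept = list(messages)
    while len(kept) > 1 and count_tokens(kept) > max_tokens:
        # Preserve a leading system message, if any
        kept.pop(1 if kept[0]["role"] == "system" else 0)
    return kept
```

The most recent messages always survive, so the model keeps the freshest context even in long conversations.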

License

MIT



Download files


Source Distribution

ui4ai-0.2.1.tar.gz (20.6 kB, source)

Built Distribution

ui4ai-0.2.1-py3-none-any.whl (19.1 kB, Python 3)

File details

Details for the file ui4ai-0.2.1.tar.gz.

File metadata

  • Download URL: ui4ai-0.2.1.tar.gz
  • Size: 20.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for ui4ai-0.2.1.tar.gz
Algorithm | Hash digest
SHA256 | ac83fc8f9717944b735d9a0a305c32b263a1eaf309f21cd8bfd8c51317da18da
MD5 | 4a9f71f307611b4e4a37879097413ccd
BLAKE2b-256 | 63e88b8b0405654bcb85578e13ab39fa195906f9cf4cb9cb2ed6911dd479384e


File details

Details for the file ui4ai-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: ui4ai-0.2.1-py3-none-any.whl
  • Size: 19.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for ui4ai-0.2.1-py3-none-any.whl
Algorithm | Hash digest
SHA256 | a279ea387e6e33667f708aef22faa83d1f2e8d2cc24a8aad0df4dffa4f928c10
MD5 | df74508da3cb242dbbe9edb5f1fcbf34
BLAKE2b-256 | cdc6dec70414accb77b3b3a5e5c74284f21ea2df1393f8ddb4d832ac6cff3f14

