
🗯️ Chatnificent

LLM chat app framework

Minimally complete. Maximally hackable.

Build production-ready, full-stack chat applications in minutes. Customize everything in hours.

Chatnificent is a Python framework built on Plotly's Dash, designed to get your LLM chat applications up and running instantly while providing a robust, decoupled architecture for unlimited customization.

Stop wrestling with UI components, state management, and backend integrations. Start building magnificent chat apps.


The Ethos

Frameworks should get out of your way.

  • Minimally Complete: Out of the box, Chatnificent provides a fully functional, stateful, multi-user chat application with sensible defaults.
  • Maximally Hackable: Every core pillar—the UI, the LLM provider, the database, the authentication, the RAG pipeline, and the core orchestration—is swappable. Customize or replace any part without fighting the framework.

Features

  • LLM Agnostic: Built-in support for OpenAI, Anthropic, Gemini, Ollama, OpenRouter, DeepSeek, and any other LLM API.
  • Flexible UI: Default Bootstrap layout, with built-in Mantine and Minimal (pure HTML) layouts. Easily customizable with any Dash components.
  • Pluggable Storage: InMemory, File-system, and SQLite included. Easily extendable to Redis, Postgres, etc.
  • Agentic Engine: The core engine manages multi-turn conversations and standardized tool calling across providers.
  • Auth Ready: Abstracted authentication layer for easy integration. No-login anonymous user auth enabled by default.
  • RTL Support: Automatic detection and rendering of Right-to-Left languages.
  • Dash Native: Leverage the full power of Plotly's Dash to integrate complex data visualizations and analytics.

Installation

To get started quickly with the default UI (Bootstrap) and the default LLM provider (OpenAI):

pip install "chatnificent[default]"

export OPENAI_API_KEY="YOUR_API_KEY"

For a minimal installation (no UI libraries or LLM SDKs included):

pip install chatnificent

Quickstart: Hello World (3 Lines)

This is a complete, working chat application.

Create a file app.py:

from chatnificent import Chatnificent

app = Chatnificent()

if __name__ == "__main__":
    app.run(debug=True)

Run it:

python app.py

Open your browser to http://127.0.0.1:8050. That's it. You have a fully functional chat UI with conversation history, mobile responsiveness, and URL-based session management.

The Pillars of Hackability

Chatnificent's architecture is built around extensible Pillars. Every major function is handled by a dedicated component adhering to a strict interface.

  • LLM: The brain (API calls, parsing). Default: OpenAI (or Echo). Implementations: OpenAI, Anthropic, Gemini, OpenRouter, DeepSeek, Ollama, Echo
  • Layout: The look and feel (UI components). Default: Bootstrap (or Minimal). Implementations: Bootstrap, Mantine, Minimal (HTML)
  • Store: The memory (persistence). Default: InMemory. Implementations: InMemory, File, SQLite
  • Auth: The gatekeeper (user identification). Default: Anonymous. Implementations: Anonymous, SingleUser
  • Engine: The orchestrator (request lifecycle). Default: Synchronous. Implementations: Synchronous
  • Tools: Tool/function calling capabilities. Default: NoTool. Implementations: PythonTool, NoTool
  • Retrieval: RAG knowledge retrieval. Default: NoRetrieval. Implementations: NoRetrieval
  • URL: URL parsing and routing. Default: PathBased. Implementations: PathBased, QueryParams

You customize the app by injecting the implementations you need during initialization:

from chatnificent import Chatnificent
import chatnificent as chat

app = Chatnificent(
    llm=chat.llm.Anthropic(),
    store=chat.store.SQLite(db_path="conversations.db"),
    layout=chat.layout.Mantine()
)

Progressive Power: Swapping the Pillars

Let's evolve the "Hello World" example by swapping pillars.

Level 1: Swapping the LLM 🧠

Want to use Anthropic's Claude 3.5 Sonnet? Just swap the llm pillar.

(Requires pip install anthropic and setting ANTHROPIC_API_KEY)

from chatnificent import Chatnificent
import chatnificent as chat


app = Chatnificent(
    llm=chat.llm.Anthropic(default_model="claude-3-5-sonnet-20240620")
)

# Or try Gemini: app = Chatnificent(llm=chat.llm.Gemini())
# Or local Ollama: app = Chatnificent(llm=chat.llm.Ollama(default_model="llama3.1"))

Chatnificent handles the translation of message formats and tool-calling protocols automatically.
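To give a sense of the kind of translation involved (an illustrative sketch, not Chatnificent's actual internals), consider converting an OpenAI-style message list for Anthropic's API, which takes the system prompt as a separate field rather than as a message:

```python
def to_anthropic_format(messages):
    """Split OpenAI-style messages into Anthropic's (system, messages) shape.

    Illustrative only: a real adapter also handles tool calls, images,
    and other provider-specific quirks.
    """
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    chat_messages = [m for m in messages if m["role"] != "system"]
    return ("\n".join(system_parts) or None), chat_messages

system, msgs = to_anthropic_format([
    {"role": "system", "content": "Be brief."},
    {"role": "user", "content": "Hi"},
])
```

With a pluggable `llm` pillar, this kind of per-provider shuffling lives inside each adapter instead of in your application code.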

Level 2: Adding Persistent Storage

The default InMemory store is ephemeral. Let's use SQLite for persistence.

from chatnificent import Chatnificent
import chatnificent as chat

app = Chatnificent(
    store=chat.store.SQLite(db_path="conversations.db")
)
# Or use the filesystem: store=chat.store.File(base_dir="./chat_data")

Conversations are now persisted across server restarts, and the sidebar automatically loads your history.
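For a sense of what a SQLite-backed store involves (a sketch of the concept, not Chatnificent's actual schema), conversations can be persisted as JSON blobs keyed by user and conversation ID:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for real persistence
conn.execute(
    "CREATE TABLE IF NOT EXISTS conversations "
    "(user_id TEXT, convo_id TEXT, messages TEXT, PRIMARY KEY (user_id, convo_id))"
)

def save(user_id, convo_id, messages):
    # Upsert the serialized message list for this (user, conversation) pair.
    conn.execute(
        "INSERT OR REPLACE INTO conversations VALUES (?, ?, ?)",
        (user_id, convo_id, json.dumps(messages)),
    )
    conn.commit()

def load(user_id, convo_id):
    row = conn.execute(
        "SELECT messages FROM conversations WHERE user_id=? AND convo_id=?",
        (user_id, convo_id),
    ).fetchone()
    return json.loads(row[0]) if row else None

save("anon_1", "1", [{"role": "user", "content": "hello"}])
```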

Level 3: Changing the Look and Feel 🎨

Don't want Bootstrap? Let's try the Mantine layout.

(Requires pip install dash-mantine-components)

from chatnificent import Chatnificent
import chatnificent as chat

app = Chatnificent(layout=chat.layout.Mantine())

# Or use the barebones HTML layout: layout=chat.layout.Minimal()

Want a completely custom design? Implement the layout.Layout abstract base class. The framework ensures your custom layout integrates seamlessly, provided you include the required component IDs (e.g., input_textarea, messages_container, etc.).

Level 4: Custom Authentication

The default Anonymous auth isolates users by random user ID. You can easily implement custom logic.

from chatnificent import Chatnificent, auth

class HeaderAuth(auth.Auth):
    def get_current_user_id(self, **kwargs) -> str:
        from flask import request
        # Identify user based on a header (e.g., provided by an auth proxy)
        return request.headers.get("X-User-Id", "unknown_user")

app = Chatnificent(auth=HeaderAuth())
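The resolution logic above can be exercised in isolation. Here is a sketch of the same lookup as a plain function over a headers mapping (the real class reads flask.request inside a request context; the function name and fallback value here are illustrative):

```python
def resolve_user_id(headers, header_name="X-User-Id", fallback="unknown_user"):
    """Return the user ID from a headers mapping, case-insensitively.

    HTTP header names are case-insensitive, so normalize before lookup.
    """
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get(header_name.lower(), fallback)

print(resolve_user_id({"x-user-id": "alice"}))  # alice
print(resolve_user_id({}))                      # unknown_user
```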

Level 5: The Engine (Advanced Orchestration)

The Engine orchestrates the entire request lifecycle: resolving the conversation, RAG retrieval, the agentic loop (Tools + LLM calls), and persistence.

The default Synchronous engine provides "hooks" (empty methods called at specific points) and "seams" (core logic methods) that you can override to deeply customize behavior without rewriting the core logic.

from chatnificent import Chatnificent
import chatnificent as chat
from typing import Any, Optional

# Create a custom engine by inheriting from the default
class CustomEngine(chat.engine.Synchronous):

    # 1. Override a HOOK to add monitoring/logging
    def _after_llm_call(self, llm_response: Any) -> None:
        # Example: Extract token usage if the LLM response object has a 'usage' attribute
        tokens = getattr(llm_response, 'usage', 'N/A')
        print(f"[MONITORING] LLM call complete. Tokens: {tokens}")

    # 2. Override a SEAM to modify core logic (e.g., prompt engineering)
    def _prepare_llm_payload(self, conversation, retrieval_context: Optional[str]):
        # Get the default payload (which already includes the context if present)
        payload = super()._prepare_llm_payload(conversation, retrieval_context)

        # Inject a custom system prompt if none exists
        if not any(m['role'] == 'system' for m in payload):
            payload.insert(0, {"role": "system", "content": "Be brief and professional."})
        return payload


# Initialize the app, passing the engine instance.
# Chatnificent's constructor will automatically bind the app reference to the engine.
app = Chatnificent(engine=CustomEngine())

Architecture Overview

How the pillars work together during a request:

  1. User Input: The user submits a message via the Layout.
  2. Callback Trigger: A Dash callback delegates the input to the Engine.
  3. Context Resolution: The Engine uses Auth, URL, and Store to identify the user and load the conversation history.
  4. Agentic Loop:
    • The Engine calls Retrieval to gather context (RAG).
    • The Engine sends the history and context to the LLM.
    • If the LLM requests a tool call, the Engine executes it via Tools and loops back.
    • If the LLM returns a final response, the loop exits.
  5. Persistence: The Engine saves the updated conversation via the Store.
  6. Rendering: The Engine formats the messages using the Layout and updates the client UI.
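The agentic loop in step 4 can be sketched in plain Python (toy stand-ins for the pillars, not the framework's actual classes or response shapes):

```python
def agentic_loop(history, llm, tools, max_turns=5):
    """Call the LLM until it returns a final answer, executing tool calls in between."""
    for _ in range(max_turns):
        response = llm(history)
        if response.get("tool_call"):
            call = response["tool_call"]
            result = tools[call["name"]](**call["args"])
            history.append({"role": "tool", "content": str(result)})
            continue  # loop back with the tool result in the history
        history.append({"role": "assistant", "content": response["content"]})
        return response["content"]
    raise RuntimeError("agent did not converge")

# Toy LLM: requests a tool call once, then answers with the tool's result.
def toy_llm(history):
    if any(m["role"] == "tool" for m in history):
        return {"content": "6"}
    return {"tool_call": {"name": "add", "args": {"a": 2, "b": 4}}}

answer = agentic_loop(
    [{"role": "user", "content": "What is 2 + 4?"}],
    toy_llm,
    {"add": lambda a, b: a + b},
)
```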

Building Your Own Pillars

The ultimate hackability comes from implementing your own pillars. Want to use MongoDB? Just implement the store.Store interface.

Example: Custom Storage Implementation

from chatnificent import Chatnificent
import chatnificent as chat
from typing import Optional, List

class MongoDBStore(chat.store.Store):
    def __init__(self, connection_string):
        # Initialize MongoDB client...
        print(f"Connecting to MongoDB at {connection_string}...")
        pass

    def load_conversation(self, user_id: str, convo_id: str) -> Optional["Conversation"]:
        # "Conversation" is Chatnificent's conversation model; it is quoted
        # here because this snippet doesn't import it.
        # Implement loading logic...
        return None

    # Implement the other required methods...
    def save_conversation(self, user_id: str, conversation: "Conversation"):
        pass
    def list_conversations(self, user_id: str) -> List[str]:
        return []
    def get_next_conversation_id(self, user_id: str) -> str:
        return "1"

# Use your custom implementation
# app = Chatnificent(store=MongoDBStore(connection_string="mongodb://..."))

Project details


Download files

Download the file for your platform.

Source Distribution

chatnificent-0.0.4.tar.gz (28.4 kB)


Built Distribution


chatnificent-0.0.4-py3-none-any.whl (33.5 kB)


File details

Details for the file chatnificent-0.0.4.tar.gz.

File metadata

  • Download URL: chatnificent-0.0.4.tar.gz
  • Upload date:
  • Size: 28.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.8.9

File hashes

Hashes for chatnificent-0.0.4.tar.gz
Algorithm Hash digest
SHA256 4696526c691e32e6ea18adc8e47e39c540ec4ba4fdd20501f6b8095069283bc1
MD5 46362a04ce395eb37744b1cf0a33238a
BLAKE2b-256 053dfd7aa52db1255f142ceb74147b437849bf6d79cb1b58adbf4cf8b8714ef6


File details

Details for the file chatnificent-0.0.4-py3-none-any.whl.


File hashes

Hashes for chatnificent-0.0.4-py3-none-any.whl
Algorithm Hash digest
SHA256 a96650684c44000320a3a35e93809e3055c44ca5d7b681d06234bcbc2950f5e7
MD5 1b613810b80c4628d09a8f0618c31b76
BLAKE2b-256 0700324b758733e806ec9cd1334f7988a07c5ffc5bc8fb06d626caa58d9072a8

