Chatline

A pretty command line interface for LLM chat.

A lightweight CLI library for building terminal-based LLM chat interfaces with minimal effort. It provides rich text styling, animations, and conversation state management.

  • Terminal UI: Rich text formatting with styled quotes, brackets, emphasis, and more
  • Response Streaming: Real-time streamed responses with loading animations
  • State Management: Conversation history with edit and retry functionality
  • Multiple Providers: Run with AWS Bedrock or OpenRouter, or connect to a custom backend
  • Keyboard Shortcuts: Ctrl+E to edit previous message, Ctrl+R to retry

Installation

pip install chatline

With Poetry:

poetry add chatline

Usage

There are two modes: Embedded, which uses a built-in provider, and Remote, which connects to a response-generation endpoint you control.
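
Both modes share the same Interface entry point; only the constructor arguments differ, as sketched below (mirroring the examples in the sections that follow):

from chatline import Interface

# Embedded: responses are generated in-process by a built-in provider
chat = Interface(provider="bedrock")

# Remote: generation is delegated to an HTTP endpoint you control
chat = Interface(endpoint="http://localhost:8000/chat")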

Embedded Mode with AWS Bedrock (Default)

The easiest way to get started is to use the embedded generator with AWS Bedrock (the default provider):

from chatline import Interface

chat = Interface()

chat.start()

For more customization:

from chatline import Interface

# Initialize with AWS Bedrock (default provider)
chat = Interface(
    provider="bedrock",  # Optional: this is the default
    provider_config={
        "region": "us-west-2",                                    # AWS region
        "model_id": "anthropic.claude-3-5-haiku-20241022-v1:0",   # Bedrock model ID
        "profile_name": "development",                            # AWS credentials profile
        "timeout": 120                                            # Request timeout in seconds
    }
)

# Add optional welcome message
chat.preface(
    "Welcome", 
    title="My App", 
    border_color="green")

# Start the conversation with custom system and user messages
chat.start([
    {"role": "system", "content": "You are a friendly AI assistant that specializes in code generation."},
    {"role": "user", "content": "Can you help me with a Python project?"}
])

Embedded Mode with OpenRouter

You can also use OpenRouter as your provider. Just make sure to set the OPENROUTER_API_KEY environment variable first:

from chatline import Interface

# Initialize with OpenRouter provider
chat = Interface(
    provider="openrouter",
    provider_config={
        "model": "deepseek/deepseek-chat-v3-0324",  # OpenRouter model ID
        "temperature": 0.7,        # Sampling temperature
        "top_p": 0.9,              # Nucleus sampling cutoff
        "frequency_penalty": 0.5,  # Penalize frequently repeated tokens
        "presence_penalty": 0.5,   # Penalize tokens already present
        "timeout": 60              # Request timeout in seconds
    }
)

chat.start()
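
Since the OpenRouter provider reads OPENROUTER_API_KEY from the environment, a small guard before constructing the interface can fail fast with a clear message. This is just a sketch; without it, chatline will surface its own error at request time:

import os

if "OPENROUTER_API_KEY" not in os.environ:
    raise SystemExit("OPENROUTER_API_KEY is not set; export it before starting the chat.")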

Remote Mode (Custom Backend)

You can also connect to a custom backend by providing the endpoint URL:

from chatline import Interface

# Initialize with remote mode
chat = Interface(endpoint="http://localhost:8000/chat")

# Start the conversation
chat.start()
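
As in embedded mode, you can seed the conversation by passing a message list to start(); the messages are sent to your backend for generation (assuming the same message format as the embedded examples above):

chat.start([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
])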

Setting Up a Backend Server

You can use the generate_stream function (or build your own) in your backend. Here's an example FastAPI server:

import json
import uvicorn
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from chatline import generate_stream

app = FastAPI()

provider_config = {
    "model": "mistralai/mixtral-8x7b-instruct"
}

@app.post("/chat")
async def stream_chat(request: Request):
    body = await request.json()
    state = body.get('conversation_state', {})
    messages = state.get('messages', [])
    
    # Process the request and update state as needed
    state['server_turn'] = state.get('server_turn', 0) + 1
    
    # Return the updated state in a response header alongside the stream;
    # the Content-Type is set by media_type on the StreamingResponse below
    headers = {
        'X-Conversation-State': json.dumps(state)
    }
    
    return StreamingResponse(
        generate_stream(
            messages, 
            provider="openrouter",
            provider_config=provider_config
        ),
        headers=headers,
        media_type="text/event-stream"
    )

if __name__ == "__main__":
    uvicorn.run("server:app", host="127.0.0.1", port=8000)
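
With the server above saved as server.py and running, you can point the client at it as shown in the remote-mode example, or exercise the endpoint directly. The sketch below uses the requests library; the request body mirrors the fields the handler reads, and the exact framing of the streamed chunks depends on generate_stream:

import requests

payload = {
    "conversation_state": {
        "messages": [{"role": "user", "content": "Hello!"}]
    }
}

with requests.post(
    "http://localhost:8000/chat", json=payload, stream=True, timeout=60
) as resp:
    # The server echoes its updated state back in this header
    print(resp.headers.get("X-Conversation-State"))
    # Print the streamed text as it arrives
    for chunk in resp.iter_content(chunk_size=None):
        print(chunk.decode("utf-8", errors="replace"), end="", flush=True)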

Acknowledgements

Chatline was built with plenty of LLM assistance, particularly from Anthropic, Mistral, and Continue.dev.
