# Chatline

A pretty command line interface for LLM chat.
A lightweight CLI library for building terminal-based LLM chat interfaces with minimal effort. Provides rich text styling, animations, and conversation state management.

## Features

- Terminal UI: Rich text formatting with styled quotes, brackets, emphasis, and more
- Response Streaming: Real-time streamed responses with loading animations
- State Management: Conversation history with edit and retry functionality
- Dual Modes: Run with embedded AWS Bedrock or connect to a custom backend
- Keyboard Shortcuts: Ctrl+E to edit previous message, Ctrl+R to retry
## Installation

```bash
pip install chatline
```

With Poetry:

```bash
poetry add chatline
```
Using the embedded generator requires configured AWS credentials. You can provide them through environment variables, your shell configuration file, or any of the usual AWS credential mechanisms.
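
For example, credentials can be exported as the standard AWS environment variables (placeholder values shown; substitute your own):

```shell
# Standard AWS environment variables picked up by the AWS SDK.
# Replace the placeholder values with your own credentials.
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-west-2"
```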
## Usage

There are two modes: Embedded (no external backend required) and Remote (requires a response generation endpoint).
### Embedded Mode (AWS Bedrock)

The easiest way to get started is to use the embedded generator (with AWS Bedrock):

```python
from chatline import Interface

chat = Interface()
chat.start()
```
For more customization, you can configure initial messages, AWS settings, logging, and a welcome message:

```python
from chatline import Interface

# Initialize embedded mode with all available configuration options
chat = Interface(
    # AWS configuration
    aws_config={
        "region": "us-west-2",  # Optional: defaults to AWS_REGION env var or us-west-2
        "model_id": "anthropic.claude-3-5-haiku-20241022-v1:0",  # Optional: defaults to Claude 3.5 Haiku
        "profile_name": "development",  # Optional: use a specific AWS profile
        "timeout": 120,  # Optional: request timeout in seconds
    },
    # Logging configuration
    logging_enabled=True,  # Enable detailed logging
    log_file="logs/chatline_debug.log",  # Output file for logs
)

# Add an optional welcome message
chat.preface(
    "Welcome",
    title="My App",
    border_color="green",
)

# Start the conversation with custom system and user messages
chat.start([
    {"role": "system", "content": "You are a friendly AI assistant that specializes in code generation."},
    {"role": "user", "content": "Can you help me with a Python project?"},
])
```
### Remote Mode (Custom Backend)

You can also connect to a custom backend by providing the endpoint URL:

```python
from chatline import Interface

# Initialize with remote mode
chat = Interface(endpoint="http://localhost:8000/chat")

# Start the conversation
chat.start()
```
### Setting Up a Backend Server

You can use the `generate_stream` function (or build your own) in your backend. Here's an example FastAPI server:

```python
import json

import uvicorn
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

from chatline import generate_stream

app = FastAPI()

# Define AWS configuration
aws_config = {
    "model_id": "anthropic.claude-3-sonnet-20240229-v1:0",
    "region": "us-east-1",  # Replace with your AWS region
}


@app.post("/chat")
async def stream_chat(request: Request):
    body = await request.json()
    state = body.get("conversation_state", {})
    messages = state.get("messages", [])

    # Process the request and update state as needed
    state["server_turn"] = state.get("server_turn", 0) + 1

    # Return a streaming response with the updated state
    headers = {
        "Content-Type": "text/event-stream",
        "X-Conversation-State": json.dumps(state),
    }
    return StreamingResponse(
        generate_stream(messages, aws_config=aws_config),  # Pass aws_config to generate_stream
        headers=headers,
        media_type="text/event-stream",
    )


if __name__ == "__main__":
    uvicorn.run("server:app", host="127.0.0.1", port=8000)
```
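
For reference, here is a small sketch of the request body this example handler reads. The field names (`conversation_state`, `messages`, `server_turn`) come from the handler above; the `build_chat_body` helper is purely illustrative and not part of Chatline's API:

```python
import json


def build_chat_body(messages, server_turn=0):
    """Serialize the conversation state the example /chat handler expects."""
    return json.dumps({
        "conversation_state": {
            "messages": messages,
            "server_turn": server_turn,
        }
    })


body = build_chat_body([{"role": "user", "content": "Hello"}])

# The handler reads these fields back out of the JSON body:
state = json.loads(body)["conversation_state"]
assert state["messages"][0]["content"] == "Hello"
assert state["server_turn"] == 0
```

The updated state is returned to the client in the `X-Conversation-State` response header, which can be decoded with `json.loads` in the same way.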
## Acknowledgements

Chatline was built with plenty of LLM assistance, particularly from [Anthropic](https://github.com/anthropics), [Mistral](https://github.com/mistralai), and [Continue.dev](https://github.com/continuedev/continue).

## License

MIT