# StreamLM

A command-line interface for interacting with various Large Language Models, with beautiful markdown-formatted responses.

**Design principle: frictionless interaction.** Just type `lm hello`; no subcommand needed. The CLI defaults to chat mode for the fastest possible workflow.
## Installation

### uv (recommended)

```bash
uv tool install streamlm
```

### PyPI

```bash
pip install streamlm
```

### Homebrew (macOS/Linux)

```bash
brew install jeffmylife/streamlm/streamlm
```
## Usage

### Basic Usage

After installation, you can use the `lm` command. The CLI defaults to chat mode; just type your prompt:

```bash
lm explain quantum computing
lm -m gpt-4o "write a Python function"
lm -m claude-3-5-sonnet "analyze this data"

# The explicit 'chat' command also works
lm chat "hello world"
```
## Gateway Routing

StreamLM supports routing requests through different gateways for cost optimization and flexibility:

```bash
# Route through Vercel AI Gateway (no markup, low latency)
lm --gateway vercel "explain quantum computing"

# Route through OpenRouter (model discovery, transparent pricing)
lm --gateway openrouter -m gpt-4o "write a function"

# Direct provider access (default; supports reasoning models)
lm --gateway direct "analyze this data"
```

Gateway benefits:

- **Vercel**: $5/month free credits, no token markup, <20 ms latency
- **OpenRouter**: model discovery, pricing transparency, bring-your-own-key
- **Direct**: full provider feature support, reasoning models, lowest latency
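Conceptually, a gateway is just a different API base URL that requests are sent to. The sketch below illustrates that idea only; the URL values and the `resolve_base_url` helper are assumptions for illustration, not StreamLM internals (the Vercel URL in particular is a placeholder).

```python
# Illustrative sketch: pick an API base URL per gateway name.
# These values are assumptions for illustration, not StreamLM's real routing table.
GATEWAY_BASE_URLS = {
    "direct": None,  # use each provider's native endpoint
    "vercel": "https://ai-gateway.example/v1",     # placeholder URL
    "openrouter": "https://openrouter.ai/api/v1",
}

def resolve_base_url(gateway: str):
    """Return the base URL for a gateway, or None for direct provider access."""
    if gateway not in GATEWAY_BASE_URLS:
        raise ValueError(f"unknown gateway: {gateway}")
    return GATEWAY_BASE_URLS[gateway]

print(resolve_base_url("openrouter"))  # https://openrouter.ai/api/v1
```

The point of the pattern is that the rest of the client code stays identical; only the endpoint changes per `--gateway` choice.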
## Configuration

StreamLM can be configured via a config file (`~/.streamlm/config.yaml`), environment variables, or CLI flags:

```bash
# Interactive setup wizard
lm config setup

# Set default gateway
lm config set gateway.default vercel

# Configure gateway API keys
lm config set gateway.vercel.api_key sk-your-ai-gateway-key

# Set default model
lm config set models.default gpt-4o

# View current configuration
lm config get

# Validate configuration and API keys
lm config validate

# List available gateways
lm config list-gateways
```
Configuration priority (highest to lowest):

1. CLI flags (`--gateway`, `--model`)
2. Environment variables (`STREAMLM_GATEWAY`, provider API keys)
3. Config file (`~/.streamlm/config.yaml`)
4. Defaults (direct gateway, gemini-2.5-flash model)
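The precedence rule amounts to "first source that defines the key wins." A minimal sketch of that lookup, with hypothetical layer contents (this is not StreamLM's actual code):

```python
# Layered configuration lookup: earlier sources take precedence.
# The example layer contents below are hypothetical.
def lookup(key, cli_flags, env_vars, config_file, defaults):
    for layer in (cli_flags, env_vars, config_file, defaults):
        if key in layer:
            return layer[key]
    raise KeyError(key)

cli = {}                                                   # no flags passed
env = {"gateway": "vercel"}                                # STREAMLM_GATEWAY set
cfg = {"gateway": "openrouter", "model": "gpt-4o"}         # config file
dflt = {"gateway": "direct", "model": "gemini-2.5-flash"}  # built-in defaults

print(lookup("gateway", cli, env, cfg, dflt))  # vercel (env var beats config file)
print(lookup("model", cli, env, cfg, dflt))    # gpt-4o (falls through to config file)
```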
## Model Aliases

Define shortcuts for your favorite models in the config:

```yaml
# ~/.streamlm/config.yaml
models:
  aliases:
    gpt: "gpt-4o"
    claude: "claude-3-5-sonnet"
    fast: "gemini/gemini-2.5-flash"
    smart: "gpt-4o"
```

Then use them:

```bash
lm -m fast "quick question"      # Uses gemini-2.5-flash
lm -m smart "complex analysis"   # Uses gpt-4o
```
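Under the hood, alias resolution is presumably a dictionary lookup that falls back to the literal model name when no alias matches. A sketch under that assumption:

```python
# Sketch of alias resolution: map an alias to its full model name,
# passing unrecognized names through unchanged. Illustrative only.
ALIASES = {
    "gpt": "gpt-4o",
    "claude": "claude-3-5-sonnet",
    "fast": "gemini/gemini-2.5-flash",
    "smart": "gpt-4o",
}

def resolve_model(name: str) -> str:
    return ALIASES.get(name, name)

print(resolve_model("fast"))     # gemini/gemini-2.5-flash
print(resolve_model("o3-mini"))  # not an alias, passed through as-is
```

The pass-through fallback means `-m` accepts both aliases and full model names interchangeably.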
## Raw Markdown Output

StreamLM includes beautiful built-in markdown formatting, but you can also output raw markdown for piping to other tools:

```bash
# Output raw markdown without Rich formatting
lm --md "explain machine learning" > output.md

# Pipe to your favorite markdown formatter (like glow)
lm --md "write a Python tutorial" | glow

# Use with other markdown tools
lm --raw "create documentation" | pandoc -f markdown -t html
```
## Supported Models

StreamLM provides access to various Large Language Models, including:

- **OpenAI**: GPT-4o, o1, o3-mini, GPT-4o-mini
- **Anthropic**: Claude-3-7-sonnet, Claude-3-5-sonnet, Claude-3-5-haiku
- **Google**: Gemini-2.5-flash, Gemini-2.5-pro, Gemini-2.0-flash-thinking
- **DeepSeek**: DeepSeek-R1, DeepSeek-V3
- **xAI**: Grok-4, Grok-3-beta, Grok-3-mini-beta
- **Local models**: via Ollama (Llama3.3, Qwen2.5, DeepSeek-Coder, etc.)
## Chat Command Options

- `--model`/`-m`: choose the LLM model (or use an alias from config)
- `--gateway`/`-g`: route through a gateway (direct, vercel, openrouter)
- `--image`/`-i`: include image files for vision models
- `--context`/`-c`: add context from a file
- `--max-tokens`/`-t`: set maximum response length
- `--temperature`/`-temp`: control response creativity (0.0-1.0)
- `--think`: show the reasoning process (reasoning models, direct gateway only)
- `--session`/`-s`: session ID to continue a conversation
- `--session-name`: name for a new session (only when creating)
- `--debug`/`-d`: enable debug mode
- `--raw`/`--md`: output raw markdown without Rich formatting
## Session Management

StreamLM supports conversation sessions to maintain context across multiple messages:

```bash
# Create a new session or continue an existing one
lm --session my-project "How do I implement authentication?"
lm --session my-project "Can you show me an example?"  # Continues with context

# Name your session when creating it
lm --session dev-2025 --session-name "Development Session" "Let's start coding"

# List all sessions
lm sessions --list

# Show session details and conversation history
lm sessions --show my-project

# Export session to JSON
lm sessions --export my-project > session.json

# Clear messages from a session (keeps session metadata)
lm sessions --clear my-project

# Delete a session completely
lm sessions --delete my-project
```
Session Features:
- Automatic conversation history - context is maintained across messages
- Token usage tracking per session
- Local-first storage using libSQL (SQLite compatible)
- Optional remote sync with Turso (not required)
- Export/import sessions for backup or sharing
- Metadata support for images and context files
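Because the storage is SQLite-compatible, a local-first session store boils down to two tables: one for sessions and one for their messages. The schema below is a hypothetical illustration of that design, not StreamLM's actual schema:

```python
# Hypothetical sketch of a local-first session store on SQLite.
# Table and column names are illustrative, not StreamLM's real schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (
    id TEXT PRIMARY KEY,
    name TEXT,
    total_tokens INTEGER DEFAULT 0   -- per-session token usage tracking
);
CREATE TABLE messages (
    session_id TEXT REFERENCES sessions(id),
    role TEXT,        -- 'user' or 'assistant'
    content TEXT,
    tokens INTEGER
);
""")

conn.execute("INSERT INTO sessions (id, name) VALUES ('my-project', 'Development Session')")
conn.execute("INSERT INTO messages VALUES ('my-project', 'user', 'How do I implement authentication?', 8)")
conn.execute("INSERT INTO messages VALUES ('my-project', 'assistant', 'One common approach is...', 120)")

# "Continuing" a session means replaying its stored history as context
# for the next request.
history = conn.execute(
    "SELECT role, content FROM messages WHERE session_id = ? ORDER BY rowid",
    ("my-project",),
).fetchall()
print(len(history))  # 2
```

Clearing a session would delete its `messages` rows while keeping the `sessions` row, which matches the clear-vs-delete distinction above.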
## Config Command Actions

- `lm config setup`: interactive configuration wizard
- `lm config get [key]`: get a configuration value
- `lm config set <key> <value>`: set a configuration value
- `lm config validate`: validate configuration and API keys
- `lm config list-gateways`: show available gateways and their status
## Features
- 🎨 Beautiful markdown-formatted responses
- 💬 Conversation sessions with persistent history
- 🌐 Gateway routing (Vercel AI Gateway, OpenRouter, or direct)
- ⚙️ Flexible configuration (config file, env vars, CLI flags)
- 🔑 Model aliases for quick access to favorite models
- 🖼️ Image input support for compatible models
- 📁 Context file support
- 🧠 Reasoning model support (DeepSeek, OpenAI o1, etc.)
- 📊 Token usage tracking per session
- 💾 Local-first database storage (no cloud required)
- 🔧 Extensive model support across providers
- ⚡ Fast and lightweight
- 🛠️ Easy configuration management
## License

MIT License - see the LICENSE file for details.
## Development

### Setup

```bash
# Clone the repository
git clone https://github.com/jeffmylife/streamlm.git
cd streamlm

# Install with dev dependencies
uv pip install -e ".[dev]"
```
### Running Tests

All tests use `uv run` for consistency:

```bash
# Run all tests
uv run pytest tests/ -v

# Run with coverage
uv run pytest tests/ -v --cov=src --cov-report=term-missing

# Run specific test types
uv run pytest tests/test_cli.py -v          # Unit tests only
uv run pytest tests/test_integration.py -v  # Integration tests only
```
### Release Process

```bash
# Make your changes
uv version --bump patch
git add .
git commit -m "feat: your changes"
git push

# Create a GitHub release (this triggers publishing automatically)
gh release create v0.1.11 --generate-notes
```