
A command-line interface for interacting with various Large Language Models, streaming markdown-formatted output

Project description

StreamLM


A command-line interface for interacting with various Large Language Models with beautiful markdown-formatted responses.

Design Principle: Frictionless interaction. Just type lm hello; no subcommand needed. The CLI defaults to chat mode for the fastest possible workflow.

Installation

uv (recommended)

uv tool install streamlm

PyPI

pip install streamlm

Homebrew (macOS/Linux)

brew install jeffmylife/streamlm/streamlm

Usage

Basic Usage

After installation, you can use the lm command. The CLI defaults to chat mode - just type your prompt:

lm explain quantum computing
lm -m gpt-4o "write a Python function"
lm -m claude-3-5-sonnet "analyze this data"

# Explicit 'chat' command also works
lm chat "hello world"
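The chat-by-default behavior above can be sketched in a few lines of Python. This is purely illustrative; the function name and subcommand set are hypothetical, not streamlm's actual internals:

```python
# Minimal sketch of chat-by-default dispatch (hypothetical, not streamlm's code).
KNOWN_SUBCOMMANDS = {"chat", "config"}

def dispatch(argv):
    """Route argv: a known subcommand runs as-is; anything else becomes a chat prompt."""
    if argv and argv[0] in KNOWN_SUBCOMMANDS:
        return argv[0], argv[1:]
    return "chat", argv
```

Under this model, lm hello and lm chat hello resolve to the same action.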

Gateway Routing

StreamLM supports routing requests through different gateways for cost optimization and flexibility:

# Route through Vercel AI Gateway (no markup, low latency)
lm --gateway vercel "explain quantum computing"

# Route through OpenRouter (model discovery, transparent pricing)
lm --gateway openrouter -m gpt-4o "write a function"

# Direct provider access (default, supports reasoning models)
lm --gateway direct "analyze this data"

Gateway Benefits:

  • Vercel: $5/month free credits, no token markup, <20ms latency
  • OpenRouter: Model discovery, pricing transparency, bring-your-own-key
  • Direct: Full provider feature support, reasoning models, lowest latency
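Conceptually, gateway routing amounts to picking an API endpoint per request. The sketch below assumes both gateways expose OpenAI-compatible base URLs; the Vercel endpoint shown and the mapping itself are assumptions for illustration, not streamlm's actual routing code:

```python
# Hypothetical gateway-to-endpoint mapping (a sketch; the Vercel URL is assumed).
GATEWAY_BASE_URLS = {
    "openrouter": "https://openrouter.ai/api/v1",  # OpenAI-compatible gateway
    "vercel": "https://ai-gateway.vercel.sh/v1",   # assumed AI Gateway endpoint
    "direct": None,                                # use each provider's own SDK/endpoint
}

def resolve_base_url(gateway):
    """Map a --gateway value to a base URL; None means call the provider directly."""
    if gateway not in GATEWAY_BASE_URLS:
        raise ValueError(f"unknown gateway: {gateway}")
    return GATEWAY_BASE_URLS[gateway]
```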

Configuration

StreamLM can be configured via config file (~/.streamlm/config.yaml), environment variables, or CLI flags:

# Interactive setup wizard
lm config setup

# Set default gateway
lm config set gateway.default vercel

# Configure gateway API keys
lm config set gateway.vercel.api_key sk-your-ai-gateway-key

# Set default model
lm config set models.default gpt-4o

# View current configuration
lm config get

# Validate configuration and API keys
lm config validate

# List available gateways
lm config list-gateways

Configuration Priority (highest to lowest):

  1. CLI flags (--gateway, --model)
  2. Environment variables (STREAMLM_GATEWAY, provider API keys)
  3. Config file (~/.streamlm/config.yaml)
  4. Defaults (direct gateway, gemini-2.5-flash model)
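The priority order above is a standard layered-config lookup. A minimal sketch (the helper name and STREAMLM_ env-var convention for keys other than gateway are assumptions):

```python
import os

def resolve_setting(key, cli_flags, config_file, defaults):
    """Resolve one setting by the documented order: CLI flag > env var > config file > default."""
    if cli_flags.get(key) is not None:
        return cli_flags[key]
    env_value = os.environ.get("STREAMLM_" + key.upper())
    if env_value is not None:
        return env_value
    if key in config_file:
        return config_file[key]
    return defaults[key]
```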

Model Aliases

Define shortcuts for your favorite models in the config:

# ~/.streamlm/config.yaml
models:
  aliases:
    gpt: "gpt-4o"
    claude: "claude-3-5-sonnet"
    fast: "gemini/gemini-2.5-flash"
    smart: "gpt-4o"

Then use them:

lm -m fast "quick question"    # Uses gemini-2.5-flash
lm -m smart "complex analysis"  # Uses gpt-4o
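Alias expansion is a simple lookup with pass-through for full model names. A sketch mirroring the config above (the helper is hypothetical, not streamlm's implementation):

```python
# Aliases taken from the example config above (hypothetical resolver).
ALIASES = {
    "gpt": "gpt-4o",
    "claude": "claude-3-5-sonnet",
    "fast": "gemini/gemini-2.5-flash",
    "smart": "gpt-4o",
}

def resolve_model(name, aliases=ALIASES):
    """Expand an alias if defined; full model names pass through unchanged."""
    return aliases.get(name, name)
```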

Raw Markdown Output

StreamLM includes beautiful built-in markdown formatting, but you can also output raw markdown for piping to other tools:

# Output raw markdown without Rich formatting
lm --md "explain machine learning" > output.md

# Pipe to your favorite markdown formatter (like glow)
lm --md "write a Python tutorial" | glow

# Use with other markdown tools
lm --raw "create documentation" | pandoc -f markdown -t html

Supported Models

StreamLM provides access to a wide range of Large Language Models, including:

  • OpenAI: GPT-4o, o1, o3-mini, GPT-4o-mini
  • Anthropic: Claude-3-7-sonnet, Claude-3-5-sonnet, Claude-3-5-haiku
  • Google: Gemini-2.5-flash, Gemini-2.5-pro, Gemini-2.0-flash-thinking
  • DeepSeek: DeepSeek-R1, DeepSeek-V3
  • xAI: Grok-4, Grok-3-beta, Grok-3-mini-beta
  • Local models: Via Ollama (Llama3.3, Qwen2.5, DeepSeek-Coder, etc.)

Chat Command Options

  • --model / -m: Choose the LLM model (or use alias from config)
  • --gateway / -g: Route through gateway (direct, vercel, openrouter)
  • --image / -i: Include image files for vision models
  • --context / -c: Add context from a file
  • --max-tokens / -t: Set maximum response length
  • --temperature / -temp: Control response creativity (0.0-1.0)
  • --think: Show reasoning process (reasoning models, direct gateway only)
  • --debug / -d: Enable debug mode
  • --raw / --md: Output raw markdown without Rich formatting
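As a rough model of how these flags compose, here is an argparse sketch built only from the options listed above. It is illustrative, not streamlm's real parser (which may use another framework, and some short forms are simplified):

```python
import argparse

# Illustrative parser for the documented chat options (not streamlm's actual CLI).
parser = argparse.ArgumentParser(prog="lm")
parser.add_argument("prompt", nargs="+", help="the chat prompt")
parser.add_argument("--model", "-m", default="gemini-2.5-flash")
parser.add_argument("--gateway", "-g", choices=["direct", "vercel", "openrouter"], default="direct")
parser.add_argument("--image", "-i", action="append", help="image file for vision models")
parser.add_argument("--context", "-c", help="context file")
parser.add_argument("--max-tokens", "-t", type=int)
parser.add_argument("--temperature", type=float)
parser.add_argument("--think", action="store_true")
parser.add_argument("--debug", "-d", action="store_true")
parser.add_argument("--raw", "--md", action="store_true", dest="raw")

args = parser.parse_args(["-m", "gpt-4o", "-g", "vercel", "explain", "this"])
```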

Config Command Actions

  • lm config setup: Interactive configuration wizard
  • lm config get [key]: Get configuration value
  • lm config set <key> <value>: Set configuration value
  • lm config validate: Validate configuration and API keys
  • lm config list-gateways: Show available gateways and their status

Features

  • 🎨 Beautiful markdown-formatted responses
  • 🌐 Gateway routing (Vercel AI Gateway, OpenRouter, or direct)
  • ⚙️ Flexible configuration (config file, env vars, CLI flags)
  • 🔑 Model aliases for quick access to favorite models
  • 🖼️ Image input support for compatible models
  • 📁 Context file support
  • 🧠 Reasoning model support (DeepSeek, OpenAI o1, etc.)
  • 🔧 Extensive model support across providers
  • ⚡ Fast and lightweight
  • 🛠️ Easy configuration management

License

MIT License - see LICENSE file for details.

Development

Setup

# Clone the repository
git clone https://github.com/jeffmylife/streamlm.git
cd streamlm

# Install with dev dependencies
uv pip install -e ".[dev]"

Running Tests

All tests use uv run for consistency:

# Run all tests
uv run pytest tests/ -v

# Run with coverage
uv run pytest tests/ -v --cov=src --cov-report=term-missing

# Run specific test types
uv run pytest tests/test_cli.py -v                    # Unit tests only
uv run pytest tests/test_integration.py -v            # Integration tests only

Release Process

# Make your changes
uv version --bump patch
git add .
git commit -m "feat: your changes"
git push

# Create GitHub release (this triggers everything automatically)
gh release create v0.1.11 --generate-notes

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

streamlm-0.1.12.tar.gz (161.6 kB)


Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

streamlm-0.1.12-py3-none-any.whl (30.5 kB)


File details

Details for the file streamlm-0.1.12.tar.gz.

File metadata

  • Download URL: streamlm-0.1.12.tar.gz
  • Upload date:
  • Size: 161.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for streamlm-0.1.12.tar.gz
Algorithm Hash digest
SHA256 ebdae771ebe85af20d47b344e686a7d1ee84fc5371907034fd1d240297752ccb
MD5 a0242a0db38fc08896cd305e15710578
BLAKE2b-256 cd12bb7cdb75664694a12375e896ac8273f2ca66fe392cfec1a7cf18aeea41bc

See more details on using hashes here.

Provenance

The following attestation bundles were made for streamlm-0.1.12.tar.gz:

Publisher: publish.yml on jeffmylife/streamlm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file streamlm-0.1.12-py3-none-any.whl.

File metadata

  • Download URL: streamlm-0.1.12-py3-none-any.whl
  • Upload date:
  • Size: 30.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for streamlm-0.1.12-py3-none-any.whl
Algorithm Hash digest
SHA256 145173e815e6c732cf35acff93e5c525dae093a8a6220e54b51c4f4a54830852
MD5 439b571dd827249935bb5271d570e73a
BLAKE2b-256 86e82fee3d554040745ba08c0b31b8189e9af91ca29d87660e06617722c81496

See more details on using hashes here.

Provenance

The following attestation bundles were made for streamlm-0.1.12-py3-none-any.whl:

Publisher: publish.yml on jeffmylife/streamlm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
