ScriptChat
A terminal-based, scriptable chat client for interacting with local LLMs (via Ollama) and remote LLMs via OpenAI-compatible providers (OpenAI, DeepSeek, and others) and Anthropic Claude.
Why ScriptChat?
ScriptChat fills the gap between writing code against LLM APIs and using GUI chat interfaces. APIs give you control but high friction; GUIs are convenient but hard to automate. ScriptChat gives you both: an interactive TUI for exploration, and scriptable automation for testing and iteration.
- Scriptable: Write .sc scripts with variables and assertions. Pipe into shell workflows. Run prompt regression tests in CI.
- Branchable: Save a conversation, branch it, try different approaches, compare results.
- Multi-provider: Same workflow across Ollama, OpenAI, Anthropic. Switch models mid-conversation.
- File-based: Conversations are directories of text files. Inspect, edit, or version control them.
For developers and power users who want their LLM interactions to be reproducible, scriptable, and under their control.
Features
- Full-screen terminal UI with conversation history, status bar, and input pane
- Extended thinking support for reasoning models (/reason, /thinking)
- File references: register files and include them in prompts (/file, @path)
- Export conversations to Markdown, JSON, or HTML
- Multi-line message support
- Token usage tracking and temperature control
- System prompts per conversation
Requirements
- Python 3.10+
- Optional: Ollama installed and accessible in your PATH
Installation
Via pipx (recommended)
pipx install scriptchat
This installs the scriptchat command (and the shorter sc alias) globally.
From source
- Clone this repository:
  git clone https://github.com/quasientio/scriptchat.git
  cd scriptchat
- Install in development mode:
pip install -e ".[dev]"
To include DeepSeek tokenizer support (for exact token counts):
pip install -e ".[dev,deepseek]"
Quickstart
Step 1: Run interactive setup (one-time)
scriptchat --init
Step 2: Test it immediately with a real example
# Download and run a working script
curl -s https://raw.githubusercontent.com/quasientio/scriptchat/main/examples/quickstart.sc | sc
Step 3: Start chatting
sc
Step 4: Explore more examples in the Examples Gallery below
Configuration
Edit ~/.scriptchat/config.toml directly. See config.toml.example for the full specification.
Key options:
- default_model - Model on startup in provider/model format (e.g., ollama/llama3.2)
- default_temperature - Temperature for new conversations (0.0-2.0)
- system_prompt - Default system prompt (override with /prompt)
- [[providers]] - Provider configs with id, type, api_url, models
Model aliases: Models can have an optional alias for shorter /model commands. Useful for providers with long model names:
models = [
{ name = "accounts/fireworks/models/deepseek-v3", alias = "dsv3" }
]
Then use /model dsv3 instead of the full path. Aliases must be unique and contain only alphanumeric, underscore, dash, or dot characters.
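The alias rule above (alphanumeric, underscore, dash, or dot) can be sketched as a simple regex check. This is an illustration of the validation rule, not ScriptChat's actual implementation:

```python
import re

# Aliases may contain only alphanumerics, underscore, dash, or dot (per the rule above)
ALIAS_RE = re.compile(r"^[A-Za-z0-9_.-]+$")

def is_valid_alias(alias: str) -> bool:
    return bool(ALIAS_RE.match(alias))

print(is_valid_alias("dsv3"))        # True
print(is_valid_alias("kimi-k2.v1"))  # True
print(is_valid_alias("bad alias"))   # False (contains a space)
```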
Thinking models: Models like Kimi K2 Thinking or DeepSeek R1 use internal reasoning tokens. Set max_tokens high enough to allow room for both thinking and output. For models that support reasoning control (like Kimi on Fireworks), add reasoning_levels to enable the /reason command:
models = [
{ name = "accounts/fireworks/models/kimi-k2-thinking", alias = "kimi", max_tokens = 16384, reasoning_levels = ["low", "medium", "high"] }
]
Streaming is auto-enabled when max_tokens > 4096 (required by some providers).
Prompt caching (Fireworks): Set prompt_cache = false at provider level to disable prompt caching for privacy. Some models (like Kimi) don't support the prompt_cache_max_len parameter - use skip_prompt_cache_param = true for those:
[[providers]]
id = "fireworks"
prompt_cache = false # Disable caching for privacy
models = [
{ name = "accounts/fireworks/models/deepseek-v3", alias = "dsv3" },
{ name = "accounts/fireworks/models/kimi-k2-thinking", alias = "kimi", skip_prompt_cache_param = true }
]
API Keys: Set api_key in config or use environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.).
Minimal example:
[general]
default_model = "ollama/llama3"
[[providers]]
id = "ollama"
type = "ollama"
api_url = "http://localhost:11434/api"
models = "llama3,phi3"
Usage
Batch Mode
Run a script file:
scriptchat --run script.sc
Use --continue-on-error to run all assertions even if some fail (exit code is still 1 if any fail):
scriptchat --run tests/prompt-tests.sc --continue-on-error
CLI Flags
| Flag | Description |
|---|---|
| --init | Interactive configuration setup |
| --run FILE | Run a script file in batch mode |
| --continue-on-error | Don't stop on assertion failures |
| --config PATH | Use config file at PATH instead of ~/.scriptchat/config.toml |
| --version | Show version |
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success (all assertions passed) |
| 1 | Failure (assertion failed, config error, or runtime error) |
Commands
All commands start with /:
Conversation
- /new - Start a new conversation
- /save - Save the current conversation
- /open [--archived] [name] - Open a saved conversation. Use --archived to open from archive. Without args, shows interactive selection menu.
- /branch - Create a branch (copy) of the current conversation
- /rename - Rename a saved conversation (renames its directory)
- /chats [--archived|--all] - List saved conversations. Use --archived for archived only, --all for both.
- /archive [index|name|range] [--tag key=value] - Archive conversations by index (3), name, range (1-5), or tag filter
- /unarchive [index|name|range] [--tag key=value] - Restore archived conversations (same syntax as /archive)
- /del [index] - Delete the current conversation, or a saved conversation by index (requires confirmation)
- /clear [index] - Clear all messages from a conversation while keeping metadata (requires saved conversation)
Export/Import
- /export [format] - Export the current conversation (formats: md, json, html; prompts if omitted). json includes full metadata; md/html are minimal, human-friendly transcripts.
- /export-all [format] - Export all saved conversations in the given format.
- /import <path> - Import a conversation exported as md or json into the conversations folder
- /import-chatgpt <path> [--dry-run] - Import conversations from a ChatGPT export ZIP file. Use --dry-run to preview without saving.
Model & Settings
- /model [provider/name|alias] - Switch model. Without args, shows interactive selection menu. With args, switches directly (e.g., /model ollama/llama3 or /model dsv3 if an alias is configured).
- /models - List all configured models by provider (shows aliases, context, reasoning levels)
- /temp - Change the temperature setting
- /reason [level] - Set reasoning level (low, medium, high, max). Without args, shows interactive selection menu. For Anthropic Claude, these map to thinking budgets (4K, 16K, 32K, 55K tokens).
- /thinking [tokens] - Set exact thinking budget in tokens for Anthropic Claude (1024-128000). Use /thinking off to disable. Overrides /reason presets.
- /think-history [on|off] - Toggle whether thinking content is included in conversation history sent to the API. Without args, shows current status. Default is off (thinking displayed but not sent back).
- /timeout <seconds|0|off> - Set the request timeout in seconds, or disable with 0 or off
- /stream [on|off] - Toggle or set streaming of assistant responses
- /prompt [text|clear] - Set or clear the system prompt for this conversation (prompts if omitted)
Files
- /file [--force] <path> - Register a file for use as @path in messages (content is expanded when sending; the message stores @path). Use --force for large files above file_confirm_threshold_bytes.
- /unfile <key> - Unregister a file (removes both the path and basename aliases)
- /files [--long] - List registered files (with sizes and hashes when using --long)
Tags
- /tag key=value - Apply metadata tags to the conversation (shown in /chats and /open)
- /untag <key> - Remove a metadata tag from the conversation
- /tags - List tags on the current conversation
Messaging
- /send <message> - Queue a message (sends immediately if the model is idle)
- /history [n|all] - Show recent user messages in the current conversation (persists if saved/opened; default: last 10)
- /search <pattern> - Search the current conversation for text/regex matches. Shows results in a selection menu with context snippets. Navigate with arrow keys and press Enter to jump to a message.
- /note <text> - Add a note to the conversation (saved and visible, but not sent to the model)
- /undo [n] - Remove the last user/assistant exchange(s) from the conversation. Without n, removes 1.
- /retry - Drop the last assistant message and resend the previous user message
Testing & Debug
- /assert <pattern>, /assert-not <pattern> - Assert the last response contains (or doesn't contain) a text/regex pattern. Case-insensitive. Exits with an error in batch mode if the assertion fails.
- /echo <text> - Print a message without sending it to the model
- /log-level <level> - Adjust logging verbosity (debug/info/warn/error/critical)
- /profile [--full] - Show current settings and registered files
Scripting
- /run <path> - Execute a script file (one command/message per line; lines starting with # are comments)
- /sleep <seconds> - Pause execution for the specified duration (scripts/batch mode only)
- /set <name>=<value> - Define a script variable for use with ${name} syntax. Omit the value (/set var=) to unset.
- /unset <name> - Remove a variable
- /vars - List all defined variables
System
- /help [command|keyword] - Show help for all commands, a specific command, or search by keyword.
- /keys - Show keyboard shortcuts
- /exit - Exit ScriptChat
Multi-line Messages
To enter a multi-line message:
- Type """ and press Enter
- Enter your message across multiple lines
- Type """ on a new line to send
This syntax also works in script files (/run or --run):
"""
Analyze this code for:
- Security issues
- Performance problems
"""
Script Variables
Use /set to define variables that can be referenced with ${name} syntax:
/set model=llama3
/model ${model}
/set greeting=Hello, how are you?
${greeting}
Variables are expanded in both commands and messages. Variable names must start with a letter or underscore and contain only letters, numbers, and underscores.
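The naming and expansion rules above can be sketched with a small regex-based substitution. This is an illustration of the documented behavior, not ScriptChat's actual code:

```python
import re

# Names start with a letter or underscore, then letters, digits, or underscores
VAR_RE = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand(text: str, variables: dict) -> str:
    # Unknown variables are left unexpanded, as documented
    return VAR_RE.sub(lambda m: variables.get(m.group(1), m.group(0)), text)

print(expand("/model ${model}", {"model": "llama3"}))  # /model llama3
print(expand("${missing} stays", {}))                  # ${missing} stays
```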
Unsetting variables: Use /unset or /set var= (empty value) to remove a variable:
/set temp=testing
/unset temp # Remove variable
/set foo=bar
/set foo= # Also removes variable
Environment variable fallback: If a variable isn't defined via /set, ScriptChat checks environment variables. This enables parameterized scripts:
# Run with different configurations
LANGUAGE=Python TOPIC="error handling" scriptchat --run test.sc
LANGUAGE=Rust TOPIC="memory safety" scriptchat --run test.sc
Script variables (/set) take precedence over environment variables. Unknown variables are left unexpanded.
Security: Sensitive environment variables matching patterns like *_KEY, *_SECRET, *_TOKEN, *_PASSWORD are blocked from expansion by default. Configure in config.toml:
[general]
# Disable env var expansion entirely
env_expand_from_environment = false
# Or override the default blocklist with your own patterns
env_var_blocklist = ["MY_PRIVATE_*", "INTERNAL_*"]
# Use empty list to allow all env vars (no blocklist)
env_var_blocklist = []
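The blocklist patterns behave like shell-style globs. A minimal sketch of the matching logic, using Python's fnmatch (illustrative only; the pattern list here mirrors the documented defaults, not the source):

```python
import fnmatch

# Default-style blocklist patterns (see config.toml.example for the real list)
BLOCKLIST = ["*_KEY", "*_SECRET", "*_TOKEN", "*_PASSWORD"]

def is_blocked(name: str) -> bool:
    # fnmatchcase avoids OS-dependent case folding
    return any(fnmatch.fnmatchcase(name, pat) for pat in BLOCKLIST)

print(is_blocked("OPENAI_API_KEY"))  # True (matches *_KEY)
print(is_blocked("LANGUAGE"))        # False
```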
Thinking/Reasoning Models
ScriptChat supports thinking/reasoning models like Kimi, DeepSeek R1, and Claude with extended thinking. The thinking content is:
- Displayed in the UI with <thinking> tags (in gray)
- Saved to conversation files (NNNN_llm_thinking.txt)
- Not sent back in conversation history by default (reduces context usage)
To include thinking content in messages sent to the API:
[general]
include_thinking_in_history = true
Keyboard Shortcuts
| Key | Action |
|---|---|
| Ctrl+Up | Focus conversation pane |
| Ctrl+Home/End | Jump to start/end of conversation |
| Up/Down | Scroll (conversation) or history (input) |
| Tab | Complete commands, paths, and registered keys (bash-like behavior) |
| Escape | Clear input or return to input pane |
| Escape ×2 | Cancel ongoing inference |
| Ctrl+C/D | Exit |
Use /keys for the full list.
File References
/file [--force] <path> registers a file for use in messages. Include @path in any user message to inline the file contents when sending (the stored message keeps @path for readability). Examples:
- Register: /file docs/plan.md
- Send with inline file: Summarize @docs/plan.md and list action items. (you can also use @{docs/plan.md} or @plan.md if unique)
- If an @path isn't registered, the send will error and nothing is sent.
You can register multiple files and mix references in one message. /profile lists full paths of registered files.
Nested references: If a registered file contains @path references to other registered files, those are also expanded (one level deep).
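The one-level expansion can be sketched as follows, using an in-memory registry dict in place of real files (illustrative only, not ScriptChat's implementation):

```python
import re

REF_RE = re.compile(r"@([\w./-]+)")

def expand_refs(text: str, registry: dict, depth: int = 0) -> str:
    """Replace @path refs with registered content; recurse one level for nested refs."""
    def repl(match):
        content = registry.get(match.group(1), match.group(0))
        if content != match.group(0) and depth < 1:
            return expand_refs(content, registry, depth + 1)
        return content
    return REF_RE.sub(repl, text)

registry = {"outer.md": "Intro plus @inner.md", "inner.md": "inner text"}
print(expand_refs("See @outer.md", registry))  # See Intro plus inner text
```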
Folder References
/folder [--force] <path> registers all files in a folder for use in messages. When you reference a folder with @folder-name, it expands to all files with XML tags:
<file path="/path/to/file1.txt">
content of file1
</file>
<file path="/path/to/file2.txt">
content of file2
</file>
Examples:
- Register: /folder src/components
- Send with inline folder: Review the code in @components and suggest improvements.
- Individual files in the folder are still accessible: @file1.txt or @{/path/to/file1.txt}
- Unregister: /unfolder src/components (removes the folder and all its files)
Note: /folder registers files non-recursively (only files directly in the folder, not subdirectories). Use --force to include files larger than the configured threshold.
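The non-recursive expansion into <file> tags can be sketched like this (an illustration of the documented output format, not ScriptChat's code):

```python
import tempfile
from pathlib import Path

def expand_folder(folder) -> str:
    """Wrap each file directly inside `folder` (non-recursive) in <file> tags."""
    parts = []
    for p in sorted(Path(folder).iterdir()):
        if p.is_file():
            parts.append(f'<file path="{p}">\n{p.read_text()}\n</file>')
    return "\n".join(parts)

# Demo with a throwaway folder
d = Path(tempfile.mkdtemp())
(d / "a.txt").write_text("alpha")
(d / "b.txt").write_text("beta")
(d / "sub").mkdir()                       # subdirectory: skipped (non-recursive)
(d / "sub" / "c.txt").write_text("gamma")
print(expand_folder(d))
```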
Token Estimation
When registering files with /file, ScriptChat shows the token count and context percentage:
Registered @README.md (14234 chars, 3548 tokens / 2.7% ctx)
Token counting accuracy varies by provider:
| Model/Provider | Method | Accuracy |
|---|---|---|
| DeepSeek (any provider) | transformers | exact |
| OpenAI (gpt-3/gpt-4/o1/o3) | tiktoken | exact |
| Anthropic | tiktoken cl100k_base | ~approximate |
| Ollama | tiktoken cl100k_base | ~approximate |
| Other openai-compatible | tiktoken cl100k_base | ~approximate |
Approximate counts are prefixed with ~. For exact DeepSeek tokenization, install the optional dependency:
pip install scriptchat[deepseek] # or: pipx inject scriptchat transformers
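The numbers in the sample output above are self-consistent if we assume a 131072-token context window (an assumption for illustration; the actual window depends on the configured model):

```python
chars, tokens = 14234, 3548
context_window = 131072  # assumed; depends on the model config

print(round(chars / tokens, 1))                 # ~4.0 chars per token
print(round(tokens / context_window * 100, 1))  # 2.7 (% of context)
```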
Conversation Storage
Conversations are stored in ~/.scriptchat/conversations/ (or conversations_dir in config) with the following structure:
- Each conversation is a directory: YYYYMMDDHHMM_modelname_savename/
- Messages are stored as individual files: 0001_user.txt, 0002_llm.txt, etc.
- Metadata is stored in meta.json
You can manually edit message files or delete them as needed.
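A saved conversation directory might look like this (hypothetical names, following the conventions above):

```
~/.scriptchat/conversations/202511180945_llama32_my-chat/
├── 0001_user.txt
├── 0002_llm.txt
├── 0002_llm_thinking.txt
└── meta.json
```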
Exports (/export) go to the current working directory by default, or to exports_dir if configured.
Example Workflow
1. Start ScriptChat: scriptchat or sc
2. Chat with the default model
3. Save your conversation: /save then enter a name
4. Switch models: /model then select a model
5. Continue chatting with the new model
6. Rename or branch to organize: /rename new-name or /branch
7. Open a previous conversation: /open
8. Exit when done: /exit or Ctrl+C
Examples Gallery
Quickstart (batch)
/model ollama/llama3.2
What is 2+2?
/assert 4
CI Prompt Testing (batch)
/temp 0.1
/prompt You are a math tutor. Give clear, concise explanations.
Explain the Pythagorean theorem in one sentence.
/assert theorem|triangle|hypotenuse
/assert-not calculus|derivative
Security Audit (batch)
# FILE=app.py scriptchat --run examples/security-audit.sc
/file ${FILE}
"""
Review @${FILE} for security vulnerabilities:
- Hardcoded secrets, SQL injection, XSS, command injection...
"""
/assert-not admin123|sk-1234567890
Prompt Engineering (interactive)
What are the key principles of REST API design?
/save rest-api-baseline
/branch rest-api-detailed
/prompt You are an expert API architect. Be concise and technical.
/retry
Code Review (interactive)
/file examples/files/src/api/routes.py
/file examples/files/tests/test_routes.py
"""
Review @routes.py for security vulnerabilities, focusing on:
- Input validation, authentication, SQL injection...
"""
Research with Reasoning (interactive)
/model anthropic/claude-sonnet-4-20250514
/reason high
/timeout off
"""
Analyze the trade-offs between microservices and monolithic architecture...
"""
See the examples/ folder for full scripts and documentation.
Status Bar
The status bar shows:
- Provider and model (e.g., ollama/llama3.2) with optional reasoning level in parentheses
- Token usage (input/output), with optional context usage percentage
- Conversation ID (or <unsaved> for new conversations)
- Thinking indicator when the model is processing
Example: ollama/llama3.2 (high) | 1234 in / 567 out | 1801/8192 (22.0%) | 202511180945_llama32_my-chat
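The context usage figure in the example is simply tokens used over context size:

```python
used, window = 1801, 8192
print(f"{used}/{window} ({used / window * 100:.1f}%)")  # 1801/8192 (22.0%)
```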
Troubleshooting
"Configuration file not found": Create ~/.scriptchat/config.toml from the example, or run scriptchat --init.
"Ollama is not running": Start Ollama with ollama serve before using ScriptChat with an Ollama provider.
Connection errors: Check that Ollama is running (ollama serve) and accessible at the configured URL.
Model not found: Make sure the model is pulled in Ollama (ollama pull modelname) and configured in config.toml.
Empty or truncated responses from thinking models: Thinking models (Kimi K2, DeepSeek R1, etc.) use tokens for internal reasoning. If the default max_tokens limit is too low, the model may exhaust tokens during thinking before producing output. Set max_tokens in your model config (e.g., max_tokens = 16384).
License
Apache License 2.0 (see LICENSE). Attribution details are in NOTICE.
Download files
Source Distribution
Built Distribution
File details
Details for the file scriptchat-0.5.0.tar.gz.
File metadata
- Download URL: scriptchat-0.5.0.tar.gz
- Upload date:
- Size: 2.1 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b6a5ca71acec679cddd0cb18f20ea53632b2b63288093161eb69f32f55fa1824 |
| MD5 | f6d4b46c7de6a63c2e95e2cff05a592e |
| BLAKE2b-256 | d0ab7000cb4330ee8107cc0f446d2d089f9b86cc5af788e2165095728c991429 |
File details
Details for the file scriptchat-0.5.0-py3-none-any.whl.
File metadata
- Download URL: scriptchat-0.5.0-py3-none-any.whl
- Upload date:
- Size: 2.1 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b92f67d8906a6c6dcd1aeeb42f53166c7e15dc7820492ac93973c4fdc7eb45f6 |
| MD5 | 4e678fd91e1ed3eb9e68c41513051c4d |
| BLAKE2b-256 | 6f1655b88146b2c203d487dddae1bda03d12a6ce0948bb980f09838859ca0896 |