A prompt engineering tool for large language models

KePrompt

A powerful command-line tool for prompt engineering and AI interaction

KePrompt lets you work with multiple AI providers (OpenAI, Anthropic, Google, and more) using simple prompt files and a unified command-line interface. No Python programming required.

Why KePrompt?

  • One tool, many AIs: Switch between GPT-4, Claude, Gemini, and others with a single command
  • Simple prompt language: Write prompts using an easy-to-learn syntax
  • Comprehensive cost tracking: Automatic SQLite-based tracking of all API usage with detailed reporting
  • Conversation management: Save and resume multi-turn conversations
  • Function calling: Extend prompts with file operations, web requests, and custom functions
  • Web GUI: Modern browser-based interface for interactive prompt development
  • Production ready: Built-in logging, error handling, and debugging tools

Quick Start

0. Prepare Your Working Directory

# Create a new project directory with isolated Python environment
mkdir myproject
cd myproject
python3 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

1. Install KePrompt

pip install keprompt

To install in development (editable) mode from a local clone, use:

pip install -e ~/keprompt/ 

2. Initialize your workspace

keprompt init

This creates the prompts/ directory, copies a default hello.prompt, installs default functions, initializes the database, and downloads the model registry — all in one command.

Use keprompt init --force to overwrite existing files.
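
After initialization, your workspace contains the starter prompt, the bundled functions, and the conversation database. The layout looks roughly like this (illustrative; exact contents may vary by version):

myproject/
├── .venv/
└── prompts/
    ├── hello.prompt
    ├── functions/
    └── chats.db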

3. Set up your API key

# Add to your .env file or export directly
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
# ... or add to ~/.env

4. Run the included hello prompt

keprompt chat new --prompt hello

🎉 You should see the AI's response! The system automatically tracks costs and saves the conversation.

Try it with different models and questions:

keprompt chat new --prompt hello --model anthropic/claude-sonnet-4-20250514
keprompt chat new --prompt hello --set question 'tell me a joke'
keprompt chat new --prompt hello --model deepseek/deepseek-chat --set question 'what is 2+2?'

Your First Real Prompt

Let's create something more useful, a file analyzer:

cat > prompts/analyze.prompt << 'EOF'
.prompt "name":"File Analyzer", "version":"1.0.0", "params":{"model":"gpt-4o", "filename":"file_to_analyze"}
.# Analyze any text file
.llm {"model": "<<model>>"}
.system You are an expert text analyst. Provide clear, actionable insights.
.user Please analyze this file:

.include <<filename>>

Provide a summary, key points, and any recommendations.
.exec
EOF

Run it with a parameter:

keprompt chats create --prompt analyze --set filename "README.md"

Run it with two parameters, overriding the default model as well:

keprompt chats create --prompt analyze --set filename "README.md" --set model "openrouter/openai/gpt-oss-20b"

Modern CLI Interface

KePrompt uses an intuitive object-verb command structure:

keprompt <object> <verb> [options]

Core Objects

  • init - Initialize workspace (one-stop setup)
  • prompts - List and manage prompt files
  • chats - Create and manage conversations
  • models - Browse available AI models
  • providers - View AI providers
  • functions - List available functions
  • server - Start/stop web interface
  • database - Manage conversation database

Common Commands

# List available prompts
keprompt prompts get

# Create a new conversation
keprompt chats create --prompt hello --set name "Alice"

# Continue a conversation
keprompt chats reply <chat-id> "Tell me more"

# List conversations with costs
keprompt chats get

# Browse available models
keprompt models get --company OpenAI

# Start web interface
keprompt server start --web-gui

See the Knowledge Engineer's Guide for the full reference.

Web GUI Interface

KePrompt includes a modern web-based interface for interactive development:

# Start server with web GUI
keprompt server start --web-gui

# Specify port (optional)
keprompt server start --web-gui --port 8080

# Development mode with auto-reload
keprompt server start --web-gui --reload

Then open your browser to http://localhost:8080

Features:

  • Interactive chat interface
  • Real-time cost tracking
  • Prompt editor with syntax highlighting
  • Model selection and comparison
  • Function testing and debugging
  • Conversation history browser

Core Concepts

Prompt Files

  • Stored in prompts/ directory with .prompt extension
  • Use simple line-based syntax starting with . for commands
  • Support variables, functions, and multi-turn conversations

The Prompt Language

  • .prompt (REQUIRED): Define prompt metadata. Example: .prompt "name":"My Prompt", "version":"1.0.0"
  • .llm: Configure the AI model. Example: .llm {"model": "gpt-4o"}
  • .system: Set the system message. Example: .system You are a helpful assistant
  • .user: Add a user message. Example: .user What is the weather like?
  • .tool_call: Represent an LLM tool call (manual/replay/debug). Example: .tool_call readfile(filename="data.txt") id=call_abc123
  • .tool_result: Represent a tool result (manual/replay/debug). Example: .tool_result id=call_abc123 name=readfile
  • .exec: Send the conversation to the AI and get a response. Example: .exec
  • .cmd: Call a function. Example: .cmd readfile(filename="data.txt")
  • .print: Output to the console. Example: .print The result is: <<last_response>>
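
Putting a few of these statements together, a minimal prompt that pulls a file into the conversation with .cmd, asks the model about it, and prints the reply could look like the sketch below. This is illustrative only; the prompt name, file name, and default model are placeholders to adjust for your setup:

# Illustrative sketch; prompt name, file name, and model are placeholders
cat > prompts/file_summary.prompt << 'EOF'
.prompt "name":"File Summary", "version":"1.0.0", "params":{"model":"gpt-4o-mini"}
.llm {"model": "<<model>>"}
.system You are a helpful assistant
.user Summarize the following file:
.cmd readfile(filename="data.txt")
.exec
.print The result is: <<last_response>>
EOF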

Representing LLM Tool Calls and Tool Results

KePrompt supports two statement types that let you represent tool calls produced by the LLM API and the corresponding tool responses.

These are most useful for:

  • Replaying/debugging conversations
  • Creating test fixtures / example chats
  • Manually reconstructing a conversation that includes tool use

They are distinct from .cmd, which executes a local function during prompt execution.

.tool_call (LLM → tool)

Syntax

.tool_call function_name(param=value, ...) id=call_id

Example

.tool_call readfile(filename="data.txt") id=call_001

.tool_result (tool → LLM)

Syntax

.tool_result id=call_id name=function_name
<result text can be multi-line>

Example

.tool_result id=call_001 name=readfile
File contents:
Hello from data.txt

End-to-end example (manual reconstruction)

.prompt "name":"Toolcall Example", "version":"1.0.0", "params":{"model":"gpt-4o-mini"}
.user Please read data.txt and summarize it.

.# These two statements represent what the *LLM API* would have produced,
.# and the corresponding tool response KePrompt would send back:
.tool_call readfile(filename="data.txt") id=call_001
.tool_result id=call_001 name=readfile
Hello from data.txt

.assistant Summary: The file contains a short greeting.

Prompt Metadata (Required)

Every prompt file must start with a .prompt statement that defines metadata:

.prompt "name":"My Prompt Name", "version":"1.0.0", "params":{"model":"gpt-4o-mini"}

Required fields:

  • name: Human-readable prompt name (used in cost tracking)
  • version: Semantic version for tracking changes

Optional fields:

  • params: Default parameters and documentation

Examples:

# Simple prompt
.prompt "name":"Hello World", "version":"1.0.0"

# With parameters
.prompt "name":"Code Reviewer", "version":"2.1.0", "params":{"model":"gpt-4o", "language":"python"}

# Research assistant
.prompt "name":"Research Assistant", "version":"1.5.0", "params":{"model":"claude-3-5-sonnet-20241022", "depth":"comprehensive"}

Variables

Use <<variable>> syntax for substitution:

# In your prompt file
.user Hello <<name>>, today is <<date>>

# Run with parameters
keprompt chats create --prompt greeting --set name "Alice" --set date "Monday"

Built-in Functions

  • readfile(filename) - Read file contents
  • writefile(filename, content) - Write to file (with backup)
  • wwwget(url) - Fetch web content
  • execcmd(cmd) - Execute a shell command (see the example below)
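
For example, execcmd can pull shell output into a prompt the same way wwwget is used in the Research Assistant workflow below. This is an illustrative sketch; the prompt name and the shell command are placeholders:

# Illustrative sketch; prompt name and shell command are placeholders
cat > prompts/git_summary.prompt << 'EOF'
.prompt "name":"Git Summary", "version":"1.0.0", "params":{"model":"gpt-4o-mini"}
.llm {"model": "gpt-4o-mini"}
.system You are a helpful assistant
.user Here is the current repository status:
.cmd execcmd(cmd="git status --short")
Summarize what changed in one short paragraph.
.exec
EOF

keprompt chats create --prompt git_summary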

Common Workflows

Research Assistant

cat > prompts/research.prompt << 'EOF'
.prompt "name":"Research Assistant", "version":"1.0.0", "params":{"model":"claude-3-5-sonnet-20241022", "topic":"research_topic"}
.llm {"model": "claude-3-5-sonnet-20241022"}
.system You are a research assistant. Provide thorough, well-sourced information.
.user Research this topic: <<topic>>
.cmd wwwget(url="https://en.wikipedia.org/wiki/<<topic>>")
Based on this information, provide a comprehensive overview with key facts and recent developments.
.exec
EOF

keprompt chats create --prompt research --set topic "Artificial_Intelligence"

Code Review

cat > prompts/review.prompt << 'EOF'
.prompt "name":"Code Reviewer", "version":"1.0.0", "params":{"model":"gpt-4o", "codefile":"path/to/file"}
.llm {"model": "gpt-4o"}
.system You are a senior software engineer. Provide constructive code reviews.
.user Please review this code file:

.include <<codefile>>

Focus on: code quality, potential bugs, performance, and best practices.
.exec
EOF

keprompt chats create --prompt review --set codefile "src/main.py"

Interactive Chat Session

# Start a conversation
CHAT_ID=$(keprompt chats create --prompt hello --json | jq -r '.data.chat_id')

# Continue the conversation
keprompt chats reply $CHAT_ID "Can you explain that in more detail?"
keprompt chats reply $CHAT_ID "What about edge cases?"

# View full conversation
keprompt chats get $CHAT_ID

Working with Models

List available models

# See all models
keprompt models get

# Filter by provider
keprompt models get --company OpenAI
keprompt models get --company Anthropic

# Search by name
keprompt models get --name "gpt-4*"
keprompt models get --name "*sonnet*"

Compare costs

# Show pricing for all GPT models
keprompt models get --name "gpt*" --company OpenAI

Update model registry

# Fetch latest models from providers
keprompt models update

Cost Tracking & Analysis

KePrompt automatically tracks all API usage with comprehensive cost analysis.

View Conversation Costs

# List recent conversations with costs
keprompt chats get --limit 20

# View specific chat details
keprompt chats get <chat-id>

# Get cost summary
sqlite3 prompts/chats.db "SELECT SUM(total_cost) FROM chats"

Database Management

# View database info
keprompt database get

# Clean up old conversations
keprompt chats delete --days 30

# Keep only recent conversations
keprompt chats delete --count 100

Custom Functions

KePrompt supports custom functions written in any language. See the detailed guide at ks/creating-keprompt-functions.context.md

Quick Example

Create an executable in prompts/functions/:

#!/usr/bin/env python3
import json, sys

def get_weather(city: str) -> str:
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 72°F"

# JSON Schema manifest describing each function this executable provides
FUNCTIONS = [
    {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"],
            "additionalProperties": False
        }
    }
]

if __name__ == "__main__":
    # --list-functions prints the manifest so the functions can be discovered;
    # otherwise argv[1] names the function and the JSON arguments arrive on stdin.
    if sys.argv[1] == "--list-functions":
        print(json.dumps(FUNCTIONS))
    elif sys.argv[1] == "get_weather":
        args = json.loads(sys.stdin.read())
        print(get_weather(**args))

Make it executable:

chmod +x prompts/functions/weather
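
Because the script reads the function name from its arguments and the JSON arguments from stdin, you can sanity-check it by hand before using it from a prompt:

# Print the JSON function manifest
prompts/functions/weather --list-functions

# Call the function directly with JSON arguments on stdin
echo '{"city": "San Francisco"}' | prompts/functions/weather get_weather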

Use in prompts:

cat > prompts/weather_check.prompt << 'EOF'
.prompt "name":"Weather Check", "version":"1.0.0", "params":{"city":"default_city"}
.llm {"model": "gpt-4o-mini"}
.user What's the weather like in <<city>>? Based on the weather, suggest appropriate clothing.
.exec
EOF

keprompt chats create --prompt weather_check --set city "San Francisco"

For comprehensive documentation on creating custom functions, see ks/creating-keprompt-functions.context.md

Conversation Management

Create and Continue Conversations

# Start a new conversation
keprompt chats create --prompt hello

# Continue with a chat ID
keprompt chats reply a1b2c3d4 "Tell me more about that"

# Show full conversation history
keprompt chats reply a1b2c3d4 --full "Thanks for the explanation"

List and View Conversations

# List all conversations
keprompt chats get

# View specific conversation
keprompt chats get a1b2c3d4

# List with filters
keprompt chats get --limit 10

View Conversation Formats

KePrompt offers multiple viewing formats to help you understand and debug your conversations:

# View conversation messages (default)
keprompt chats get a1b2c3d4 --format=messages

# View prompt source code (statements)
keprompt chats get a1b2c3d4 --format=statements

# View cost summary and metadata
keprompt chats get a1b2c3d4 --format=summary

# View raw JSON data
keprompt chats get a1b2c3d4 --format=raw

Format Aliases (shortcuts for convenience):

  • --format=msg / msgs / message → messages view
  • --format=stmt / stmts / statement → statements view
  • --format=sum → summary view
  • --format=json → raw JSON view

Format Descriptions:

  • messages: View the conversation (user/assistant dialogue with model info)
  • statements: View the prompt source code (statements such as .user, .exec, .set)
  • summary: View metrics (costs, tokens, API calls, timing)
  • raw: View the complete data (full JSON with all metadata)

Examples:

# Debug which statements were executed
keprompt chats get a1b2c3d4 --format=stmt --pretty

# Quick cost check
keprompt chats get a1b2c3d4 --format=sum --pretty

# Export conversation for analysis
keprompt chats get a1b2c3d4 --format=json > conversation.json

Clean Up

# Delete specific conversation
keprompt chats delete a1b2c3d4

# Delete old conversations
keprompt chats delete --days 30

# Keep only recent conversations
keprompt chats delete --count 100

Server Management

Start Server

# Start with web GUI
keprompt server start --web-gui

# Specify port
keprompt server start --web-gui --port 8080

# Development mode with auto-reload
keprompt server start --web-gui --reload

# Start in specific directory
keprompt server start --web-gui --directory /path/to/project

Manage Servers

# List running servers
keprompt server list --active-only

# Check status
keprompt server status

# Stop server
keprompt server stop

# Stop all servers
keprompt server stop --all

Output Formats

Human-Readable (Default in Terminal)

Rich formatted tables with colors and alignment.

Machine-Readable (JSON)

# Get JSON output
keprompt chats get --json

# Use with jq
keprompt chats get --json | jq '.data[] | select(.total_cost > 0.01)'

# Get chat ID programmatically
CHAT_ID=$(keprompt chats create --prompt hello --json | jq -r '.data.chat_id')

# Extract a value from the JSON output (example: last_response)
# (Requires jq: https://stedolan.github.io/jq/)
keprompt chat new --prompt Test --json | jq -r '.meta.variables.last_response'

# If you don't have jq, you can do the same with python3:
keprompt chat new --prompt Test --json | \
  python3 -c 'import json,sys; d=json.load(sys.stdin); print(d["meta"]["variables"]["last_response"])'

Tips & Best Practices

1. Start Simple

Begin with basic prompts and gradually add complexity.

2. Use the Web GUI for Development

The web interface provides a better development experience with real-time feedback.

keprompt server start --web-gui --reload

3. Manage Costs

  • Use cheaper models for development (gpt-4o-mini, claude-3-haiku)
  • Monitor costs with keprompt chats get
  • Check model pricing with keprompt models get

4. Organize Your Prompts

prompts/
├── research/
│   ├── academic.prompt
│   └── market.prompt
├── coding/
│   ├── review.prompt
│   └── debug.prompt
└── content/
    ├── blog.prompt
    └── social.prompt

5. Version Control

Keep your prompts in git to track what works best.

6. Test Across Models

The same prompt may work differently with different models. Test and compare.
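
A simple way to compare is to run the same prompt against several models in a loop, then review the results and costs side by side. The model IDs below are just examples; use keprompt models get to see what is available in your registry:

# Model IDs are examples; adjust to the models in your registry
for model in gpt-4o-mini gpt-4o anthropic/claude-sonnet-4-20250514; do
  keprompt chats create --prompt analyze --set filename "README.md" --set model "$model"
done

# Compare responses and per-chat costs
keprompt chats get --limit 10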

Troubleshooting

Common Issues

"No models found"

keprompt models update

"API key not found"

# Add to .env file
echo 'OPENAI_API_KEY=sk-...' >> .env
# Or export directly
export OPENAI_API_KEY="sk-..."

"Prompt not found"

# List available prompts
keprompt prompts get
# Check prompts directory exists
ls prompts/

"Server already running"

# Check status
keprompt server status
# Stop existing server
keprompt server stop

Getting Help

# Show all options
keprompt --help

# Get help for specific object
keprompt chats --help
keprompt server --help

Documentation

What's Next?

  • Explore the examples in the prompts/ directory
  • Try the web GUI for interactive development
  • Create custom functions for your specific needs
  • Integrate with your workflow using the JSON API

Contributing

KePrompt is open source! Contributions are welcome on GitHub.

License

MIT


KePrompt: Making AI interaction simple, powerful, and cost-effective.
