A prompt engineering tool for large language models
KePrompt
A powerful command-line tool for prompt engineering and AI interaction
KePrompt lets you work with multiple AI providers (OpenAI, Anthropic, Google, and more) using simple prompt files and a unified command-line interface. No Python programming required.
Why KePrompt?
- One tool, many AIs: Switch between GPT-4, Claude, Gemini, and others with a single command
- Simple prompt language: Write prompts using an easy-to-learn syntax
- Cost tracking: Monitor token usage and costs across all providers
- Conversation management: Save and resume multi-turn conversations
- Function calling: Extend prompts with file operations, web requests, and custom functions
- Production ready: Built-in logging, error handling, and debugging tools
Quick Start
0. Prepare Your Working Directory
# Create a new project directory, install private python environment, etc.
mkdir myproject
cd myproject
python3 -m venv .venv
source .venv/bin/activate
1. Install KePrompt
pip install keprompt
2. Initialize your workspace
keprompt --init
This creates the prompts/ directory and installs built-in functions.
3. Set up your API key
keprompt -k
Choose your AI provider and enter your API key (stored securely in your system keyring).
4. Create your first prompt
cat > prompts/hello.prompt << 'EOF'
.# My first keprompt file
.llm {"model": "gpt-4o-mini"}
.system You are a helpful assistant.
.user Hello! Please introduce yourself and explain what you can help with.
.exec
EOF
5. Run your prompt
keprompt -e hello --debug
🎉 You should see the AI's response! The --debug flag shows detailed execution information.
Your First Real Prompt
Let's create something more useful - a file analyzer:
cat > prompts/analyze.prompt << 'EOF'
.# Analyze any text file
.llm {"model": "gpt-4o"}
.system You are an expert text analyst. Provide clear, actionable insights.
.user Please analyze this file:
===
.include <<filename>>
===
Provide a summary, key points, and any recommendations.
.exec
EOF
Run it with a parameter:
keprompt -e analyze --param filename "README.md" --debug
Core Concepts
Prompt Files
- Stored in the prompts/ directory with the .prompt extension
- Use a simple line-based syntax where commands start with .
- Support variables, functions, and multi-turn conversations
The Prompt Language
| Command | Purpose | Example |
|---|---|---|
| .llm | Configure the AI model | .llm {"model": "gpt-4o"} |
| .system | Set the system message | .system You are a helpful assistant |
| .user | Add a user message | .user What is the weather like? |
| .exec | Send to the AI and get a response | .exec |
| .cmd | Call a function | .cmd readfile(filename="data.txt") |
| .print | Output to the console | .print The result is: <<last_response>> |
Variables
Use <<variable>> syntax for substitution:
# In your prompt file
.user Hello <<name>>, today is <<date>>
# Run with parameters
keprompt -e greeting --param name "Alice" --param date "Monday"
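Under the hood, this kind of placeholder substitution is straightforward. A minimal Python sketch of the idea (illustrative only, not keprompt's actual implementation):

```python
import re

def substitute(template: str, params: dict) -> str:
    """Replace each <<name>> placeholder with its value from params."""
    return re.sub(r"<<(\w+)>>", lambda m: params[m.group(1)], template)

print(substitute("Hello <<name>>, today is <<date>>",
                 {"name": "Alice", "date": "Monday"}))
# Hello Alice, today is Monday
```

Unresolved placeholders raise a KeyError here; keprompt's own handling of missing parameters may differ.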
Built-in Functions
- readfile(filename) - Read file contents
- writefile(filename, content) - Write to a file (with backup)
- wwwget(url) - Fetch web content
- askuser(question) - Prompt the user for input
- execcmd(cmd) - Execute a shell command
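For example, combining readfile with the commands from the prompt language table above (a sketch; summarize.prompt and notes.txt are placeholder names):

```text
.# prompts/summarize.prompt
.llm {"model": "gpt-4o-mini"}
.user Summarize this file in three bullet points:
.cmd readfile(filename="notes.txt")
.exec
.print Summary: <<last_response>>
```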
Common Workflows
Research Assistant
cat > prompts/research.prompt << 'EOF'
.llm {"model": "claude-3-5-sonnet-20241022"}
.system You are a research assistant. Provide thorough, well-sourced information.
.user Research this topic: <<topic>>
.cmd wwwget(url="https://en.wikipedia.org/wiki/<<topic>>")
Based on this information, provide a comprehensive overview with key facts and recent developments.
.exec
EOF
keprompt -e research --param topic "Artificial_Intelligence"
Code Review
cat > prompts/review.prompt << 'EOF'
.llm {"model": "gpt-4o"}
.system You are a senior software engineer. Provide constructive code reviews.
.user Please review this code file:
.include <<codefile>>
Focus on: code quality, potential bugs, performance, and best practices.
.exec
EOF
keprompt -e review --param codefile "src/main.py"
Content Generation
cat > prompts/blog.prompt << 'EOF'
.llm {"model": "gpt-4o"}
.system You are a professional content writer.
.user Write a file named "blog_<<topic>>.md" containing a blog post about: <<topic>>
Target audience: <<audience>>
Tone: <<tone>>
Length: approximately <<length>> words
.exec
EOF
keprompt -e blog --param topic "AI_Tools" --param audience "developers" --param tone "informative" --param length "800"
Working with Models
List available models
# See all models
keprompt -m
# Filter by provider
keprompt -m --company openai
keprompt -m --company anthropic
# Search by name
keprompt -m gpt-4
keprompt -m "*sonnet*"
Compare costs
# Show pricing for all GPT models
keprompt -m gpt --company openai
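The token-cost arithmetic behind such comparisons is simple: input and output tokens are priced separately, usually per million tokens. A sketch with hypothetical prices (real rates vary by model and date; check keprompt -m for current pricing):

```python
# Hypothetical (input, output) prices in USD per million tokens
PRICES = {"gpt-4o": (2.50, 10.00), "gpt-4o-mini": (0.15, 0.60)}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request from its token counts."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# A 12,000-token prompt with an 800-token response on the cheaper model
print(f"${cost_usd('gpt-4o-mini', 12_000, 800):.6f}")
```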
Conversation Management
Start a conversation
keprompt -e chat --conversation my_session --debug
Continue a conversation
keprompt --conversation my_session --answer "Tell me more about the second point"
Resume with logging
keprompt --conversation my_session --answer "Can you provide examples?" --debug
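Conceptually, resuming a conversation just means reloading the accumulated message history and appending the new turn before the next request. A sketch of that idea (keprompt's actual on-disk format is not documented here; the conversations/ path and JSON layout are assumptions for illustration):

```python
import json
import pathlib

def append_turn(session: str, role: str, content: str,
                root: pathlib.Path = pathlib.Path("conversations")) -> list:
    """Load a session's message history, append one turn, and save it back."""
    root.mkdir(exist_ok=True)
    path = root / f"{session}.json"
    history = json.loads(path.read_text()) if path.exists() else []
    history.append({"role": role, "content": content})
    path.write_text(json.dumps(history, indent=2))
    return history

append_turn("my_session", "user", "Tell me more about the second point")
```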
Custom Functions
Create executable functions in any language:
# Create a custom function
cat > prompts/functions/weather << 'EOF'
#!/usr/bin/env python3
import json, sys, requests
def get_schema():
return [{
"name": "get_weather",
"description": "Get current weather for a city",
"parameters": {
"type": "object",
"properties": {
"city": {"type": "string", "description": "City name"}
},
"required": ["city"]
}
}]
if sys.argv[1] == "--list-functions":
print(json.dumps(get_schema()))
elif sys.argv[1] == "get_weather":
args = json.loads(sys.stdin.read())
# Your weather API logic here
print(f"Weather in {args['city']}: Sunny, 72°F")
EOF
chmod +x prompts/functions/weather
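Following the convention shown above (a script answers --list-functions with a JSON schema and reads call arguments as JSON on stdin), you can exercise a function directly from the shell before wiring it into a prompt. The echoer script below is a hypothetical minimal example:

```shell
# Create a minimal function script following the same convention
mkdir -p prompts/functions
cat > prompts/functions/echoer << 'EOF'
#!/usr/bin/env python3
import json, sys
SCHEMA = [{"name": "echo_text", "description": "Echo back the given text",
           "parameters": {"type": "object",
                          "properties": {"text": {"type": "string"}},
                          "required": ["text"]}}]
if sys.argv[1] == "--list-functions":
    print(json.dumps(SCHEMA))          # schema discovery
elif sys.argv[1] == "echo_text":
    args = json.loads(sys.stdin.read())  # arguments arrive on stdin
    print(f"echo: {args['text']}")
EOF
chmod +x prompts/functions/echoer

# Discovery: list the schema the script advertises
./prompts/functions/echoer --list-functions

# Invocation: pass arguments as JSON on stdin
echo '{"text": "hello"}' | ./prompts/functions/echoer echo_text
```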
Use in prompts:
cat > prompts/weather_check.prompt << 'EOF'
.llm {"model": "gpt-4o-mini"}
.user Use the available functions to find out what the weather is like in <<city>>.
Based on this weather, suggest appropriate clothing.
.exec
EOF
keprompt -e weather_check --param city "San Francisco" --debug
Command Reference
| Command | Description |
|---|---|
| keprompt -e <name> | Execute a prompt file |
| keprompt -p | List available prompts |
| keprompt -m | List available models |
| keprompt -f | List available functions |
| keprompt -k | Add or update API keys |
| keprompt --debug | Enable detailed logging |
| keprompt --conversation <name> | Manage conversations |
| keprompt --param key value | Set variables |
Tips & Best Practices
1. Start Simple
Begin with basic prompts and gradually add complexity.
2. Use Debug Mode
Always use --debug when developing prompts to see what's happening.
3. Manage Costs
- Use cheaper models for development (gpt-4o-mini, claude-3-haiku)
- Monitor token usage with the debug output
- Check model pricing with keprompt -m
4. Organize Your Prompts
prompts/
├── research/
│ ├── academic.prompt
│ └── market.prompt
├── coding/
│ ├── review.prompt
│ └── debug.prompt
└── content/
├── blog.prompt
└── social.prompt
5. Version Control
Keep your prompts in git to track what works best.
6. Test Across Models
The same prompt may work differently with different models. Test and compare.
Troubleshooting
Common Issues
"No models found"
- Run keprompt --init to set up the workspace
- Check your internet connection for model updates
"API key not found"
- Run keprompt -k to add your API key
- Ensure you have credits/access with your AI provider
"Function not found"
- Run keprompt -f to see available functions
- Check that custom functions are executable (chmod +x)
"Prompt file not found"
- Ensure files are in the prompts/ directory with the .prompt extension
- Use keprompt -p to list available prompts
Getting Help
# Show all options
keprompt --help
# List statement types
keprompt -s
# Show prompt content
keprompt -l <promptname>
# Debug a prompt
keprompt -e <promptname> --debug
What's Next?
- Explore the examples in the prompts/ directory
- Create custom functions for your specific needs
- Set up conversations for complex multi-turn interactions
- Integrate with your workflow using shell scripts or CI/CD
Contributing
KePrompt is open source! Contributions are welcome on GitHub.
License
KePrompt: Making AI interaction simple, powerful, and cost-effective.