# PromptLab

PromptLab is a free, lightweight, open-source experimentation tool for Gen AI applications.
## 📋 Table of Contents

- Overview
- Features
- Installation
- Quick Start
- Core Concepts
- Supported Models
- Examples
- Documentation
- Articles & Tutorials
- CI/CD
- Contributing
- License
## Overview 🔍
PromptLab is a free, lightweight, open-source experimentation tool for Gen AI applications. It streamlines prompt engineering, making it easy to set up experiments, evaluate prompts, and track them in production - all without requiring any cloud services or complex infrastructure.
With PromptLab, you can:

- Create and manage prompt templates with versioning
- Build and maintain evaluation datasets
- Run experiments with different models and prompts
- Evaluate model performance using built-in and custom metrics
- Compare experiment results side-by-side
- Deploy optimized prompts to production
## Features ✨
- Truly Lightweight: No cloud subscription, no additional servers, not even Docker - just a simple Python package
- Easy to Adopt: No ML or Data Science expertise required
- Self-contained: No need for additional cloud services for tracking or collaboration
- Seamless Integration: Works within your existing web, mobile, or backend project
- Flexible Evaluation: Use built-in metrics or bring your own custom evaluators
- Visual Studio: Compare experiments and track assets through a local web interface
- Multiple Model Support: Works with Azure OpenAI, Ollama, OpenRouter, DeepSeek, and more
- Version Control: Automatic versioning of all assets for reproducibility
- Async Support: Run experiments and invoke models asynchronously for improved performance
## Installation 📦

```bash
pip install promptlab
```

It's recommended to use a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install promptlab
```
## Quick Start 🚀

```python
from promptlab import PromptLab
from promptlab.types import PromptTemplate, Dataset

# Initialize PromptLab with SQLite storage
tracer_config = {
    "type": "sqlite",
    "db_file": "./promptlab.db"
}
pl = PromptLab(tracer_config)

# Create a prompt template
prompt_template = PromptTemplate(
    name="essay_feedback",
    description="A prompt for generating feedback on essays",
    system_prompt="You are a helpful assistant who can provide feedback on essays.",
    user_prompt="The essay topic is - <essay_topic>.\n\nThe submitted essay is - <essay>\nNow write feedback on this essay."
)
pt = pl.asset.create(prompt_template)

# Create a dataset
dataset = Dataset(
    name="essay_samples",
    description="Sample essays for evaluation",
    file_path="./essays.jsonl"
)
ds = pl.asset.create(dataset)

# Run an experiment
experiment_config = {
    "model": {
        "type": "ollama",
        "inference_model_deployment": "llama2",
        "embedding_model_deployment": "llama2"
    },
    "prompt_template": {
        "name": pt.name,
        "version": pt.version
    },
    "dataset": {
        "name": ds.name,
        "version": ds.version
    },
    "evaluation": [
        {
            "type": "custom",
            "metric": "length",
            "column_mapping": {
                "response": "$inference"
            }
        }
    ]
}
pl.experiment.run(experiment_config)

# Start the PromptLab Studio to view results
pl.studio.start(8000)
```
### Async Support

PromptLab also supports asynchronous operations:

```python
import asyncio

from promptlab import PromptLab

async def main():
    # Initialize PromptLab
    tracer_config = {
        "type": "sqlite",
        "db_file": "./promptlab.db"
    }
    pl = PromptLab(tracer_config)

    # Run an experiment asynchronously
    # (experiment_config is the same dictionary shown in the Quick Start)
    await pl.experiment.run_async(experiment_config)

    # Start the PromptLab Studio asynchronously
    await pl.studio.start_async(8000)

# Run the async main function
asyncio.run(main())
```
## Core Concepts 🧩

### Tracer

Storage that maintains assets and experiments. Currently uses SQLite for simplicity and portability.

### Assets

Immutable artifacts used in experiments, with automatic versioning:

- Prompt Templates: Prompts with optional placeholders for dynamic content
- Datasets: JSONL files containing evaluation data
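
Since datasets are plain JSONL files, one record per line, you can build one with nothing but the standard library. A minimal sketch, assuming the field names mirror the `<essay_topic>` and `<essay>` placeholders used in the Quick Start template (the essay texts below are invented filler):

```python
import json

# Each line of the JSONL file is one evaluation record. The field names
# are assumed to match the placeholders in the Quick Start prompt template.
records = [
    {"essay_topic": "The role of renewable energy",
     "essay": "Solar and wind power have grown rapidly over the last decade..."},
    {"essay_topic": "Remote work culture",
     "essay": "Working from home has reshaped how teams collaborate..."},
]

with open("essays.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```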
### Experiments

Evaluate prompts against datasets using specified models and metrics.

### PromptLab Studio

A local web interface for visualizing experiments and comparing results.
## Supported Models 🤖
- Azure OpenAI: Connect to Azure-hosted OpenAI models
- Ollama: Run experiments with locally-hosted models
- OpenRouter: Access a wide range of AI models (OpenAI, Anthropic, DeepSeek, Mistral, etc.) via OpenRouter API
- Custom Models: Integrate your own model implementations
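
The Ollama config in the Quick Start shows the general shape of a model block; configs for other providers presumably follow the same pattern. The `azure_openai` keys below (API key, version, endpoint) are illustrative guesses, not a confirmed schema — check the documentation for the exact field names:

```python
# Taken directly from the Quick Start:
ollama_model = {
    "type": "ollama",
    "inference_model_deployment": "llama2",
    "embedding_model_deployment": "llama2",
}

# Hypothetical Azure OpenAI config; every key other than "type" and the
# two *_deployment fields is an assumption for illustration only.
azure_openai_model = {
    "type": "azure_openai",
    "api_key": "<your-api-key>",                          # hypothetical field
    "api_version": "2024-02-01",                          # hypothetical field
    "endpoint": "https://<resource>.openai.azure.com",    # hypothetical field
    "inference_model_deployment": "gpt-4o",
    "embedding_model_deployment": "text-embedding-ada-002",
}
```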
## Examples 📚

- Quickstart: Getting started with PromptLab
- Asset versioning: Versioning prompts and datasets
- Custom Metric: Creating custom evaluation metrics
- Async Example: Using async functionality with Ollama and OpenRouter models for improved performance
- Custom Model: Bring your own model for evaluation
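
The Quick Start's evaluation config references a custom `length` metric whose `column_mapping` feeds the model output (`$inference`) into a `response` argument. How PromptLab expects custom metrics to be registered is not shown on this page, so the sketch below is only the scoring logic a metric like that might implement:

```python
# A plain-Python sketch of what the "length" custom metric from the Quick
# Start might compute: the character count of the model's response. The
# registration mechanism is a detail of the PromptLab API not shown here.
def length_metric(response: str) -> int:
    """Score a model response by its character count."""
    return len(response)

print(length_metric("This essay makes a clear argument."))  # → 34
```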
## Documentation 📖
For comprehensive documentation, visit our Documentation Page.
## Articles & Tutorials 📝
- Evaluating prompts locally with Ollama and PromptLab
- Creating custom prompt evaluation metrics with PromptLab
## CI/CD 🔄
PromptLab uses GitHub Actions for continuous integration and testing:
- Unit Tests: Run unit tests for all components of PromptLab
- Integration Tests: Run integration tests that test the interaction between components
- Performance Tests: Run performance tests to ensure performance requirements are met
The tests are organized into the following directories:

- `tests/unit/`: Unit tests for individual components
- `tests/integration/`: Tests that involve multiple components working together
- `tests/performance/`: Tests that measure performance
- `tests/fixtures/`: Common test fixtures and utilities
You can find more information about the CI/CD workflows in the .github/workflows directory.
## Contributing 👥

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License 📄
This project is licensed under the MIT License - see the LICENSE file for details.