PromptLab

A free, lightweight, open-source experimentation tool for Gen AI applications



Overview 🔍

PromptLab is a free, lightweight, open-source experimentation tool for Gen AI applications. It streamlines prompt engineering, making it easy to set up experiments, evaluate prompts, and track them in production - all without requiring any cloud services or complex infrastructure.

With PromptLab, you can:

  • Create and manage prompt templates with versioning
  • Build and maintain evaluation datasets
  • Run experiments with different models and prompts
  • Evaluate model performance using built-in and custom metrics
  • Compare experiment results side-by-side
  • Deploy optimized prompts to production
(Screenshot: the PromptLab Studio interface)

Features ✨

  • Truly Lightweight: No cloud subscription, no additional servers, not even Docker - just a simple Python package
  • Easy to Adopt: No ML or Data Science expertise required
  • Self-contained: No need for additional cloud services for tracking or collaboration
  • Seamless Integration: Works within your existing web, mobile, or backend project
  • Flexible Evaluation: Use built-in metrics or bring your own custom evaluators
  • Visual Studio: Compare experiments and track assets through a local web interface
  • Multiple Model Support: Works with Azure OpenAI, Ollama, OpenRouter, DeepSeek, and more
  • Version Control: Automatic versioning of all assets for reproducibility
  • Async Support: Run experiments and invoke models asynchronously for improved performance

Installation 📦

pip install promptlab

It's recommended to use a virtual environment:

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install promptlab

Quick Start 🚀

from promptlab import PromptLab
from promptlab.types import PromptTemplate, Dataset

# Initialize PromptLab with SQLite storage
tracer_config = {
    "type": "sqlite",
    "db_file": "./promptlab.db"
}
pl = PromptLab(tracer_config)

# Create a prompt template
prompt_template = PromptTemplate(
    name="essay_feedback",
    description="A prompt for generating feedback on essays",
    system_prompt="You are a helpful assistant who can provide feedback on essays.",
    user_prompt="The essay topic is - <essay_topic>.\n\nThe submitted essay is - <essay>\nNow write feedback on this essay."
)
pt = pl.asset.create(prompt_template)

# Create a dataset
dataset = Dataset(
    name="essay_samples",
    description="Sample essays for evaluation",
    file_path="./essays.jsonl"
)
ds = pl.asset.create(dataset)

# Run an experiment
experiment_config = {
    "model": {
        "type": "ollama",
        "inference_model_deployment": "llama2",
        "embedding_model_deployment": "llama2"
    },
    "prompt_template": {
        "name": pt.name,
        "version": pt.version
    },
    "dataset": {
        "name": ds.name,
        "version": ds.version
    },
    "evaluation": [
        {
            "type": "custom",
            "metric": "length",
            "column_mapping": {
                "response": "$inference"
            }
        }
    ]
}
pl.experiment.run(experiment_config)

# Start the PromptLab Studio to view results
pl.studio.start(8000)
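The `evaluation` entry above maps the model's output (`$inference`) into a `response` column for a custom `length` metric. As a rough, hypothetical sketch (PromptLab's actual custom-evaluator interface may differ), a length-style metric is just a function of that mapped column:

```python
# Hypothetical sketch of a length-style custom metric. PromptLab's real
# evaluator interface may differ; this only illustrates what the metric
# computes on the mapped "response" column.
def length_metric(row: dict) -> int:
    """Return the character length of the model's response."""
    return len(row["response"])

row = {"response": "This essay presents a clear thesis."}
score = length_metric(row)  # number of characters in the response
```

A custom evaluator would typically be registered with PromptLab and invoked once per dataset record; see the documentation for the exact registration API.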

Async Support

PromptLab also supports asynchronous operations:

import asyncio
from promptlab import PromptLab

async def main():
    # Initialize PromptLab
    tracer_config = {
        "type": "sqlite",
        "db_file": "./promptlab.db"
    }
    pl = PromptLab(tracer_config)

    # Run an experiment asynchronously (reuse an experiment_config
    # like the one defined in the Quick Start above)
    await pl.experiment.run_async(experiment_config)

    # Start the PromptLab Studio asynchronously
    await pl.studio.start_async(8000)

# Run the async main function
asyncio.run(main())

Core Concepts 🧩

Tracer

The storage backend that persists assets and experiment results. It currently uses SQLite for simplicity and portability.

Assets

Immutable artifacts used in experiments, with automatic versioning:

  • Prompt Templates: Prompts with optional placeholders for dynamic content
  • Datasets: JSONL files containing evaluation data
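A dataset is a JSONL file with one JSON object per line. Assuming the field names match the placeholders in the prompt template (here `essay_topic` and `essay`, as in the Quick Start), a minimal `essays.jsonl` can be produced like this:

```python
import json

# Each line of the JSONL file is one evaluation record. The field names
# (essay_topic, essay) are assumed to match the <placeholders> in your
# prompt template; adjust them to your own template.
records = [
    {"essay_topic": "Climate change", "essay": "Rising temperatures..."},
    {"essay_topic": "Remote work", "essay": "Working from home..."},
]

with open("essays.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```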

Experiments

Evaluate prompts against datasets using specified models and metrics.

PromptLab Studio

A local web interface for visualizing experiments and comparing results.

Supported Models 🤖

  • Azure OpenAI: Connect to Azure-hosted OpenAI models
  • Ollama: Run experiments with locally-hosted models
  • OpenRouter: Access a wide range of AI models (OpenAI, Anthropic, DeepSeek, Mistral, etc.) via OpenRouter API
  • Custom Models: Integrate your own model implementations
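The Ollama config shown in the Quick Start generalizes to other providers. Only the Ollama shape below is taken from the Quick Start; the Azure OpenAI type string and field names are illustrative assumptions, so check the documentation for your PromptLab version:

```python
# Illustrative model configs. Only the Ollama entry mirrors the Quick
# Start; the Azure OpenAI fields are assumptions for illustration.
ollama_model = {
    "type": "ollama",
    "inference_model_deployment": "llama2",
    "embedding_model_deployment": "llama2",
}

azure_openai_model = {
    "type": "azure_openai",  # assumed type string
    "inference_model_deployment": "gpt-4o",
    "embedding_model_deployment": "text-embedding-3-small",
    # Endpoint and API-key settings are provider-specific; see the docs.
}

for cfg in (ollama_model, azure_openai_model):
    assert "type" in cfg  # every model config declares its provider type
```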


Documentation 📖

For comprehensive documentation, visit our Documentation Page.


CI/CD 🔄

PromptLab uses GitHub Actions for continuous integration and testing:

  • Unit Tests: Run unit tests for all components of PromptLab
  • Integration Tests: Run integration tests that test the interaction between components
  • Performance Tests: Run performance tests to ensure performance requirements are met

The tests are organized into the following directories:

  • tests/unit/: Unit tests for individual components
  • tests/integration/: Tests that involve multiple components working together
  • tests/performance/: Tests that measure performance
  • tests/fixtures/: Common test fixtures and utilities

You can find more information about the CI/CD workflows in the .github/workflows directory.

Contributing 👥

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License 📄

This project is licensed under the MIT License - see the LICENSE file for details.
