TaskLLM
A library for optimizing LLM tasks, including prompt engineering and bandit-based training.
Project Overview
TaskLLM is a Python library designed to help developers optimize their interactions with Large Language Models (LLMs). It provides tools for:
- Instrumenting LLM tasks to track inputs, outputs, and quality metrics
- Making LLM calls with both simple text responses and structured outputs
- Optimizing prompts through bandit-based training algorithms
The library is particularly useful for developers who want to:
- Systematically improve prompt performance
- Track and analyze LLM interactions
- Convert unstructured LLM outputs into structured data
- Implement quality assessment for LLM outputs
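To make the instrumentation idea concrete, here is a minimal, library-independent sketch of what "tracking inputs, outputs, and quality metrics" amounts to: a decorator that logs each call. This illustrates the pattern only; it is not TaskLLM's implementation, and the names `instrument` and `LOG` are invented for the sketch:

```python
import functools
import time

LOG = []  # in-memory record of task runs

def instrument(task_name):
    """Record inputs, outputs, and latency for each call to the wrapped task."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            LOG.append({
                "task": task_name,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": result,
                "seconds": time.perf_counter() - start,
            })
            return result
        return wrapper
    return decorator

@instrument("greet")
def greet(name):
    return f"Hello, {name}!"

greet("world")
print(LOG[0]["task"], LOG[0]["output"])  # greet Hello, world!
```

TaskLLM's `instrument_task` decorator (see Quality Labeling below) plays this role, adding persistence and quality scoring on top.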
Installation
TaskLLM requires Python 3.11 or higher.
# Install from PyPI
pip install taskllm
# Or install from source
git clone https://github.com/bllchmbrs/taskllm.git
cd taskllm
pip install -e .
Quick Start
See one of the examples below for a quick start; each shows how to train a prompt for a specific task.
A video walkthrough is coming shortly!
Core Components
Optimization
The BanditTrainer class helps you find the best prompts for your tasks:
from taskllm.optimizer.methods import BanditTrainer
from taskllm.optimizer.data import DataSet, Row
trainer = BanditTrainer(
    all_rows=your_dataset,
    task_guidance="your task description",
    keys=["input_field1", "input_field2"],
    expected_output_type=YourOutputModel,
    scoring_function=your_scoring_function,
)

# Run inside an async context (e.g. via asyncio.run)
await trainer.train()
best_prompt = await trainer.get_best_prompt()
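The `scoring_function` turns a labeled row plus a model output into a numeric reward the trainer can maximize. Its exact signature depends on the library version; the sketch below is an illustration only, assuming the function receives the input row and the parsed output (both the signature and the `label` field are assumptions):

```python
def exact_match_score(row: dict, output) -> float:
    """Reward 1.0 when the predicted label matches the row's ground truth."""
    expected = row["label"]                        # ground-truth label stored with the row
    predicted = getattr(output, "label", output)   # structured output object or raw string
    return 1.0 if predicted == expected else 0.0

row = {"input_field1": "some text", "label": "positive"}
print(exact_match_score(row, "positive"))  # 1.0
print(exact_match_score(row, "negative"))  # 0.0
```

Any function with this shape works: the trainer only needs higher scores to mean better outputs.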
Examples
The repository includes several examples demonstrating how to use TaskLLM in different scenarios:
Jokes Example
Determines whether jokes are funny using a bandit-based prompt optimizer:
# From examples/jokes/run.py
trainer = BanditTrainer(
    all_rows=dataset,
    task_guidance="write a prompt that determines whether a joke is funny based on the category of joke",
    keys=["joke"],
    expected_output_type=IsJokeFunny,
    scoring_function=funny_scoring_function,
)
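For intuition about the "bandit-based" part: the optimizer treats each candidate prompt as an arm and steers trials toward prompts that score well. A self-contained epsilon-greedy sketch of that loop, for illustration only (TaskLLM's trainer is more sophisticated than this):

```python
import random

def epsilon_greedy(prompts, reward_fn, rounds=200, epsilon=0.1, seed=0):
    """Return the prompt with the best observed average reward."""
    rng = random.Random(seed)
    counts, totals = {}, {}
    for p in prompts:                  # pull every arm once to initialize
        counts[p] = 1
        totals[p] = reward_fn(p)
    avg = lambda p: totals[p] / counts[p]
    for _ in range(rounds):
        if rng.random() < epsilon:
            prompt = rng.choice(prompts)    # explore a random prompt
        else:
            prompt = max(prompts, key=avg)  # exploit the current best
        counts[prompt] += 1
        totals[prompt] += reward_fn(prompt)
    return max(prompts, key=avg)

# Toy deterministic rewards standing in for average joke-scoring accuracy
rates = {"prompt A": 0.4, "prompt B": 0.8}
best = epsilon_greedy(list(rates), lambda p: rates[p])
print(best)  # prompt B
```

The reward here would come from your scoring function evaluated over dataset rows.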
Tweet Sentiment Analysis
Analyzes the sentiment of tweets (positive, negative, or neutral):
# From examples/tweet_sentiment/run.py
trainer = BanditTrainer(
    all_rows=dataset,
    task_guidance="what is the sentiment of this tweet?",
    keys=["tweet"],
    expected_output_type=TweetSentiment,
    scoring_function=sentiment_scoring_function,
)
Starbucks Reviews
Rates Starbucks reviews on a scale of 1-5:
# From examples/starbucks/run.py
trainer = BanditTrainer(
    all_rows=dataset,
    task_guidance="determine the rating of this review",
    keys=["review", "name", "location", "date"],
    expected_output_type=StarbucksReviewRating,
    scoring_function=sentiment_scoring_function,
    prompt_mode=PromptMode.ADVANCED,
)
To run any of these examples:
cd examples/[example_directory]
python run.py
Advanced Usage
Configuration Options
You can customize the LLM configuration:
from taskllm.ai import LLMConfig
custom_config = LLMConfig(
    model="gpt-4o",
    temperature=0.7,
    max_tokens=2000,
    top_p=0.95,
    frequency_penalty=0.5,
    presence_penalty=0.5,
)
Quality Labeling
Enable interactive quality assessment for your tasks:
from taskllm.instrument import instrument_task

@instrument_task("your_task", enable_quality_labeling=True)
def your_function():
    # After execution, you'll be prompted to rate the quality
    ...
Caching Strategies
TaskLLM automatically caches LLM responses to save time and costs:
# Disable caching for specific calls
response = await simple_llm_call(
    messages=[...],
    config=config,
    use_cache=False,
)
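Conceptually, the cache keys each response on the request content, so repeated identical calls are served locally instead of hitting the API. A minimal sketch of that pattern (the actual storage and key scheme in TaskLLM may differ; `cached_call` and `fake_llm` are invented for illustration):

```python
import hashlib
import json

_cache: dict = {}

def cache_key(messages, model):
    """Stable key derived from the request content."""
    payload = json.dumps({"messages": messages, "model": model}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def cached_call(messages, model, call_fn, use_cache=True):
    key = cache_key(messages, model)
    if use_cache and key in _cache:
        return _cache[key]                   # cache hit: skip the API call
    response = call_fn(messages, model)      # cache miss: make the real call
    _cache[key] = response
    return response

calls = []
def fake_llm(messages, model):
    calls.append(1)
    return "cached response"

cached_call([{"role": "user", "content": "hi"}], "gpt-4o", fake_llm)
cached_call([{"role": "user", "content": "hi"}], "gpt-4o", fake_llm)
print(len(calls))  # 1 — the second call was served from the cache
```

Passing `use_cache=False` simply bypasses the lookup, which is why disabling it forces a fresh call.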
Custom Prompt Modes
Use advanced prompt modes for more sophisticated optimization:
from taskllm.optimizer.prompt.meta import PromptMode
trainer = BanditTrainer(
    # ...other parameters
    prompt_mode=PromptMode.ADVANCED,
)
API Reference
Key Modules
- taskllm.instrument: Functions for tracking and logging LLM tasks
- taskllm.ai: Interface for making LLM calls
- taskllm.optimizer: Tools for optimizing prompts
- taskllm.optimizer.data: Data structures for optimization
- taskllm.optimizer.methods: Optimization algorithms
- taskllm.optimizer.prompt: Prompt management and templates
Contributing
Development Setup
- Clone the repository:
git clone https://github.com/bllchmbrs/taskllm.git
cd taskllm
- Create a virtual environment:
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
- Install development dependencies:
pip install -e ".[dev]"
Testing
Run tests using:
python -m pytest
Contribution Guidelines
- Fork the repository
- Create a feature branch
- Add your changes
- Run tests
- Submit a pull request
License
MIT License
For more information, check out the examples directory or open an issue on GitHub.
File details
Details for the file taskllm-0.1.0.tar.gz.
File metadata
- Download URL: taskllm-0.1.0.tar.gz
- Size: 29.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2f66e7daf4e2091561453da3a12b5b831bef7bfe1edfe394f206ad2f44cb3fb1 |
| MD5 | 0826f9b21c2345683c419b6fafe70368 |
| BLAKE2b-256 | f0c806703c2a0152fc370bc98280c2ffdf5d4dd9d9bebf5890fb78ab1b9e5a86 |
Provenance
The following attestation bundles were made for taskllm-0.1.0.tar.gz:
Publisher: workflow.yml on bllchmbrs/taskllm
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: taskllm-0.1.0.tar.gz
- Subject digest: 2f66e7daf4e2091561453da3a12b5b831bef7bfe1edfe394f206ad2f44cb3fb1
- Sigstore transparency entry: 211620875
- Permalink: bllchmbrs/taskllm@3f9be8f5c6121559956deb7bec2720525acc4a34
- Branch / Tag: refs/heads/main
- Owner: https://github.com/bllchmbrs
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: workflow.yml@3f9be8f5c6121559956deb7bec2720525acc4a34
- Trigger Event: push
File details
Details for the file taskllm-0.1.0-py3-none-any.whl.
File metadata
- Download URL: taskllm-0.1.0-py3-none-any.whl
- Size: 30.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d0d985be47770572616ab914f2858919f7d28b6e2a486d3a6a680dfb3b9ed76d |
| MD5 | ef9388503cc5385518fbf04f3d26b7b8 |
| BLAKE2b-256 | c459eae88833e8efac424b26c47d5fd22eac83c718c2c4c4ad793891ee50ebf7 |
Provenance
The following attestation bundles were made for taskllm-0.1.0-py3-none-any.whl:
Publisher: workflow.yml on bllchmbrs/taskllm
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: taskllm-0.1.0-py3-none-any.whl
- Subject digest: d0d985be47770572616ab914f2858919f7d28b6e2a486d3a6a680dfb3b9ed76d
- Sigstore transparency entry: 211620882
- Permalink: bllchmbrs/taskllm@3f9be8f5c6121559956deb7bec2720525acc4a34
- Branch / Tag: refs/heads/main
- Owner: https://github.com/bllchmbrs
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: workflow.yml@3f9be8f5c6121559956deb7bec2720525acc4a34
- Trigger Event: push