A library to parametrize multiple API calls for LLMs
LLM Parametrizer
A Python script to generate parametrized variations of prompts and get results from API calls to LLMs.
Currently, only OpenAI's API is supported.
Rationale
LLM responses are unpredictable, and multiple tries are required to achieve the desired results when experimenting with prompts. This is a tedious process and difficult to document if done haphazardly.
This script aims to ease the process of experimenting with prompts. More importantly, it aims to document the process automatically, making it easy to keep track of which prompts have which effects, while speeding things up through parametrization and asynchronous API calls.
Dependencies
- openai
- python-dotenv
Features
Test an API call with various parametrized values:
- Prompts
- Roles
- Models
- Temperatures
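The core idea is a Cartesian product of every supplied value: each prompt is combined with each role, model, and temperature. The snippet below is not the library's implementation, just a minimal standalone sketch of that cross-product idea using itertools.product with hypothetical example values:
from itertools import product

# Hypothetical example values; the library collects these via add_prompts(),
# add_models(), and add_temperatures().
prompts = ["Write a single letter of your choice"]
models = ["gpt-4o", "gpt-3.5-turbo"]
temperatures = [0.5, 1.0, 2]

# Every prompt x model x temperature combination becomes one API call (1 x 2 x 3 = 6).
for prompt, model, temperature in product(prompts, models, temperatures):
    print(f"{model} @ T={temperature}: {prompt}")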
Parameters not yet implemented include:
- Seeds
- Frequency penalties
- Presence penalties
- Top p
Usage
First, initialize LLMParametrizer:
from llm_parametrizer import GPT_MODEL, LLMParametrizer
prmtrzr = LLMParametrizer()
Make sure you have a .env file with the OPEN_AI_API_KEY variable set to your OpenAI API key:
OPEN_AI_API_KEY=sk-proj-<your API key here>
prmtrzr.initialize_OpenAI()
Alternatively, pass your OpenAI key when initializing.
prmtrzr.initialize_OpenAI("sk-proj-<your API key here>")
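Since python-dotenv is a dependency, the .env lookup presumably follows the usual pattern sketched below (assuming the variable name OPEN_AI_API_KEY). This standalone snippet can be handy for checking that your key is picked up before initializing:
import os
from dotenv import load_dotenv

load_dotenv()  # read key=value pairs from a local .env file into the environment
key = os.getenv("OPEN_AI_API_KEY")
print("Key loaded" if key else "OPEN_AI_API_KEY is not set")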
You can then add prompts, models, and temperatures:
prmtrzr.add_prompts("Write a single letter of your choice")
prmtrzr.add_models(GPT_MODEL.GPT_4o.value, GPT_MODEL.GPT_3_5_T.value)
prmtrzr.add_temperatures(0.5, 1.0, 2)
The above code generates 6 parametrized API calls (1 prompt times 2 models times 3 temperatures).
With prmtrzr.show_parameters()
you can print the parameters that have been added so far:
Prompt user: 'Write a single letter of your choice' Prompt system: 'You are a helpful assistant.' Temperature: 0.5 Model: gpt-4o
Prompt user: 'Write a single letter of your choice' Prompt system: 'You are a helpful assistant.' Temperature: 1.0 Model: gpt-4o
Prompt user: 'Write a single letter of your choice' Prompt system: 'You are a helpful assistant.' Temperature: 2 Model: gpt-4o
Prompt user: 'Write a single letter of your choice' Prompt system: 'You are a helpful assistant.' Temperature: 0.5 Model: gpt-3.5-turbo
Prompt user: 'Write a single letter of your choice' Prompt system: 'You are a helpful assistant.' Temperature: 1.0 Model: gpt-3.5-turbo
Prompt user: 'Write a single letter of your choice' Prompt system: 'You are a helpful assistant.' Temperature: 2 Model: gpt-3.5-turbo
Use prmtrzr.show_parameters(show_raw=True) to output the full JSON that would be passed to the OpenAI API call. Use prmtrzr.return_parameters() or prmtrzr.return_parameters(return_raw=True) to return the values instead of printing them.
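The exact structure returned by return_parameters() is not shown here; assuming it mirrors the printed listing, one convenient pattern is to write it to a file so each experiment is documented alongside its results:
# Assumption: return_parameters() hands back the same entries that
# show_parameters() prints; its exact type (string vs. list) is not shown here.
params = prmtrzr.return_parameters()
with open("parameters_2024-05-18.txt", "w", encoding="utf-8") as f:
    f.write(str(params))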
Finally, you can run the parameterized API calls with:
results = prmtrzr.run()
The run method returns a prettified string that includes the responses, so printing results with print(results) looks like this:
Prompt: 'Write a single letter of your choice' Temperature: 0.5 Model: gpt-4o Date: 2024-05-18-19-59-01 Response: A
Prompt: 'Write a single letter of your choice' Temperature: 1.0 Model: gpt-4o Date: 2024-05-18-19-59-01 Response: A
Prompt: 'Write a single letter of your choice' Temperature: 2 Model: gpt-4o Date: 2024-05-18-19-59-01 Response: L
Prompt: 'Write a single letter of your choice' Temperature: 0.5 Model: gpt-3.5-turbo Date: 2024-05-18-19-59-01 Response: A
Prompt: 'Write a single letter of your choice' Temperature: 1.0 Model: gpt-3.5-turbo Date: 2024-05-18-19-59-01 Response: G
Prompt: 'Write a single letter of your choice' Temperature: 2 Model: gpt-3.5-turbo Date: 2024-05-18-19-59-01 Response: E
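Because run() returns an ordinary string, the report is easy to persist for later comparison; for example (the filename chosen here is arbitrary):
# Save the prettified report returned by run() for later comparison
# (the filename is arbitrary).
with open("run_2024-05-18.txt", "w", encoding="utf-8") as f:
    f.write(results)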
To get the raw data use:
results = prmtrzr.run(return_raw=True)
To output a CSV file (viewable in Google Sheets, for example) use:
results = prmtrzr.run(output_csv=True)
This saves a CSV file that looks like the following:
Prompt,Temperature,Model,Time,Response
Write a single letter of your choice,0.5,gpt-4o,2024-05-18-19-59-01,A
Write a single letter of your choice,1.0,gpt-4o,2024-05-18-19-59-01,A
Write a single letter of your choice,2,gpt-4o,2024-05-18-19-59-01,L
Write a single letter of your choice,0.5,gpt-3.5-turbo,2024-05-18-19-59-01,A
Write a single letter of your choice,1.0,gpt-3.5-turbo,2024-05-18-19-59-01,G
Write a single letter of your choice,2,gpt-3.5-turbo,2024-05-18-19-59-01,E
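The headers shown above (Prompt, Temperature, Model, Time, Response) make the file straightforward to post-process with the standard csv module; a minimal sketch, assuming the file is saved as results.csv (the actual filename written by run(output_csv=True) is not documented here):
import csv

# Adjust "results.csv" to whatever filename run(output_csv=True) writes.
with open("results.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(f"{row['Model']} @ T={row['Temperature']}: {row['Response']}")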
License
This project is licensed under the terms of the MIT license.
Todo
- Add parameters:
- Seeds
- Frequency penalties
- Presence penalties
- Top p
- Implement support for LLM APIs from services other than OpenAI
- Implement JSON mode and function calling.
- DeepEval integration: https://github.com/confident-ai/deepeval