# GPT Batcher

A simple tool to batch-process messages using OpenAI's GPT models. `GPTBatcher` dispatches multiple requests in parallel and handles timeouts and retries for you, so large batches complete quickly and failed requests are tracked rather than silently lost.

## Installation

To get started with `GPTBatcher`, clone this repository to your local machine. Navigate to the repository directory and install the required dependencies (if any) by running:

```bash
pip install -r requirements.txt
```
## Quick Start

To use `GPTBatcher`, instantiate it with your OpenAI API key and the name of the model you wish to use. Here's a quick guide:

### Handling Message Lists

This example demonstrates how to send a list of questions and receive answers:

```python
from gpt_batch.batcher import GPTBatcher

# Initialize the batcher
batcher = GPTBatcher(api_key='your_key_here', model_name='gpt-3.5-turbo-1106')

# Send a list of messages and receive answers
result = batcher.handle_message_list(['question_1', 'question_2', 'question_3', 'question_4'])
print(result)
# Expected output: ["answer_1", "answer_2", "answer_3", "answer_4"]
```
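If some requests still fail after retrying, their positions are tracked in `miss_index` (see Configuration below). A hypothetical post-processing step, assuming failed slots come back as `None` and you want to retry just those inputs:

```python
questions = ['question_1', 'question_2', 'question_3', 'question_4']
# Hypothetical result where the second request failed (None marks a miss):
answers = ['answer_1', None, 'answer_3', 'answer_4']

# Recover the indices of failed requests and build a retry queue
miss_index = [i for i, a in enumerate(answers) if a is None]
retry_queue = [questions[i] for i in miss_index]
print(retry_queue)  # ['question_2']
```

This keeps answers aligned with their original questions, which matters because results are returned in input order.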

### Handling Embedding Lists

This example shows how to get embeddings for a list of strings:

```python
from gpt_batch.batcher import GPTBatcher

# Reinitialize the batcher for embeddings
batcher = GPTBatcher(api_key='your_key_here', model_name='text-embedding-3-small')

# Send a list of strings and get their embeddings
result = batcher.handle_embedding_list(['question_1', 'question_2', 'question_3', 'question_4'])
print(result)
# Expected output: ["embedding_1", "embedding_2", "embedding_3", "embedding_4"]
```
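Once the embedding vectors come back, a common next step is comparing them. A minimal cosine-similarity helper in plain Python (generic vector math, not part of `GPTBatcher` itself):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score ~1.0; orthogonal vectors score ~0.0
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```

Pairing this with `handle_embedding_list` lets you rank a batch of strings by semantic similarity to a query string.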

## Configuration

The `GPTBatcher` class can be customized with several parameters to adjust its performance and behavior:

- `api_key` (str): Your OpenAI API key.
- `model_name` (str): Identifier for the GPT model version to use; defaults to `'gpt-3.5-turbo-1106'`.
- `system_prompt` (str): System message used to seed the model; defaults to empty.
- `temperature` (float): Sampling temperature controlling response randomness; defaults to 1.
- `num_workers` (int): Number of parallel workers for request handling; defaults to 64.
- `timeout_duration` (int): Timeout for API responses in seconds; defaults to 60.
- `retry_attempts` (int): Number of times a failed request is retried; defaults to 2.
- `miss_index` (list): Tracks the indices of requests that failed to process correctly.

For more detailed documentation on the parameters and methods, refer to the class docstring.
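The interplay of `num_workers`, `timeout_duration`, and `miss_index` can be pictured as a thread pool that dispatches all requests at once and records the positions that fail. The sketch below is only an illustration of that pattern, not the library's actual implementation; `call_model` is a hypothetical stand-in for the real API call:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(message):
    # Hypothetical stand-in for the real OpenAI request.
    return f"answer to {message}"

def batch_dispatch(messages, num_workers=64, timeout_duration=60):
    """Dispatch all messages in parallel; record indices that time out or fail."""
    results = [None] * len(messages)
    miss_index = []
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        # Submit every message up front so up to num_workers run concurrently
        futures = {pool.submit(call_model, m): i for i, m in enumerate(messages)}
        for future, i in futures.items():
            try:
                results[i] = future.result(timeout=timeout_duration)
            except Exception:
                # A real batcher would retry up to retry_attempts times first.
                miss_index.append(i)
    return results, miss_index

answers, missed = batch_dispatch(['q1', 'q2'])
print(answers)  # ['answer to q1', 'answer to q2']
print(missed)   # []
```

Raising `num_workers` increases throughput until you hit the API's rate limits, at which point more retries (and a longer `timeout_duration`) matter more than more threads.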

## License

Specify your licensing information here.



## Download files

### Source Distribution

`gpt_batch-0.1.2.tar.gz` (5.0 kB)

- Uploaded via: twine/5.0.0 on CPython/3.9.19
- Trusted Publishing: No

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | febd7a63913d6160c75b5b4265f919b1e7fc99a49e5a4bddc4c6bf848e4cbbfd |
| MD5 | 9cf2715393240af5b3daa6362590d6d2 |
| BLAKE2b-256 | c217d2f4cfa7b733ad4f1e4d27750002ef7ad5f282d32934bfdeea8c7519b9aa |

### Built Distribution

`gpt_batch-0.1.2-py3-none-any.whl` (5.8 kB, Python 3)

- Uploaded via: twine/5.0.0 on CPython/3.9.19
- Trusted Publishing: No

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | ed17d17c605561ac6af497f9db70b91c18b42f3f6b6421710e448cf5ddcb6155 |
| MD5 | 160a99f9edbdeb9e0e3f909bd26505ee |
| BLAKE2b-256 | 5ceae478cb466a6c518f21640bf4b8e6f148ecd6dc0be9e9a5be459e3f463c20 |
