Slim Runner for batched OpenAI Requests

Project description

OpenAI Request Runner


A lightweight Python package for parallel processing of OpenAI API requests. It is inspired by the OpenAI cookbook example but offers advanced customization capabilities and integration with OpenAI functions (building on the openai_function_call library), ensuring efficient and organized interactions with OpenAI models.

Features

  • Parallel Processing: Handle multiple OpenAI API requests concurrently.
  • Rate Limiting: Adheres to rate limits set by the OpenAI API.
  • Advanced Customization: Allows for detailed input preprocessing and API response postprocessing.
  • OpenAI Functions: Seamlessly integrates with OpenAI Functions for added capabilities.
  • Error Handling: Efficiently manage and log errors, including rate limit errors.
  • Extendable: Easily integrate with custom schemas and other extensions.
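The parallel processing and rate limiting described above can be sketched with the standard library alone. The snippet below illustrates the general pattern (a semaphore capping in-flight requests), not the package's internals; `fake_api_call` and `run_batch` are hypothetical stand-ins for a real OpenAI request and the runner:

```python
import asyncio

async def fake_api_call(prompt: str) -> str:
    # Stand-in for a real OpenAI API request.
    await asyncio.sleep(0.01)  # simulate network latency
    return f"response to: {prompt}"

async def run_batch(prompts: list[str], max_concurrent: int = 5) -> list[str]:
    # The semaphore caps how many requests are in flight at once,
    # so a large batch is processed concurrently without blowing
    # past a rate limit.
    sem = asyncio.Semaphore(max_concurrent)

    async def limited(prompt: str) -> str:
        async with sem:
            return await fake_api_call(prompt)

    return await asyncio.gather(*(limited(p) for p in prompts))

results = asyncio.run(run_batch([f"prompt {i}" for i in range(10)]))
print(results[0])  # "response to: prompt 0"
```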

Installation

Using pip (work in progress)

pip install openai_request_runner

Git

pip install git+https://github.com/jphme/openai_request_runner.git

Using poetry

For local development and testing:

poetry install

Usage

Minimal example:

import asyncio
from openai_request_runner import process_api_requests_from_list

example_input = [{"id": 0, "prompt": "What is 1+1?"}]
results = asyncio.run(
    process_api_requests_from_list(
        example_input, system_prompt="Translate input to French"
    )
)
# or, inside a notebook with a running event loop:
# results = await process_api_requests_from_list(...)

print(results[0]["content"])
# "Qu'est-ce que 1+1 ?"

See examples/classify_languages.py and examples/translate.py for detailed examples of how to use the package for advanced use cases.

The package allows for extensive customization. You can set your desired preprocessing function, postprocessing function, and other parameters to suit your specific needs.
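As a sketch of what such preprocessing and postprocessing callables might look like (the function and parameter names below are illustrative, not the package's actual API; refer to the docstrings for the real signatures): a preprocessing function turns an input record into chat-style messages, and a postprocessing function reduces the raw API response to just what you need.

```python
def preprocess(item: dict) -> list[dict]:
    # Build a chat-style message list from one input record.
    return [{"role": "user", "content": item["prompt"]}]

def postprocess(response: dict) -> dict:
    # Keep only the id and the assistant's text from a
    # chat-completion-shaped response.
    return {
        "id": response["id"],
        "content": response["choices"][0]["message"]["content"],
    }

# Applied to a mock response for illustration:
mock_response = {
    "id": 0,
    "choices": [{"message": {"role": "assistant", "content": "Deux."}}],
}
print(preprocess({"id": 0, "prompt": "What is 1+1?"}))
print(postprocess(mock_response))  # {'id': 0, 'content': 'Deux.'}
```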

Refer to the inline documentation and docstrings in the code for detailed information on each function and its parameters.

Run inside a notebook

If you want to run openai_request_runner inside a notebook, use nest_asyncio like this:

import nest_asyncio
nest_asyncio.apply()

Run Tests

poetry run pytest tests/

Contributing

Contributions are welcome! Please open an issue if you encounter any problems or would like to suggest enhancements. Pull requests are also appreciated.

License

MIT



Download files

Download the file for your platform.

Source Distribution

openai_request_runner-0.0.8.tar.gz (12.0 kB, source)

Built Distribution

openai_request_runner-0.0.8-py3-none-any.whl (12.2 kB, Python 3)

File details

Details for the file openai_request_runner-0.0.8.tar.gz.

File metadata

  • Download URL: openai_request_runner-0.0.8.tar.gz
  • Size: 12.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.0 CPython/3.10.13 Linux/6.2.0-1015-azure

File hashes

Hashes for openai_request_runner-0.0.8.tar.gz:

  • SHA256: aa020b38fb35598e684ecfd3e070e5a36b0cfc43f943fd11dc9f253e31f84e95
  • MD5: b21cfc829ed9064c29786a1559094fc9
  • BLAKE2b-256: e0cb06fc39a56b01814cf124217edc8bdbf3e94db7b2bc265de5fbd2c5347380


File details

Details for the file openai_request_runner-0.0.8-py3-none-any.whl.

File metadata

File hashes

Hashes for openai_request_runner-0.0.8-py3-none-any.whl:

  • SHA256: c33ac4ac539dc9cc00dae5a3a83e5eced0688776460f3c5019c0bb5d5085f7b5
  • MD5: 2a6985bd50f6736e8401d256e1cb83b7
  • BLAKE2b-256: e0f6eadc52438c558f782b4c9005672256c89e0e0009c150e7d545de9711bb27

