
OpenAI Request Runner

A lightweight Python package designed to facilitate parallel processing of OpenAI API requests. The implementation is inspired by the OpenAI cookbook example, but it offers advanced customization and integrates with OpenAI Functions (building on the excellent openai_function_call library), keeping interactions with OpenAI models efficient and organized.

Features

  • Parallel Processing: Handle multiple OpenAI API requests concurrently.
  • Rate Limiting: Adheres to rate limits set by the OpenAI API.
  • Advanced Customization: Allows for detailed input preprocessing and API response postprocessing.
  • OpenAI Functions: Seamlessly integrates with OpenAI Functions for added capabilities.
  • Error Handling: Efficiently manage and log errors, including rate limit errors.
  • Extendable: Easily integrate with custom schemas and other extensions.

Installation

Using pip (wip)

pip install openai_request_runner

Git

pip install git+https://github.com/jphme/openai_request_runner.git

Using poetry

For local development and testing:

poetry install

Usage

Minimal example:

from openai_request_runner import run_openai_requests

example_input = [{"id": 0, "prompt": "What is 1+1?"}]
results = run_openai_requests(
    example_input, system_prompt="Translate input to French"
)

# In a notebook environment you need nest_asyncio; see below.

print(results[0]["content"])
# "Qu'est-ce que 1+1 ?"

See examples/classify_languages.py and examples/translate.py for detailed examples of how to use the package for advanced use cases.

The package allows for extensive customization: you can supply your own preprocessing and postprocessing functions and adjust other parameters to suit your needs, as sketched below.
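
As an illustration only, a customized call could look roughly like the following sketch. The hook names preprocess_function and postprocess_function are assumptions made for this example and may not match the package's actual keyword arguments.

from openai_request_runner import run_openai_requests

def build_prompt(item: dict) -> dict:
    # Hypothetical preprocessing hook: turn a raw record into the prompt
    # that should be sent to the API.
    return {"id": item["id"], "prompt": f"Detect the language of: {item['text']}"}

def extract_label(response: dict) -> dict:
    # Hypothetical postprocessing hook: keep only the fields we care about.
    return {"id": response.get("id"), "label": response["content"].strip()}

example_input = [{"id": 0, "text": "Guten Morgen!"}]

# NOTE: preprocess_function and postprocess_function are illustrative
# assumptions, not the package's confirmed signature.
results = run_openai_requests(
    example_input,
    system_prompt="You are a language classifier.",
    preprocess_function=build_prompt,
    postprocess_function=extract_label,
)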

Refer to the inline documentation and docstrings in the code for detailed information on each function and its parameters.

Run inside a notebook

If you want to run openai_request_runner inside a notebook, use nest_asyncio like this:

import nest_asyncio
nest_asyncio.apply()
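
For example, a notebook cell that combines the patch with the minimal example from above might look like this (a sketch, not additional API surface):

import nest_asyncio

# Patch the notebook's already-running event loop so the runner's
# asyncio code can execute inside it.
nest_asyncio.apply()

from openai_request_runner import run_openai_requests

results = run_openai_requests(
    [{"id": 0, "prompt": "What is 1+1?"}],
    system_prompt="Translate input to French",
)
print(results[0]["content"])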

Run Tests

poetry run pytest tests/

Contributing

Contributions are welcome! Please open an issue if you encounter any problems or would like to suggest enhancements. Pull requests are also appreciated.

License

MIT
