Python library for asynchronous interactions with the OpenAI API, enabling concurrent request handling. It simplifies building scalable, AI-powered applications by offering efficient, rate-limited access to OpenAI services. Perfect for developers seeking to integrate OpenAI's capabilities with minimal overhead.

Project description

Concurrent OpenAI Manager

The Concurrent OpenAI Manager is a pure Python library meticulously designed for developers seeking an optimal integration with OpenAI's APIs. This library is engineered to handle API requests with efficiency, ensuring compliance with rate limits and managing system resources effectively, all while providing transparent cost estimations for OpenAI services.

Key features

Rate limiting

Central to the library is a carefully crafted rate limiter, capable of managing the number of requests and tokens per minute. This ensures your application stays within OpenAI's usage policies, avoiding rate limit violations and potential service disruptions.
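The idea behind a requests-and-tokens-per-minute limiter can be sketched as a sliding window over recent dispatches. The class below is an illustrative toy, not the library's actual implementation; the class name and 60-second window are assumptions for the example.

```python
import asyncio
import time


class MinuteRateLimiter:
    """Toy sliding-window limiter for requests and tokens per minute.

    Illustrative sketch only; the library's internals may differ.
    """

    def __init__(self, max_requests_per_minute: int, max_tokens_per_minute: int):
        self.max_rpm = max_requests_per_minute
        self.max_tpm = max_tokens_per_minute
        self.events: list[tuple[float, int]] = []  # (timestamp, tokens used)

    def _prune(self, now: float) -> None:
        # Drop events that have aged out of the 60-second window.
        self.events = [(t, n) for t, n in self.events if now - t < 60]

    async def acquire(self, tokens: int) -> None:
        # Block until both the request budget and the token budget have room.
        while True:
            now = time.monotonic()
            self._prune(now)
            used_tokens = sum(n for _, n in self.events)
            if len(self.events) < self.max_rpm and used_tokens + tokens <= self.max_tpm:
                self.events.append((now, tokens))
                return
            await asyncio.sleep(0.1)
```

Each caller declares an estimated token count before sending a request, so both budgets are enforced before the request ever leaves the machine.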

Throttled Request Dispatching

The throttling mechanism is designed to prevent sudden surges of requests, spreading them evenly over time. This ensures a steady and predictable load on OpenAI's endpoints, contributing to responsible utilization of API resources and avoiding the HTTP 429 (Too Many Requests) errors that can occur when a large batch of requests is dispatched all at once.
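Spreading requests evenly amounts to enforcing a minimum interval between dispatch times. A minimal sketch of that pattern, assuming a hypothetical Throttler class (not the library's API):

```python
import asyncio
import time


class Throttler:
    """Allow at most one dispatch per `1 / requests_per_second` seconds.

    Illustrative sketch; the library's internals may differ.
    """

    def __init__(self, requests_per_second: float):
        self.interval = 1.0 / requests_per_second
        self._next_slot = time.monotonic()
        self._lock = asyncio.Lock()

    async def wait(self) -> None:
        async with self._lock:
            now = time.monotonic()
            # Reserve the next free dispatch slot, then sleep until it arrives.
            slot = max(now, self._next_slot)
            self._next_slot = slot + self.interval
            delay = slot - now
        if delay > 0:
            await asyncio.sleep(delay)
```

Calling `await throttler.wait()` before each request turns a burst of concurrent coroutines into an evenly spaced stream.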

Semaphore for Concurrency Control

To manage local system resources and limit parallelism, the library incorporates a semaphore mechanism. Developers can specify the maximum number of concurrent operations, keeping resource utilization balanced and the application responsive. This is useful when you need to protect local resources (such as database connections or memory) or want to cap parallelism to preserve a responsive user experience. By fine-tuning the semaphore value, you control how many coroutines are active on the event loop at any given time.
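The semaphore pattern itself is a few lines of asyncio. The helper below is a hypothetical illustration of the technique, not a function exported by the library:

```python
import asyncio


async def gather_with_limit(coros, max_concurrent: int):
    """Run coroutines with at most `max_concurrent` active at once.

    Hypothetical helper illustrating the semaphore pattern.
    """
    semaphore = asyncio.Semaphore(max_concurrent)

    async def guarded(coro):
        # Each coroutine must acquire a slot before it runs.
        async with semaphore:
            return await coro

    return await asyncio.gather(*(guarded(c) for c in coros))
```

With `max_concurrent=5`, even a thousand scheduled coroutines will only have five awaiting a response at any moment.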

Cost Estimation

A notable feature of the Concurrent OpenAI Manager is its built-in cost estimation. This functionality provides users with detailed insights into the cost implications of their API requests, including a breakdown of prompt and completion tokens used. Such transparency empowers users to manage their budget effectively and optimize their use of OpenAI's APIs.
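The arithmetic behind such an estimate is straightforward: cost scales with prompt and completion token counts at per-model rates. A minimal sketch, with prices passed in explicitly because per-model rates change over time (the values in the usage comment are placeholders, not current OpenAI pricing):

```python
def estimate_cost(
    prompt_tokens: int,
    completion_tokens: int,
    prompt_price_per_1k: float,
    completion_price_per_1k: float,
) -> float:
    """Estimate the cost of one request from its token counts.

    Prices are per 1,000 tokens; callers supply current per-model rates.
    """
    prompt_cost = (prompt_tokens / 1000) * prompt_price_per_1k
    completion_cost = (completion_tokens / 1000) * completion_price_per_1k
    return prompt_cost + completion_cost


# e.g. 1,000 prompt tokens and 500 completion tokens at placeholder rates:
# estimate_cost(1000, 500, prompt_price_per_1k=0.03, completion_price_per_1k=0.06)
```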

Getting started

Integrating the Concurrent OpenAI Manager into your project is straightforward:

$ pip install concurrent-openai

Usage

  1. Create a .env file in your project directory.
  2. Add an environment variable named OPENAI_API_KEY containing your OpenAI API key.
  3. Test it out:
import asyncio

from concurrent_openai import process_completion_requests


async def main():
    # `await` is only valid inside a coroutine, so wrap the call
    # in an async entry point and run it with asyncio.run().
    results = await process_completion_requests(
        prompts=[{"role": "user", "content": "Knock, knock!"}],
        model="gpt-4-0613",
        temperature=0.7,
        max_tokens=150,
        max_concurrent_requests=5,
        token_safety_margin=10,
    )
    for result in results:
        if result:
            print(result)
        else:
            print("Error processing request.")


asyncio.run(main())

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

concurrent_openai-0.2.0.tar.gz (9.1 kB view details)

Uploaded Source

Built Distribution

concurrent_openai-0.2.0-py3-none-any.whl (10.5 kB view details)

Uploaded Python 3

File details

Details for the file concurrent_openai-0.2.0.tar.gz.

File metadata

  • Download URL: concurrent_openai-0.2.0.tar.gz
  • Upload date:
  • Size: 9.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.9.15 Darwin/23.4.0

File hashes

Hashes for concurrent_openai-0.2.0.tar.gz
Algorithm Hash digest
SHA256 5dbeadf8550b026d62e3f62056a2b8ef1ba54582cd945e326c90b15a40619ef4
MD5 a29099720fa338d1a49f0c8e5e92e052
BLAKE2b-256 418a8cc7e27db01970c4ab2060f2fbba33bcda289b09ed49b9e9af3ca1baaac9

See more details on using hashes here.
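A downloaded archive can be checked against the hashes above with Python's standard hashlib module; a minimal sketch:

```python
import hashlib


def sha256_of_file(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Compare the returned hex string to the SHA256 value listed in the table; a mismatch means the file was corrupted or tampered with in transit.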

File details

Details for the file concurrent_openai-0.2.0-py3-none-any.whl.

File metadata

File hashes

Hashes for concurrent_openai-0.2.0-py3-none-any.whl
Algorithm Hash digest
SHA256 5a9f8df7af20208c40dc661c4b8a1cc759bcc1c2eab6e85c2b47c15fb4d9f17b
MD5 7ab40142f289e28f70bcd5513d6898aa
BLAKE2b-256 bd63af66fa8559207ba1b319b99d35d871c76700f794a0eb60c0755fdf984277

See more details on using hashes here.
