
Load balancer for asynchronous ChatGPT requests to the APIs of OpenAI and Azure (if configured)

Project description

Load Balancing ChatGPT (LBGPT)

Enhance your ChatGPT API experience with Load Balancing ChatGPT (LBGPT), a wrapper around OpenAI's API designed to boost performance, enable caching, and provide seamless integration with Azure's OpenAI API.

This tool significantly reduces response times by interacting with the OpenAI API asynchronously and caching results efficiently. It also retries automatically on API errors and can balance requests between OpenAI and Azure for an even more robust AI experience.

Proudly built by the team at Marvin Labs, where we use AI to help financial analysts make better investment decisions.

Installation

You can easily install Load Balancing ChatGPT (LBGPT) via pip:

pip install lbgpt

Usage

Basic

Initiate asynchronous calls to the ChatGPT API using the following basic example:

import lbgpt
import asyncio

chatgpt = lbgpt.ChatGPT(api_key="YOUR_API_KEY")
requests = [
    # Each entry is a fully-formed ChatCompletion request.
    {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
]
res = asyncio.run(chatgpt.chat_completion_list(requests))

The chat_completion_list function expects a list of dictionaries with fully-formed OpenAI ChatCompletion API requests. Refer to the OpenAI API definition for more details. You can also use the chat_completion function for single requests.

By default, LBGPT processes five requests in parallel, but you can adjust this by setting the max_concurrent_requests parameter in the constructor.
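LBGPT handles this concurrency internally; as a rough sketch of the underlying pattern (an illustration only, not lbgpt's actual implementation), bounded parallelism with asyncio looks like this:

```python
import asyncio

async def fake_api_call(i: int) -> int:
    # Stand-in for a single chat-completion request.
    await asyncio.sleep(0)
    return i * 2

async def gather_bounded(coros, max_concurrent: int = 5):
    # At most max_concurrent coroutines run at once -- the same idea as
    # lbgpt's max_concurrent_requests parameter.
    sem = asyncio.Semaphore(max_concurrent)

    async def run(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(run(c) for c in coros))

results = asyncio.run(gather_bounded([fake_api_call(i) for i in range(10)]))
```

Results come back in the same order as the input list, even though the calls overlap in time.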

Caching

Take advantage of request caching to avoid redundant calls:

import lbgpt
import asyncio
import diskcache

cache = diskcache.Cache("cache_dir")
chatgpt = lbgpt.ChatGPT(api_key="YOUR_API_KEY", cache=cache)
res = asyncio.run(chatgpt.chat_completion_list([ "your list of prompts" ]))

While LBGPT is tested with diskcache, it should work seamlessly with any cache that implements the __getitem__ and __setitem__ methods.
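For illustration, here is a minimal in-memory cache that satisfies that interface (the cache-key scheme lbgpt uses internally is not shown; this only demonstrates the two required methods, and the key below is made up):

```python
class DictCache:
    """Minimal cache exposing __getitem__ / __setitem__, the interface
    lbgpt expects. A plain dict would work just as well."""

    def __init__(self):
        self._store = {}

    def __getitem__(self, key):
        # Raises KeyError on a miss, like any mapping.
        return self._store[key]

    def __setitem__(self, key, value):
        self._store[key] = value

cache = DictCache()
cache["some-request-key"] = {"choices": ["cached response"]}
```

You would pass such an object as the cache argument, e.g. lbgpt.ChatGPT(api_key="...", cache=cache).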

Azure

For users with an Azure account and proper OpenAI services setup, lbgpt offers an interface for Azure, similar to the OpenAI API. Here's how you can use it:

import lbgpt
import asyncio

chatgpt = lbgpt.AzureGPT(
    api_key="YOUR_API_KEY",
    azure_api_base="YOUR AZURE API BASE",
    azure_model_map={"OPENAI_MODEL_NAME": "MODEL NAME IN AZURE"},
)
res = asyncio.run(chatgpt.chat_completion_list([ "your list of prompts" ]))

You can use the same request definition for both OpenAI and Azure. To ensure interchangeability, map OpenAI model names to Azure model names using the azure_model_map parameter in the constructor (see https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/switching-endpoints for details).

Load Balancing OpenAI and Azure

For optimal performance and reliability, it's recommended to set up the LoadBalancedGPT or MultiLoadBalancedGPT. These classes automatically balance requests between OpenAI and Azure, and they also offer caching and automatic retries.

LoadBalancedGPT offers load-balancing just between OpenAI and Azure models, but is slightly easier to set up. By default, 75% of requests are routed to the Azure API, while 25% go to the OpenAI API. You can customize this ratio by setting the ratio_openai_to_azure parameter in the constructor, taking into account that the Azure API is considerably faster.

import lbgpt
import asyncio

chatgpt = lbgpt.LoadBalancedGPT(
    openai_api_key="YOUR_OPENAI_API_KEY",
    azure_api_key="YOUR_AZURE_API_KEY",
    azure_api_base="YOUR AZURE API BASE",
    azure_model_map={"OPENAI_MODEL_NAME": "MODEL NAME IN AZURE"})
res = asyncio.run(chatgpt.chat_completion_list([ "your list of prompts" ]))
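The 75/25 split amounts to a weighted random draw per request. As a standalone sketch of the idea (not lbgpt's actual code; it assumes ratio_openai_to_azure is the fraction of requests sent to OpenAI):

```python
import random

def pick_backend(ratio_openai_to_azure: float = 0.25) -> str:
    # Weighted random choice between the two backends: by default
    # roughly 25% of requests go to OpenAI and 75% to Azure.
    return random.choices(
        ["openai", "azure"],
        weights=[ratio_openai_to_azure, 1.0 - ratio_openai_to_azure],
    )[0]

random.seed(0)
counts = {"openai": 0, "azure": 0}
for _ in range(10_000):
    counts[pick_backend()] += 1
```

Over many requests the observed split converges to the configured ratio.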

MultiLoadBalancedGPT balances requests across any number of OpenAI and Azure instances and gives you more control over how requests are allocated. To reproduce the behavior of LoadBalancedGPT, you can use the following code:

import lbgpt
import asyncio

openai_chatgpt = lbgpt.ChatGPT(api_key="YOUR_API_KEY")
azure_chatgpt = lbgpt.AzureGPT(
    api_key="YOUR_API_KEY",
    azure_api_base="YOUR AZURE API BASE",
    azure_model_map={"OPENAI_MODEL_NAME": "MODEL NAME IN AZURE"},
)


chatgpt = lbgpt.MultiLoadBalancedGPT(
    gpts=[openai_chatgpt, azure_chatgpt],
    allocation_function_weights=[0.25, 0.75],
    allocation_function='random',
)
    
res = asyncio.run(chatgpt.chat_completion_list([ "your list of prompts" ]))

Beyond this, MultiLoadBalancedGPT supports more flexible setups, e.g. multiple Azure instances or multiple OpenAI keys.

You can also select the allocation function max_headroom to automatically pick the API with the most available capacity. This requires passing your RPM (requests per minute) and/or TPM (tokens per minute) limits to the model constructors.

For example, if you have an OpenAI API key with a 5,000 TPM limit and an Azure API key with a 10,000 TPM limit, you can use the following code:

import lbgpt
import asyncio

openai_chatgpt = lbgpt.ChatGPT(api_key="YOUR_API_KEY", limit_tpm=5_000)
azure_chatgpt = lbgpt.AzureGPT(
    api_key="YOUR_API_KEY",
    azure_api_base="YOUR AZURE API BASE",
    azure_model_map={"OPENAI_MODEL_NAME": "MODEL NAME IN AZURE"},
    limit_tpm=10_000,
)


chatgpt = lbgpt.MultiLoadBalancedGPT(
    gpts=[openai_chatgpt, azure_chatgpt],
    allocation_function='max_headroom',
)
    
res = asyncio.run(chatgpt.chat_completion_list([ "your list of prompts" ]))
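Conceptually, a max_headroom allocation picks the backend with the most unused capacity at the moment a request is dispatched. A simplified sketch of the idea (not lbgpt's actual implementation; the usage numbers below are made up):

```python
def pick_max_headroom(backends: dict) -> str:
    # backends maps a backend name to (limit_tpm, tokens_used_this_minute);
    # choose the one with the largest remaining token budget.
    return max(backends, key=lambda name: backends[name][0] - backends[name][1])

choice = pick_max_headroom({
    "openai": (5_000, 4_000),   # 1,000 TPM of headroom left
    "azure": (10_000, 3_000),   # 7,000 TPM of headroom left
})
```

With these numbers the Azure backend wins, since it has far more unused token capacity this minute.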

How to Get API Keys

To obtain your OpenAI API key, visit the official OpenAI site. For Azure API key acquisition, please refer to the official Azure documentation.



Download files

Download the file for your platform.

Source Distribution

lbgpt-0.1.0.tar.gz (9.1 kB)

Uploaded Source

Built Distribution

lbgpt-0.1.0-py3-none-any.whl (9.6 kB)

Uploaded Python 3

File details

Details for the file lbgpt-0.1.0.tar.gz.

File metadata

  • Download URL: lbgpt-0.1.0.tar.gz
  • Upload date:
  • Size: 9.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.18

File hashes

Hashes for lbgpt-0.1.0.tar.gz:

  • SHA256: 8a5826f18dc27518dfe2cd2843f77ec18f4b8c6d24308060fe3bb8e5a43a2032
  • MD5: 068e1d16a5289f33322e10d92e838aa3
  • BLAKE2b-256: 96145fa7c50f17e6fe1f3848e5e1de57fad79e4d4c25b413bdd9bba604465d41


File details

Details for the file lbgpt-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: lbgpt-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 9.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.18

File hashes

Hashes for lbgpt-0.1.0-py3-none-any.whl:

  • SHA256: 42feebb35705e259b91eb3b86b2b831da95b78741c6d4a3d18ed3a4b01754801
  • MD5: ae3ab73916fee08184ddeb0edccbe03d
  • BLAKE2b-256: cc91fb1fa29186540d5af5b78ade63c154d630e4f80ba2511eb4d5e745164dd6

