OpenAI API proxy for fine-grained cost tracking & control and caching of responses
OpenAI Proxy [openai-wrapi]
A drop-in wrapper for the openai package that tracks costs per user, project, model and staging account.
Problem statement
OpenAI does not currently provide any way to monitor or limit API usage costs by user*, project or model. In fact, there is no concept of a "project": only users (which correspond to email addresses), organizations (which correspond to OpenAI accounts and must be individually funded) and API keys (which can be used interchangeably across any of the organizations to which a user belongs).
This leads to a proliferation of API keys and users, widening the attack surface from a security point of view. Furthermore, users cannot be forced to use MFA and may continue to use the API and create API keys even if their email address no longer exists.
Lastly, it is easy to make redundant calls to the API, incurring unnecessary costs, especially when developing in an interactive environment such as a Jupyter notebook.
* The latest version of the OpenAI usage dashboard shows the number of calls per user, but not the cost.
Solution
This repo provides a wrapper that checks usage limits before passing the request on to the OpenAI API and records the usage costs per user, project, model and staging account. It leverages the AWS IAM permission framework to control access to the OpenAI API without exposing the unique API key of each staging account. Responses from the OpenAI API are cached by default. Infrastructure as Code (IaC) is provided to deploy the solution using a serverless architecture in AWS with minimal extra cost and latency.
Install
From PyPI (for users)
pip install openai-wrapi
From source (for admins)
git clone https://github.com/teticio/openai-proxy.git
cd openai-proxy
pip install .
Deploy
Ideally, you should have one OpenAI account per staging account (dev, prod). Create a terraform.tfvars file in the iac directory with the following variables:
profile         = "default"   # AWS profile to use
region          = "eu-west-2" # AWS region to deploy to
num_azs         = 3           # Number of availability zones to deploy to (limited by available Elastic IP addresses)
use_elasticache = true        # Whether to use an ElastiCache Memcached cluster
stages = {                    # Staging accounts
  "dev" = {
    openai_api_key = "sk-XXX"
    openai_org_id  = "org-XXX"
  }
  "prod" = {
    openai_api_key = "sk-YYY"
    openai_org_id  = "org-YYY"
  }
}
To deploy, run:
cd iac
terraform init
terraform apply -auto-approve
This will create:
- A streaming Lambda function URL to proxy calls to the OpenAI API per staging account (dev, prod).
- A Lambda function to set usage limits and flush the cache per staging account (dev, prod).
- A DynamoDB table to store usage and limits.
- An optional ElastiCache Memcached cluster to cache OpenAI API responses.
Usage
In order to use the proxy in your Python code, provided you have the appropriate IAM permissions, include:
import openai_wrapi as openai
You no longer need to set your OpenAI API key or organization ID, as these are securely stored in the corresponding Lambda functions. Instead, set your OPENAI_API_KEY environment variable to sk-XXX, where XXX corresponds to the URL https://XXX.lambda-url.region.on.aws/ of your Lambda function, as output by Terraform.
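For example, if Terraform outputs the (hypothetical) function URL https://abc123.lambda-url.eu-west-2.on.aws/, you would set:
import os
os.environ["OPENAI_API_KEY"] = "sk-abc123"  # "abc123" taken from the Lambda function URL
import openai_wrapi as openai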
If you plan to use packages such as langchain which use the openai package internally, you need only ensure you have previously imported openai_wrapi.
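For example (a sketch assuming a pre-0.1 version of langchain; for newer versions see the environment variables below):
import openai_wrapi  # must come first so that the openai package is patched
from langchain.llms import OpenAI

llm = OpenAI(model_name="gpt-3.5-turbo-instruct")
print(llm("Hello!"))  # routed through the proxy and tracked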
By default, the project associated with any API calls will be N/A. In order to set the project name:
openai.set_project("my-project")
To set the staging account:
openai.set_staging("dev")
If you want to disable caching (enabled by default):
openai.set_caching(False)
Alternatively, you can create a client using
client = openai.OpenAIProxy(project="my-project", staging="dev", caching=True)
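The client behaves like the standard openai client (this sketch assumes major version 1 of the openai package):
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)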
It is also possible to set these parameters using environment variables:
os.environ["OPENAI_DEFAULT_PROJECT"] = "hello"
os.environ["OPENAI_DEFAULT_STAGING"] = "dev"
os.environ["OPENAI_DEFAULT_CACHING"] = "0"
(Note that it is necessary to do this when using the latest version of langchain, which now explicitly creates a new OpenAI client.)
If you want to use the proxy from somewhere other than Python, you can use the URL of the Lambda function in place of the OpenAI endpoint, provided you authenticate with AWS appropriately. In fact, you can even make the Lambda function URL public and restrict access with CORS, so that it can be used directly in a frontend application.
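For example, a minimal sketch of calling the proxy from Python with a signed request (the function URL and the /v1/chat/completions path are illustrative; Lambda function URLs protected by IAM are signed with SigV4 using the "lambda" service name):
import json
import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

url = "https://abc123.lambda-url.eu-west-2.on.aws/v1/chat/completions"  # hypothetical URL
body = json.dumps({"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hi!"}]})
request = AWSRequest(method="POST", url=url, data=body, headers={"Content-Type": "application/json"})
SigV4Auth(boto3.Session().get_credentials(), "lambda", "eu-west-2").add_auth(request)  # sign with your IAM credentials
print(requests.post(url, data=body, headers=dict(request.headers)).json())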
Admins
Again, supposing you have the IAM permissions to invoke the openai-admin-{staging} Lambda function, you can:
- set the usage limits per user, project and model:
openai.set_limits(
    limit=10,             # 10 USD
    staging="dev",        # Dev account
    project="my-project", # Project name
    user="me",            # Optional
    model="gpt-4",        # Optional
)
- flush the cache:
openai.flush_cache(staging="dev")
Note that this wrapper currently works for major versions 0 and 1 of the openai package.
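For example, with major version 0 installed, the legacy interface goes through the proxy in the same way (a sketch):
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)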
To see the usage in a dashboard, run
streamlit run app.py