🚀 OpenAI-Forward is an advanced forwarding proxy designed for large language models, offering enhanced features such as user request rate control, token rate limiting, and custom API keys. The service can proxy both local and cloud models. OpenAI API Reverse Proxy
English | 简体中文
OpenAI Forward
OpenAI-Forward is an efficient forwarding service designed for large language models. Its core features include user request rate control, token rate limiting, intelligent prediction caching, log management, and API key management, aiming to provide a fast and convenient model forwarding service. Whether you are proxying local language models (such as LocalAI) or cloud-based ones (such as OpenAI), OpenAI-Forward makes the setup straightforward. Built on uvicorn, aiohttp, and asyncio, OpenAI-Forward achieves impressive asynchronous performance.
Key Features
OpenAI-Forward offers the following capabilities:
- Universal Forwarding: Supports forwarding of almost all types of requests.
- Performance First: Boasts outstanding asynchronous performance.
- Cache AI Predictions: Caches AI predictions, accelerating service access and saving costs.
- User Traffic Control: Customize request and Token rates.
- Real-time Response Logs: Enhances observability of the call chain.
- Custom Secret Keys: Replaces the original API keys.
- Multi-target Routing: Forwards multiple service addresses to different routes under a single port.
- Automatic Retries: Automatically retries failed requests to keep the service stable.
- Quick Deployment: Supports fast deployment locally or on the cloud via pip and docker, as sketched below.
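A minimal sketch of the docker route (the image name here is an assumption; check the project repository for the published image):

docker run -d -p 8000:8000 beidongjiedeguang/openai-forward:latest  # hypothetical image name

If you use a .env file, mount it into the container's run directory so it is read at startup (see the Configuration section below).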
Proxy services set up by this project include:
- Original OpenAI service address:
  https://api.openai-forward.com
  https://render.openai-forward.com
- Cached service address (user request results are saved for some time):
  https://smart.openai-forward.com
Deployment Guide
User Guide
Quick Start
Installation
pip install openai-forward
Starting the Service
aifd run
If the configuration is read from the .env file at the root path, you will see the following startup information.
❯ aifd run
╭────── 🤗 openai-forward is ready to serve! ───────╮
│ │
│ base url https://api.openai.com │
│ route prefix / │
│ api keys False │
│ forward keys False │
│ cache_backend MEMORY │
╰────────────────────────────────────────────────────╯
╭──────────── ⏱️ Rate Limit configuration ───────────╮
│ │
│ backend memory │
│ strategy moving-window │
│ global rate limit 100/minute (req) │
│ /v1/chat/completions 100/2minutes (req) │
│ /v1/completions 60/minute;600/hour (req) │
│ /v1/chat/completions 60/second (token) │
│ /v1/completions 60/second (token) │
╰────────────────────────────────────────────────────╯
INFO: Started server process [191471]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
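With the service running, a quick sanity check (assuming the default port and the default https://api.openai.com upstream) is to send any OpenAI-style request through the proxy, for example:

curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer sk-******"

Since forwarding is universal, any other OpenAI endpoint works the same way.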
Proxy OpenAI Model:
The default option for aifd run is to proxy https://api.openai.com.
The following uses the deployed service address https://api.openai-forward.com as an example.
Use in Third-party Applications
Integrate within the open-source project ChatGPT-Next-Web:
Replace the BASE_URL in the Docker startup command with the address of your self-hosted proxy service.
docker run -d \
-p 3000:3000 \
-e OPENAI_API_KEY="sk-******" \
-e BASE_URL="https://api.openai-forward.com" \
-e CODE="******" \
yidadaa/chatgpt-next-web
Integrate within Code
Python
import openai
+ openai.api_base = "https://api.openai-forward.com/v1"
openai.api_key = "sk-******"
JS/TS
import { Configuration } from "openai";
const configuration = new Configuration({
+ basePath: "https://api.openai-forward.com/v1",
apiKey: "sk-******",
});
gpt-3.5-turbo
curl https://api.openai-forward.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk-******" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "Hello!"}]
}'
Image Generation (DALL-E)
curl --location 'https://api.openai-forward.com/v1/images/generations' \
--header 'Authorization: Bearer sk-******' \
--header 'Content-Type: application/json' \
--data '{
"prompt": "A photo of a cat",
"n": 1,
"size": "512x512"
}'
Proxy Local Model
- Applicable scenarios: use together with projects such as LocalAI and api-for-open-llm.
- How to operate: using LocalAI as an example, if the LocalAI service is deployed at http://localhost:8080, you only need to set OPENAI_BASE_URL=http://localhost:8080 in an environment variable or in the .env file. You can then access LocalAI through http://localhost:8000, as sketched below.
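A minimal sketch of this setup, assuming LocalAI is already serving an OpenAI-compatible API on port 8080. Set in .env:

OPENAI_BASE_URL=http://localhost:8080

Then point clients at the forwarding service rather than at LocalAI directly:

import openai

openai.api_base = "http://localhost:8000/v1"  # the forward service, not LocalAI itself
openai.api_key = "sk-******"                  # placeholder; use whatever your LocalAI deployment expects
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)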
Proxy Other Cloud Models
- Applicable scenarios: for instance, through LiteLLM you can convert the API format of many cloud models to the OpenAI API format, and then use this service as a proxy; see the sketch below.
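A hedged sketch, assuming a LiteLLM proxy is already exposing an OpenAI-compatible API locally (the address and port depend entirely on how you start LiteLLM):

# .env
OPENAI_BASE_URL=http://localhost:4000  # wherever the LiteLLM proxy listens

Clients then call http://localhost:8000 exactly as in the OpenAI examples above.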
Configuration
Command Line Arguments
Execute aifd run --help to get details on the arguments.
| Configuration | Description | Default Value |
|---|---|---|
| --port | Service port | 8000 |
| --workers | Number of worker processes | 1 |
Environment Variable Details
You can create a .env file in the project's run directory to customize configurations. For a reference configuration, see the .env.example file in the root directory.
| Environment Variable | Description | Default Value |
|---|---|---|
| OPENAI_BASE_URL | Base address for the OpenAI-style API | https://api.openai.com |
| OPENAI_ROUTE_PREFIX | Route prefix for the OPENAI_BASE_URL interface address | / |
| OPENAI_API_KEY | OpenAI-style API key; multiple keys can be separated by commas | None |
| FORWARD_KEY | Custom key(s) for the proxy, separated by commas. If not set (not recommended), OPENAI_API_KEY is used directly | None |
| EXTRA_BASE_URL | Base URL(s) for additional proxy services | None |
| EXTRA_ROUTE_PREFIX | Route prefix(es) for additional proxy services | None |
| REQ_RATE_LIMIT | Per-user request rate limit for specific routes | None |
| GLOBAL_RATE_LIMIT | Request rate limit for routes not specified in REQ_RATE_LIMIT | None |
| RATE_LIMIT_STRATEGY | Rate limit strategy: fixed-window, fixed-window-elastic-expiry, or moving-window | None |
| TOKEN_RATE_LIMIT | Output rate limit for each token (or SSE chunk) in a streaming response | None |
| PROXY | HTTP proxy address | None |
| LOG_CHAT | Whether to log chat content for debugging and monitoring | false |
| CACHE_BACKEND | Cache backend; MEMORY by default, with optional database backends lmdb, rocksdb, and leveldb | MEMORY |
| CACHE_CHAT_COMPLETION | Whether to cache /v1/chat/completions results | false |
Detailed configuration descriptions can be found in the .env.example file. (To be completed)
Note: if you set OPENAI_API_KEY but leave FORWARD_KEY unset, clients will not need to provide any key when calling. Since this may pose a security risk, leaving FORWARD_KEY unset is not recommended unless there is a specific need.
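Putting several of these together, an illustrative .env (all values are placeholders; the global rate-limit value mirrors the startup banner above, and the exact per-route syntax is defined in .env.example):

OPENAI_BASE_URL=https://api.openai.com
OPENAI_API_KEY=sk-******
FORWARD_KEY=fk-******
GLOBAL_RATE_LIMIT=100/minute
LOG_CHAT=true
CACHE_BACKEND=MEMORY
CACHE_CHAT_COMPLETION=true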
Caching
By default, caching uses a memory backend. You can choose a database backend but need to install the corresponding environment:
pip install openai-forward[lmdb] # lmdb backend
pip install openai-forward[leveldb] # leveldb backend
pip install openai-forward[rocksdb] # rocksdb backend
- Configure CACHE_BACKEND in the environment variables to choose the storage backend. Options are MEMORY, LMDB, ROCKSDB, and LEVELDB.
- Set CACHE_CHAT_COMPLETION to true to cache /v1/chat/completions results.
With caching enabled, individual requests can still opt out:
import openai
openai.api_base = "https://smart.openai-forward.com/v1"
openai.api_key = "sk-******"
completion = openai.ChatCompletion.create(
+ caching=False, # caching is enabled by default; set False to bypass the cache for this request
model="gpt-3.5-turbo",
messages=[
{"role": "user", "content": "Hello!"}
]
)
Custom Keys
Configure OPENAI_API_KEY and FORWARD_KEY, for example:
OPENAI_API_KEY=sk-*******
FORWARD_KEY=fk-****** # Here, the fk-token is customized
Use case:
import openai
+ openai.api_base = "https://api.openai-forward.com/v1"
- openai.api_key = "sk-******"
+ openai.api_key = "fk-******"
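The same substitution works for raw HTTP calls; reusing the earlier curl example with the custom key:

curl https://api.openai-forward.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer fk-******" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

The service replaces fk-****** with the configured OPENAI_API_KEY before forwarding upstream.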
Multi-Target Service Forwarding
Supports forwarding services from different addresses to different routes under the same port. Refer to .env.example for examples; a sketch follows.
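A hedged sketch of the idea (the comma-separated list syntax here is an assumption; .env.example is authoritative):

# hypothetical multi-target configuration
EXTRA_BASE_URL=http://localhost:8080, http://localhost:8081
EXTRA_ROUTE_PREFIX=/localai, /other

With such a configuration, requests to http://localhost:8000/localai/... would be forwarded to http://localhost:8080/..., and likewise for /other.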
Conversation Logs
Chat logs are not recorded by default. If you wish to enable them, set the LOG_CHAT=true environment variable.
Logs are saved in the current directory under Log/openai/chat/chat.log. The recording format is:
{'messages': [{'role': 'user', 'content': 'hi'}], 'model': 'gpt-3.5-turbo', 'stream': True, 'max_tokens': None, 'n': 1, 'temperature': 1, 'top_p': 1, 'logit_bias': None, 'frequency_penalty': 0, 'presence_penalty': 0, 'stop': None, 'user': None, 'ip': '127.0.0.1', 'uid': '2155fe1580e6aed626aa1ad74c1ce54e', 'datetime': '2023-10-17 15:27:12'}
{'assistant': 'Hello! How can I assist you today?', 'is_function_call': False, 'uid': '2155fe1580e6aed626aa1ad74c1ce54e'}
To convert to json format:
aifd convert
You'll get chat_openai.json:
[
{
"datetime": "2023-10-17 15:27:12",
"ip": "127.0.0.1",
"model": "gpt-3.5-turbo",
"temperature": 1,
"messages": [
{
"user": "hi"
}
],
"functions": null,
"is_function_call": false,
"assistant": "Hello! How can I assist you today?"
}
]
Backers and Sponsors
License
OpenAI-Forward is licensed under the MIT license.