# OneAPI
Easily access multiple ChatGPT or similar APIs with just one line of code/command.
Save a significant amount of ☕️ time by avoiding the need to read multiple API documents and test them individually.
The currently supported APIs include:
- OpenAI Official API.
- Microsoft Azure OpenAI Resource endpoint API.
- Anthropic Claude series model API.
## Installation

```shell
pip install -U one-api-tool
```
## Usage
1. (Recommended method) Set your key information in the local configuration file.
OpenAI config:

```json
{
    "api_key": "YOUR_API_KEY",
    "api": "https://api.openai.com/v1",
    "api_type": "open_ai"
}
```

Azure OpenAI config:

```json
{
    "api_key": "YOUR_API_KEY",
    "api": "Replace with your Azure OpenAI resource's endpoint value.",
    "api_type": "azure"
}
```

Anthropic config:

```json
{
    "api_key": "YOUR_API_KEY",
    "api": "https://api.anthropic.com",
    "api_type": "claude"
}
```
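If you prefer to generate the configuration file programmatically, a minimal sketch using only the standard library (the file name and placeholder values are illustrative, not required by the tool):

```python
import json

# Example OpenAI configuration; replace the placeholder values with your own.
config = {
    "api_key": "YOUR_API_KEY",
    "api": "https://api.openai.com/v1",
    "api_type": "open_ai",
}

# Write the configuration to the file that will be passed to OneAPITool.
with open("your_config_file.json", "w") as f:
    json.dump(config, f, indent=4)
```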
- `api_key`: Obtain the OpenAI API key from the OpenAI website and the Claude API key from the Anthropic website.
- `api`: The base URL used to send requests. You may also specify a proxy URL, e.g. `"https://your_proxy_domain/v1"`. For Azure APIs, you can find the relevant information on the Azure resource dashboard; the endpoint format is usually `https://{your_organization}.openai.azure.com/`.
- `api_type`: Currently supported values are `"open_ai"`, `"azure"`, or `"claude"`.
Initialize the `OneAPITool` object from a local configuration file:

```python
from oneapi import OneAPITool

res = OneAPITool.from_config_file("your_config_file.json").simple_chat("Hello AI!")
print(res)
```
2. (Not recommended) Write the configuration directly into the code:

```python
from oneapi import OneAPITool

res = OneAPITool.from_config(api_key, api, api_type).simple_chat("Hello AI!")
print(res)
```
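If you do embed configuration in code, reading the key from an environment variable at least keeps it out of source control; a small sketch (the variable name `OPENAI_API_KEY` is an assumption for illustration, not something the library requires):

```python
import os

# Prefer an environment variable over a hardcoded secret in source code.
api_key = os.environ.get("OPENAI_API_KEY", "YOUR_API_KEY")
api = "https://api.openai.com/v1"
api_type = "open_ai"

# These values would then be passed exactly as in the snippet above:
# OneAPITool.from_config(api_key, api, api_type).simple_chat("Hello AI!")
```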
3. Usage from the command line:

```shell
open-api --config_file CHANGE_TO_YOUR_CONFIG_PATH \
    --model gpt-3.5-turbo \
    --prompt "1+1=?"
```
Output detail:

```text
-------------------- prompt detail 🚀 --------------------
1+1=?
-------------------- prompt end --------------------
-------------------- gpt-3.5-turbo response ⭐️ --------------------
2
-------------------- response end --------------------
```
Arguments detail:
--config_file
string ${\color{orange}\text{Required}}$
A local configuration file containing API key information.
--prompt
string ${\color{orange}\text{Required}}$
The prompt to send to the model, e.g., a math question such as: "1+1=?".
--model
string ${\color{grey}\text{Optional}}$ Defaults to gpt-3.5-turbo or claude-v1.3, depending on api_type
Which model to use, e.g., gpt-4.
--temperature
number ${\color{grey}\text{Optional}}$ Defaults to 1
What sampling temperature to use. Higher values like 0.9 will make the output more random, while lower values like 0.1 will make it more focused and deterministic.
--max_new_tokens
integer ${\color{grey}\text{Optional}}$ Defaults to 2048
The maximum number of tokens to generate in the chat completion.
The total length of input tokens and generated tokens is limited by the model's context length.
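The per-`api_type` model defaults above can be summarized as a small lookup table. This is only an illustration of the documented behavior, not the library's actual code, and the azure entry is an assumption (Azure endpoints serve OpenAI models):

```python
# Default model per api_type, per the --model documentation above.
DEFAULT_MODELS = {
    "open_ai": "gpt-3.5-turbo",
    "azure": "gpt-3.5-turbo",  # assumption: Azure serves OpenAI models
    "claude": "claude-v1.3",
}

def default_model(api_type: str) -> str:
    """Return the default model for a given api_type."""
    if api_type not in DEFAULT_MODELS:
        raise ValueError(f"Unsupported api_type: {api_type!r}")
    return DEFAULT_MODELS[api_type]
```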
## ToDo
- Batch requests.
- Token number counting.
- Custom token budget.