A short wrapper for the OpenAI API.
Project description
For the Chinese documentation, see here.
OpenAI API Call
A simple wrapper for the OpenAI API that sends requests and retrieves responses.
Installation
pip install openai-api-call --upgrade
Usage
Set API Key
import openai_api_call as apicall
apicall.api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
Or set OPENAI_API_KEY in ~/.bashrc to avoid setting the API key every time:
# Add the following code to ~/.bashrc
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
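The package is assumed to pick up OPENAI_API_KEY from the environment. As a quick sanity check that the variable is visible to Python, a minimal sketch (api_key_configured is a hypothetical helper, not part of the package):

```python
import os

def api_key_configured(env=os.environ):
    # Hypothetical helper: True when OPENAI_API_KEY looks like an API key.
    return env.get("OPENAI_API_KEY", "").startswith("sk-")

print(api_key_configured({"OPENAI_API_KEY": "sk-test"}))  # True
```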
You can also set a different api_key for each Chat object:
from openai_api_call import Chat
chat = Chat("hello")
chat.api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
Set Proxy (Optional)
from openai_api_call import proxy_on, proxy_off, proxy_status
# Check the current proxy
proxy_status()
# Set proxy (example)
proxy_on(http="127.0.0.1:7890", https="127.0.0.1:7890")
# Check the updated proxy
proxy_status()
# Turn off proxy
proxy_off()
Alternatively, you can use a proxy URL to send requests from a restricted network, as shown below:
from openai_api_call import request
# set request url
request.base_url = "https://api.example.com"
You can set OPENAI_BASE_URL in ~/.bashrc as well.
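A standalone sketch of what the base URL does: the package is assumed to join base_url with the standard chat-completions path when building each request (the exact path used internally is an assumption here):

```python
from urllib.parse import urljoin

# Joining a custom base URL with the standard OpenAI endpoint path.
base_url = "https://api.example.com"
endpoint = urljoin(base_url + "/", "v1/chat/completions")
print(endpoint)  # https://api.example.com/v1/chat/completions
```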
Basic Usage
Example 1, send prompt and return response:
from openai_api_call import Chat, show_apikey
# Check if API key is set
show_apikey()
# Check if proxy is enabled
proxy_status()
# Send prompt and return response
chat = Chat("Hello, GPT-3.5!")
resp = chat.getresponse(update=False) # do not update the chat history; update defaults to True
Example 2, customize the message template and retrieve both the response content and the number of tokens consumed:
import openai_api_call as apicall
# Customize the sending template
apicall.default_prompt = lambda msg: [
{"role": "system", "content": "Translate this text for me"},
{"role": "user", "content": msg}
]
chat = Chat("Hello!")
# Retry indefinitely (max_requests=-1)
# Each request times out after 10 seconds
response = chat.getresponse(temperature=0.5, max_requests=-1, timeout=10)
print("Number of consumed tokens: ", response.total_tokens)
print("Returned content: ", response.content)
# Reset the default template
apicall.default_prompt = None
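To see what such a template produces without calling the API, here is the same prompt function evaluated on its own; the message list shape follows the OpenAI chat format used above:

```python
# Standalone sketch: the template maps one user message to the
# OpenAI-style chat message list that gets sent with each request.
default_prompt = lambda msg: [
    {"role": "system", "content": "Translate this text for me"},
    {"role": "user", "content": msg},
]

messages = default_prompt("Hello!")
print(messages[0]["role"], "/", messages[1]["content"])  # system / Hello!
```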
Example 3, continue chatting based on the last response:
# first call
chat = Chat("Hello, GPT-3.5!")
resp = chat.getresponse() # update chat history, default is True
print(resp.content)
# continue chatting
chat.user("How are you?")
next_resp = chat.getresponse()
print(next_resp.content)
# fake response
chat.user("What's your name?")
chat.assistant("My name is GPT-3.5.")
# get the last result
print(chat[-1])
# save chat history
chat.save("chat_history.log", mode="w") # default to "a"
# print chat history
chat.print_log()
Moreover, you can check the usage status of the API key:
# show usage status of the default API key
chat = Chat()
chat.show_usage_status()
# show usage status of the specified API key
chat.api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
chat.show_usage_status()
Advanced Usage
Save the chat history to a file:
checkpoint = "tmp.log"
# chat 1
chat = Chat()
chat.save(checkpoint, mode="w") # default to "a"
# chat 2
chat = Chat("hello!")
chat.save(checkpoint)
# chat 3
chat.assistant("你好, how can I assist you today?")
chat.save(checkpoint)
Load the chat history from a file:
from openai_api_call import load_chats
# load chat logs only
chat_logs = load_chats(checkpoint, chat_log_only=True)
assert chat_logs == [[], [{'role': 'user', 'content': 'hello!'}],
                     [{'role': 'user', 'content': 'hello!'},
                      {'role': 'assistant', 'content': '你好, how can I assist you today?'}]]
# load chats (default)
chats = load_chats(checkpoint)
assert chats == [Chat(log) for log in chat_logs]
# load the last message only
chat_msgs = load_chats(checkpoint, last_message_only=True)
assert chat_msgs == ["", "hello!", "你好, how can I assist you today?"]
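The save/load round trip above can be mimicked without the package. The sketch below assumes a checkpoint file holds one JSON-encoded chat log per line; the real on-disk format used by openai_api_call may differ:

```python
import json
import os
import tempfile

# Hypothetical checkpoint format: one JSON chat log per line.
logs = [
    [],
    [{"role": "user", "content": "hello!"}],
]
path = os.path.join(tempfile.mkdtemp(), "tmp.log")

# Append each chat log as a JSON line (mirrors chat.save(checkpoint)).
with open(path, "w") as f:
    for log in logs:
        f.write(json.dumps(log) + "\n")

# Read the logs back (mirrors load_chats(checkpoint, chat_log_only=True)).
with open(path) as f:
    loaded = [json.loads(line) for line in f]

print(loaded == logs)  # True
```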
License
This package is licensed under the MIT license. See the LICENSE file for more details.
Update log
- Since version 0.2.0, the Chat type is used to handle data.
- Since version 0.3.0, you can use a different API key for each request.
- Since version 0.4.0, this package is maintained by cubenlp.
Download files
Source Distribution
Built Distribution
File details
Details for the file openai_api_call-0.4.2.tar.gz.
File metadata
- Download URL: openai_api_call-0.4.2.tar.gz
- Upload date:
- Size: 12.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.8.16
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ebca0565b33a4d20c7c3855e08fa317bd31107282a302e3b2e43cf491f73f7a6 |
| MD5 | 8c465d4456a5498382cf97374ac3522d |
| BLAKE2b-256 | 6caa9b4b52115ceddce83941b6c82c90c471eac15ab367c533947d499508ddbf |
File details
Details for the file openai_api_call-0.4.2-py3-none-any.whl.
File metadata
- Download URL: openai_api_call-0.4.2-py3-none-any.whl
- Upload date:
- Size: 12.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.8.16
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0eca350aa2336adcd9b4548ad20bc41570262dd3d6a2cfffdfd2d1bb0161c53f |
| MD5 | c105aec376ee4e68ecf291ff45d74731 |
| BLAKE2b-256 | a555d1f43b2bee2dbe7f823d64cc1d7d07bad5f4739f3b72e75cd9f58d293af3 |