
A short wrapper for the OpenAI API.

Project description

For the Chinese documentation, see here.

OpenAI API Call


A simple wrapper for the OpenAI API that can be used to send requests and get responses.

Installation

pip install openai-api-call --upgrade

Usage

Set API Key

import openai_api_call as apicall
apicall.api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

Or set OPENAI_API_KEY in ~/.bashrc to avoid setting the API key every time:

# Add the following code to ~/.bashrc
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
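If the package follows the usual convention, it picks this variable up from the environment when no key is set explicitly. A minimal sketch of that lookup logic (the `load_api_key` helper is illustrative, not part of the package):

```python
import os

def load_api_key(explicit_key=None):
    """Return an explicit key if given, else fall back to OPENAI_API_KEY."""
    key = explicit_key or os.environ.get("OPENAI_API_KEY")
    if key is None:
        raise RuntimeError("No API key: set OPENAI_API_KEY or pass one explicitly")
    return key

# Simulate the ~/.bashrc export for this process
os.environ["OPENAI_API_KEY"] = "sk-test"
print(load_api_key())               # falls back to the environment variable
print(load_api_key("sk-explicit"))  # an explicit key takes precedence
```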

Set Proxy (Optional)

from openai_api_call import proxy_on, proxy_off, proxy_status
# Check the current proxy
proxy_status()

# Set proxy (example)
proxy_on(http="127.0.0.1:7890", https="127.0.0.1:7890")

# Check the updated proxy
proxy_status()

# Turn off proxy
proxy_off() 
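The package does not document how `proxy_on`/`proxy_off` work internally; one common approach, sketched below under that assumption, is to toggle the standard `http_proxy`/`https_proxy` environment variables that most HTTP clients honor:

```python
import os

def proxy_on(http=None, https=None):
    """Set the standard proxy environment variables for this process."""
    if http:
        os.environ["http_proxy"] = http
    if https:
        os.environ["https_proxy"] = https

def proxy_off():
    """Remove the proxy variables again."""
    os.environ.pop("http_proxy", None)
    os.environ.pop("https_proxy", None)

def proxy_status():
    """Return the currently configured proxies, if any."""
    return {k: os.environ[k] for k in ("http_proxy", "https_proxy") if k in os.environ}

proxy_on(http="127.0.0.1:7890", https="127.0.0.1:7890")
print(proxy_status())  # {'http_proxy': '127.0.0.1:7890', 'https_proxy': '127.0.0.1:7890'}
proxy_off()
print(proxy_status())  # {}
```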

Alternatively, you can use a proxy URL to send requests from a restricted network, as shown below:

from openai_api_call import request

# set request url
alt_url = "https://api.example.com/v1/chat/completions"
request.url = alt_url
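Under the hood, a chat call is an HTTP POST of a JSON payload to that URL with a bearer-token header. The sketch below builds such a request without sending it (`build_chat_request` is illustrative; the wrapper's actual internals may differ):

```python
import json
import urllib.request

def build_chat_request(url, api_key, messages, model="gpt-3.5-turbo"):
    """Construct (but do not send) the HTTP request a chat call would issue."""
    payload = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request(
    "https://api.example.com/v1/chat/completions",
    "sk-xxxx",
    [{"role": "user", "content": "Hello!"}],
)
print(req.full_url)                     # https://api.example.com/v1/chat/completions
print(req.get_header("Authorization"))  # Bearer sk-xxxx
```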

Basic Usage

Example 1: send a prompt and get the response:

from openai_api_call import Chat, show_apikey, proxy_status

# Check if API key is set
show_apikey()

# Check if proxy is enabled
proxy_status()

# Send prompt and return response
chat = Chat("Hello, GPT-3.5!")
resp = chat.getresponse(update=False) # do not update the chat history (default is True)

Example 2: customize the message template and retrieve the response content along with the number of tokens consumed:

import openai_api_call as apicall
from openai_api_call import Chat

# Customize the sending template
apicall.default_prompt = lambda msg: [
    {"role": "system", "content": "Translate this text for me"},
    {"role": "user", "content": msg}
]
chat = Chat("Hello!")
# Retry indefinitely (max_requests=-1); each request times out after 10 seconds
response = chat.getresponse(temperature=0.5, max_requests=-1, timeout=10)
print("Number of consumed tokens: ", response.total_tokens)
print("Returned content: ", response.content)
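The `total_tokens` and `content` fields mirror the standard chat-completions JSON response; a minimal, illustrative view of that mapping (the `Resp` class below is a sketch, not the package's actual response type):

```python
class Resp:
    """Minimal view over a chat-completions JSON response (illustrative only)."""
    def __init__(self, data):
        self._data = data

    @property
    def content(self):
        # The assistant's reply lives in the first choice's message
        return self._data["choices"][0]["message"]["content"]

    @property
    def total_tokens(self):
        # Prompt + completion tokens, as reported by the API
        return self._data["usage"]["total_tokens"]

# A hand-written response in the standard chat-completions shape
raw = {
    "choices": [{"message": {"role": "assistant", "content": "Hi there!"}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 3, "total_tokens": 15},
}
resp = Resp(raw)
print(resp.total_tokens)  # 15
print(resp.content)       # Hi there!
```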

Advanced Usage

Continue chatting based on the last response:

# first call
chat = Chat("Hello, GPT-3.5!")
resp = chat.getresponse() # update chat history, default is True
print(resp.content)

# continue chatting
chat.user("How are you?")
next_resp = chat.getresponse()
print(next_resp.content)

# fake response
chat.user("What's your name?")
chat.assistant("My name is GPT-3.5.")

# get the last result
print(chat[-1])

# print chat history
chat.print_log()
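The history mechanics above can be sketched as a list of role/content messages that `user()` and `assistant()` append to (`MiniChat` below is illustrative, not the package's real `Chat` class):

```python
class MiniChat:
    """Sketch of the chat-history mechanics shown above."""
    def __init__(self, msg=None):
        self.log = []
        if msg is not None:
            self.user(msg)

    def user(self, content):
        # Record a user turn
        self.log.append({"role": "user", "content": content})

    def assistant(self, content):
        # Record (or fake) an assistant turn
        self.log.append({"role": "assistant", "content": content})

    def __getitem__(self, idx):
        # Index into the history; chat[-1] is the last message
        return self.log[idx]

    def print_log(self):
        for m in self.log:
            print(f"{m['role']}: {m['content']}")

chat = MiniChat("What's your name?")
chat.assistant("My name is GPT-3.5.")
print(chat[-1])  # {'role': 'assistant', 'content': 'My name is GPT-3.5.'}
```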

License

This package is licensed under the MIT license. See the LICENSE file for more details.

Update log

  • Since version 0.2.0, the Chat type is used to handle data.
  • Since version 0.3.0, you can use different API keys to send requests.

Download files

Source distribution: openai_api_call-0.3.4.tar.gz (8.0 kB)

Built distribution: openai_api_call-0.3.4-py3-none-any.whl (8.7 kB)

File details

Details for the file openai_api_call-0.3.4.tar.gz.

File metadata

  • Download URL: openai_api_call-0.3.4.tar.gz
  • Size: 8.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.16

File hashes

Hashes for openai_api_call-0.3.4.tar.gz
Algorithm Hash digest
SHA256 de14357b45e1b23a438edc94a9dd74f6db21100cc8bb562abc0844a1067d3a56
MD5 863a3b1473bc39c4358d46fafcbacc3c
BLAKE2b-256 987d185f3a70170cb340e41d43cdeb32daa92d26897ef180b642fa8c65768a9a

File details

Details for the file openai_api_call-0.3.4-py3-none-any.whl.

File metadata

File hashes

Hashes for openai_api_call-0.3.4-py3-none-any.whl
Algorithm Hash digest
SHA256 27d8dd3f8a1f2b944edb298c4e037130f183ea43c320a8f6ad02a2385c345fcc
MD5 a418ff53843680ceefbed7b180b54f4c
BLAKE2b-256 16e4623523d5dd5e41491c4eabddc135d4c7b95d92cb4150e00f3d15a66391ee

See more details on using hashes here.
