async-openai
Unofficial Async Python client library for the OpenAI API based on Documented Specs
Features
- Asyncio-based, with both sync and async support, via httpx
- Supports all API endpoints
- Strongly typed validation of requests and responses with pydantic
- Models with transparent access to the raw response and object-based results
- Handles retries automatically through backoff
- Supports local and remote cloud object storage file handling asynchronously through file-io
  - Supports S3: `s3://bucket/path/to/file.txt`
  - Supports GCS: `gs://bucket/path/to/file.txt`
  - Supports Minio: `minio://bucket/path/to/file.txt`
- Supports limited cost tracking for `Completions` and `Edits` requests (when `stream` is not enabled)
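To illustrate what per-request cost tracking involves, here is a minimal, self-contained sketch (not the library's actual implementation): OpenAI bills per 1,000 tokens, so a request's cost can be derived from its reported token usage and the model's rate. The price table below is hypothetical.

```python
# Hypothetical per-1K-token prices (USD) for illustration only.
PRICE_PER_1K_TOKENS = {
    "text-davinci-003": 0.02,
    "text-curie-001": 0.002,
}

def estimate_cost(model: str, total_tokens: int) -> float:
    """Estimate a request's cost from its reported token usage."""
    return (total_tokens / 1000) * PRICE_PER_1K_TOKENS[model]

print(estimate_cost("text-davinci-003", 500))  # 0.01
```

Streamed responses generally do not include token-usage totals, which is why cost tracking is only available when `stream` is not enabled.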
Installation
```bash
# Install from stable
pip install async-openai

# Install from dev/latest
pip install git+https://github.com/GrowthEngineAI/async-openai.git
```
Quick Usage
```python
import asyncio
from async_openai import OpenAI, settings, CompletionResponse

# Environment variables are picked up by default;
# however, you can also set them explicitly.
# `api_key` - Your OpenAI API key. Env: [`OPENAI_API_KEY`]
# `url` - The URL of the OpenAI API. Env: [`OPENAI_URL`]
# `api_type` - The OpenAI API type. Env: [`OPENAI_API_TYPE`]
# `api_version` - The OpenAI API version. Env: [`OPENAI_API_VERSION`]
# `organization` - The OpenAI organization. Env: [`OPENAI_ORGANIZATION`]
# `proxies` - A dictionary of proxies to be used. Env: [`OPENAI_PROXIES`]
# `timeout` - The timeout in seconds to be used. Env: [`OPENAI_TIMEOUT`]
# `max_retries` - The number of retries to be used. Env: [`OPENAI_MAX_RETRIES`]

OpenAI.configure(
    api_key = "sk-XXXX",
    organization = "org-XXXX",
    debug_enabled = False,
)

# Alternatively, you can configure the settings object directly
# settings.configure(
#     api_key = "sk-XXXX",
#     organization = "org-XXXX",
# )

# [Sync] create a completion
# Results are returned as a CompletionResponse object
result: CompletionResponse = OpenAI.completions.create(
    prompt = 'say this is a test',
    max_tokens = 4,
    stream = True
)

# Print the completion text,
# concatenated from result['choices'][n]['text']
print(result.text)

# Print the number of choices returned
print(len(result))

# Get the cost consumption for the request
print(result.consumption)

# [Async] create a completion
# All async methods are prefixed with `async_`
result: CompletionResponse = asyncio.run(
    OpenAI.completions.async_create(
        prompt = 'say this is a test',
        max_tokens = 4,
        stream = True
    )
)
```
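The automatic retries mentioned above are delegated to the backoff library. As a rough, self-contained sketch of the idea (not the library's actual code), a retry loop with exponential delay and jitter looks like this; the function and parameter names here are illustrative:

```python
import random
import time

def retry_with_backoff(func, max_retries: int = 3, base_delay: float = 0.5):
    """Call `func`, retrying on exception with exponential backoff.
    Illustrative only: async-openai delegates this to `backoff`."""
    for attempt in range(max_retries + 1):
        try:
            return func()
        except Exception:
            if attempt == max_retries:
                raise
            # Exponential delay with jitter: ~0.5s, ~1s, ~2s, ...
            delay = base_delay * (2 ** attempt) * (1 + random.random() * 0.1)
            time.sleep(delay)

# Simulate a transient failure that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))  # prints "ok" after two retries
```

Delegating this to a dedicated library keeps transient network errors from surfacing to callers while capping the total number of attempts.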
Dependencies
The aim of this library is to be as lightweight as possible. It is built on top of the following libraries:
- httpx - sync and async HTTP requests
- pydantic - typed validation of requests and responses
- backoff - automatic retries
- file-io - async local and cloud object storage file handling