
🍰 openai-python-cache

A thin wrapper around the OpenAI Python bindings for caching responses.

Motivation

I'm experimenting locally with a large-ish dataset that gets injected into GPT prompts. The responses aren't perfect, and occasionally I have to tweak some of my data and re-run. Because each run iterates over the entire dataset, I end up making API calls for results that were already fine.

This package solves the issue by caching OpenAI responses in a local SQLite3 database. Only ChatCompletion is supported at this time, because it's the only API I use.
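
The underlying idea is simple. Here is a minimal sketch of it (the class and method names below are illustrative only, not the library's actual internals): serialize the request parameters into a deterministic key, and store the JSON response against that key in SQLite.

import hashlib
import json
import sqlite3

class TinySqliteCache:
    """Illustrative sketch only -- not this library's actual internals."""

    def __init__(self, path="openai_cache.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, response TEXT)"
        )

    def make_key(self, params: dict) -> str:
        # Identical requests serialize identically, so they share one row.
        blob = json.dumps(params, sort_keys=True).encode("utf-8")
        return hashlib.sha256(blob).hexdigest()

    def get(self, params: dict):
        # Return the cached response, or None on a cache miss.
        row = self.conn.execute(
            "SELECT response FROM cache WHERE key = ?", (self.make_key(params),)
        ).fetchone()
        return json.loads(row[0]) if row else None

    def set(self, params: dict, response: dict):
        # Save (or overwrite) the response for this set of request params.
        self.conn.execute(
            "INSERT OR REPLACE INTO cache (key, response) VALUES (?, ?)",
            (self.make_key(params), json.dumps(response)),
        )
        self.conn.commit()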

This is a quick and dirty solution. Ideally I'd go a level lower and inject this behaviour directly into the requestor, but I haven't had time to figure that part out (yet?)!

Installation

# Using pip:
$ pip install openai-python-cache

# Using poetry:
$ poetry add openai-python-cache

Usage

import os
import openai
from openai_python_cache.api import ChatCompletion
from openai_python_cache.provider import Sqlite3CacheProvider

openai.api_key = os.environ.get("OPENAI_API_KEY")

# Create a cache provider
cache_provider = Sqlite3CacheProvider()

# Use the ChatCompletion class from `openai_python_cache`
completion = ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Tell the world about the ChatGPT API in the style of a pirate.",
        }
    ],
    # Inject the cache provider. Requests will NOT be cached if this is not
    # provided!
    cache_provider=cache_provider,
)

print(completion)

Demo

import os
import time
import openai
from openai_python_cache.api import ChatCompletion
from openai_python_cache.provider import Sqlite3CacheProvider

openai.api_key = os.environ.get("OPENAI_API_KEY")

cache_provider = Sqlite3CacheProvider()

params = {
    'model': "gpt-3.5-turbo",
    'messages': [
        {
            "role": "user",
            "content": "Testing cache!",
        }
    ]
}

# First request, cache miss. This will result in an API call to OpenAI, and
# the response will be saved to cache.
c0start = time.time()
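# Note: the cache provider can also be passed as the first positional
# argument, as done here, rather than as the `cache_provider=` keyword
# used in the Usage example above.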
ChatCompletion.create(cache_provider, **params)
c0end = time.time() - c0start
print(f"First request is a cache miss. It took {c0end} seconds!")
# >>> First request is a cache miss. It took 1.7009928226470947 seconds!

# Second request, cache hit. This will NOT result in an API call to OpenAI.
# The response will be served from cache.
c1start = time.time()
ChatCompletion.create(cache_provider, **params)
c1end = time.time() - c1start
print(f"Second request is a cache hit. It took {c1end} seconds!")
# >>> Second request is a cache hit. It took 0.00015616416931152344 seconds!
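
A corollary worth noting: assuming the cache key is derived from the full set of request parameters (an assumption about the internals, not documented behaviour), changing anything in the request, even one character of the prompt, should miss the cache and trigger a fresh API call. A sketch continuing the demo above:

# Third request with a tweaked prompt. Assuming the cache key covers all
# request parameters, this misses the cache and calls the API again.
params["messages"][0]["content"] = "Testing cache, again!"

c2start = time.time()
ChatCompletion.create(cache_provider, **params)
c2end = time.time() - c2start
print(f"Changed prompt is a cache miss. It took {c2end} seconds!")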

