Wrapper library for openai to send events to the Imaginary Programming monitor

Project description

Imaginary Dev OpenAI wrapper


Features

  • Patches the openai library so that users can set an ip_api_key and ip_api_name on each request
  • Works out of the box with langchain

Get Started

OpenAI

At startup, before any openai calls, patch the library with the following code:

from im_openai import patch_openai
patch_openai()

Then, set the ip_api_key and ip_api_name for each request:

import openai

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Show me an emoji that matches the sport: soccer"}],
    ip_api_key="6a0ea966-8e4d-45ef-b7bf-9577ab73a60d",
    ip_api_name="sport-emoji",
    ip_template_params={"sport": "soccer"},
    ip_template_chat=[{"role": "user", "content": "Show me an emoji that matches the sport: {sport}"}],
)

Langchain

For langchain, you can either patch directly or use a context manager before setting up a chain:

Using a context manager (recommended):

from langchain.chains import LLMChain

from im_openai.langchain import prompt_watch_tracing

with prompt_watch_tracing("emojification", "sport-emoji"):
    chain = LLMChain(llm=...)
    chain.run({"name": "world"})

Patch directly:

from langchain.chains import LLMChain

from im_openai.langchain import (
    enable_prompt_watch_tracing,
    disable_prompt_watch_tracing,
)

old_tracer = enable_prompt_watch_tracing("emojification", "sport-emoji",
    template_chat=[{"role": "user", "content": "Show me an emoji that matches the sport: {sport}"}])
chain = LLMChain(llm=...)
chain.run({"name": "world"})

# optional, if you need to disable tracing later
disable_prompt_watch_tracing(old_tracer)

Additional Parameters

Each of the above APIs accepts the same additional parameters. The OpenAI API requires an ip_ prefix on each parameter.

  • template_chat / ip_template_chat: The chat template to use for the request. This is a list of dictionaries with the following keys:

    • role: The role of the speaker. Either "system", "user" or "ai".
    • content: The content of the message. This can be a string or a template string with {} placeholders.

    For example:

    [
      {"role": "ai", "content": "Hello, I'm {system_name}!"},
      {"role": "user", "content": "Hi {system_name}, I'm {user_name}!"}
    ]
    

    To represent an array of chat messages, use the artificial role "chat_history" with content set to the variable name in substitution format: [{"role": "chat_history", "content": "{prev_messages}"}]

  • template_text / ip_template_text: The text template to use for completion-style requests. This is a string or a template string with {} placeholders, e.g. "Hello, {user_name}!".

  • chat_id / ip_chat_id: The UUID of a "chat session". If the chat API is being used in a conversational context, provide the same chat id across requests so that the events are grouped together, in order. If not provided, this is left blank.
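As a sketch of how the template parameters relate to the template, substituting the params into a {}-placeholder template reproduces the exact prompt string sent to the model (an illustration only; the variable names here are hypothetical):

```python
# A completion-style template, as it would be passed via
# template_text / ip_template_text, with {}-style placeholders.
template_text = "Show me an emoji that matches the sport: {sport}"

# The corresponding parameters, as passed via ip_template_params.
template_params = {"sport": "soccer"}

# Substituting the params into the template reproduces the exact
# prompt string that was sent to the model.
final_prompt = template_text.format(**template_params)
print(final_prompt)  # Show me an emoji that matches the sport: soccer
```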

These parameters are only available in the patched OpenAI client:

  • ip_template_params: The parameters to use for template strings. This is a dictionary of key-value pairs. Note: This value is inferred in the Langchain wrapper.
  • ip_event_id: A unique UUID for a specific call. If not provided, one will be generated. Note: In the langchain wrapper, this value is inferred from the run_id.
  • ip_parent_event_id: The UUID of the parent event. If not provided, one will be generated. Note: In the langchain wrapper, this value is inferred from the parent_run_id.
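Putting the pieces together, a single patched chat call could carry every ip_ parameter at once. This is a sketch under stated assumptions: the api key is a placeholder, the ids are locally generated for illustration, and the commented-out create call needs valid OpenAI credentials to actually run.

```python
import uuid

# Placeholder ids for illustration. ip_chat_id groups the events of one
# conversation in order; ip_event_id uniquely identifies this call.
chat_id = str(uuid.uuid4())
event_id = str(uuid.uuid4())

# The full keyword set for a patched openai.ChatCompletion.create call,
# collected in one place:
request_kwargs = dict(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Show me an emoji that matches the sport: soccer"}],
    ip_api_key="YOUR-IP-API-KEY",  # placeholder value
    ip_api_name="sport-emoji",
    ip_template_params={"sport": "soccer"},
    ip_template_chat=[{"role": "user",
                       "content": "Show me an emoji that matches the sport: {sport}"}],
    ip_chat_id=chat_id,
    ip_event_id=event_id,
)

# completion = openai.ChatCompletion.create(**request_kwargs)
```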

Credits

This package was created with Cookiecutter (https://github.com/audreyr/cookiecutter) and the audreyr/cookiecutter-pypackage (https://github.com/audreyr/cookiecutter-pypackage) project template.

History

0.1.0 (2023-06-20)

  • First release on PyPI.

0.1.1 (2023-06-23)

  • add TemplateString helper and support for data / params

0.1.2 (2023-06-23)

  • add support for original template too

0.2.0 (2023-06-26)

  • add explicit support for passing the "prompt template text"

0.3.0 (2023-06-28)

  • add support for chat templates (as objects instead of arrays)

0.4.0 (2023-06-29)

  • switch event reporting to be async / non-blocking

0.4.1 (2023-06-29)

  • add utility for formatting langchain messages

0.4.2 (2023-06-29)

  • remove stray breakpoint

0.4.3 (2023-06-30)

  • pass along chat_id
  • attempt to auto-convert langchain prompt templates

0.4.4 (2023-06-30)

  • remove stray prints

0.5.0 (2023-07-06)

  • Add langchain callbacks handlers

0.6.0 (2023-07-10)

  • Handle duplicate callbacks, agents, etc

0.6.1 (2023-07-12)

  • Fix prompt retrieval in deep chains

0.6.2 (2023-07-13)

  • Handle cases where input values are not strings

0.6.3 (2023-07-18)

  • Better support for server-generated event ids (pre-llm sends event, post-llm re-uses the same id)
  • more tests for different kinds of templates

0.6.4

  • include temporary patched version of loads()

0.7.0

  • breaking change: move im_openai.langchain_util to im_openai.langchain
  • add support for injecting callbacks into all langchain calls using tracing hooks

0.7.1

  • Pass along model params to the server

0.7.3

  • add explicit support for api_key

0.8.0

  • switch to api_key, pretend project_key isn't even a thing
