
turbo-chat

Idiomatic way to build ChatGPT apps using async generators in Python


About

The ChatGPT API uses a new input format called ChatML. In OpenAI's Python client, it looks something like this:

messages = [
    {"role": "system", "content": "Greet the user!"},
    {"role": "user", "content": "Hello world!"},
]
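
For context, this is roughly how such a list is sent with the pre-1.0 OpenAI Python client (the model name here is only illustrative):

import openai

# Plain OpenAI client call (pre-1.0 interface); model name is an example.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
print(response["choices"][0]["message"]["content"])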

The idea here is to incrementally build the messages using an async generator and then use them to generate completions. Async generators are an incredibly versatile and simple abstraction for this kind of work, and they compose together very easily.

# Equivalent turbo-chat generator
async def example():
    yield System(content="Greet the user!")
    yield User(content="Hello World!")

    # To run generation, just yield Generate(),
    # the lib will take care of correctly running the app, and
    # return the value back here.
    output = yield Generate()
    print(output.content)

See the more detailed example below.

Installation

pip install turbo-chat

Example

from typing import AsyncGenerator, Union

from turbo_chat import (
    turbo,
    System,
    User,
    Assistant,
    GetInput,
    Generate,
    run,
)

# Get user
async def get_user(id):
    return {"zodiac": "pisces"}

# Set user zodiac mixin
# Notice that no `@turbo()` decorator is used here
async def set_user_zodiac(user_id: int):

    user_data: dict = await get_user(user_id)
    zodiac: str = user_data["zodiac"]

    yield User(content=f"My zodiac sign is {zodiac}")


# Horoscope app
@turbo(temperature=0.0)
async def horoscope(user_id: int):

    yield System(content="You are a fortune teller")

    # Yield from mixin
    async for output in set_user_zodiac(user_id):
        yield output

    # Prompt runner to ask for user input
    input = yield GetInput(message="What do you want to know?")

    # Yield the input
    yield User(content=input)

    # Generate (overriding the temperature)
    value = yield Generate(temperature=0.9)

# Let's run this
app: AsyncGenerator[Union[Assistant, GetInput], str] = horoscope(user_id=1)

_input = None
while not (result := await app.run(_input)).done:
    if result.needs_input:
        # Prompt user with the input message
        _input = input(result.content)
        continue

    print(result.content)

# Output
# >>> What do you want to know? Tell me my fortune
# >>> As an AI language model, I cannot predict the future or provide supernatural fortune-telling. However, I can offer guidance and advice based on your current situation and past experiences. Is there anything specific you would like me to help you with?
#

Custom memory

You can also customize how the messages are persisted in between executions.

from turbo_chat import turbo, BaseMemory

class RedisMemory(BaseMemory):
    """Implement BaseMemory methods here"""

    async def setup(self, **kwargs) -> None:
        ...

    async def append(self, item) -> None:
        ...

    async def clear(self) -> None:
        ...


# Now use the memory in a turbo_chat app
@turbo(memory_class=RedisMemory)
async def app():
    ...
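
For illustration, here is a minimal sketch of a concrete memory that just keeps messages in a Python list. It assumes that setup, get, append and clear (the methods shown in this README) make up the interface BaseMemory expects; the class name is hypothetical:

from turbo_chat import BaseMemory

class ListMemory(BaseMemory):
    """Hypothetical in-process memory backed by a plain list."""

    async def setup(self, **kwargs) -> None:
        # Called before the app runs; initialize the backing store.
        self.messages: list = []

    async def get(self) -> list:
        # Return every message persisted so far.
        return self.messages

    async def append(self, item) -> None:
        # Persist a single message.
        self.messages.append(item)

    async def clear(self) -> None:
        # Drop all stored messages.
        self.messages = []

It plugs in the same way as RedisMemory above: @turbo(memory_class=ListMemory).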

Get access to the memory object directly (just declare an additional param).

from typing import Any

@turbo()
async def app(some_param: Any, memory: BaseMemory):

    messages = await memory.get()
    ...

Generate a response to use internally but don't yield it downstream

@turbo()
async def example():
    yield System(content="You are a good guy named John")
    yield User(content="What is your name?")
    result = yield Generate(forward=False)

    yield User(content="How are you doing?")
    result = yield Generate()

b = example()
results = [output async for output in b]

assert len(results) == 1

Add a simple in-memory cache

You can also subclass the BaseCache class to create a custom cache.

cache = SimpleCache()

@turbo(cache=cache)
async def example():
    yield System(content="You are a good guy named John")
    yield User(content="What is your name?")
    result = yield Generate()

b = example()
results = [output async for output in b]

assert len(cache.cache) == 1
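
To subclass BaseCache itself, a rough sketch might look like the following. The method names and signatures here are assumptions rather than turbo-chat's documented interface, so check the BaseCache source before relying on them:

from typing import Any, Optional

from turbo_chat import BaseCache  # assumed import path

class DictCache(BaseCache):
    """Hypothetical dict-backed cache; get/set names are assumptions, not the library's API."""

    def __init__(self) -> None:
        self.cache: dict = {}

    async def get(self, key: Any) -> Optional[Any]:
        # Return a previously cached completion, or None on a miss.
        return self.cache.get(key)

    async def set(self, key: Any, value: Any) -> None:
        # Store a completion under its key.
        self.cache[key] = value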
