
turbo-chat

Idiomatic way to build ChatGPT apps using async generators in Python

The ChatGPT API uses a new input format called ChatML. In OpenAI's Python client, it is used like this:

messages = [
    {"role": "system", "content": "Greet the user!"},
    {"role": "user", "content": "Hello world!"},
]
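
For context, here is a minimal sketch of how that list is sent for completion with the pre-1.0 openai package (the model name is illustrative):

import openai

# Assumes openai.api_key is already configured (pre-1.0 client API)
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
print(response["choices"][0]["message"]["content"])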

The idea here is to build the message list incrementally with an async generator, then use it to generate completions. Async generators are a simple, versatile abstraction for this kind of work, and they compose easily (see the sketch just after the snippet below).

# Equivalent turbo-chat generator
async def example():
    yield System(content="Greet the user!")
    yield User(content="Hello World!")

    # To run generation, just yield Generate(),
    # the lib will take care of correctly running the app, and
    # return the value back here.
    output = yield Generate()
    print(output.content)
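
Because these are plain async generators under the hood, they compose naturally. A minimal sketch using undecorated generators (greet here is hypothetical; the full example below uses the library's run() helper for the same job):

async def greet():
    yield System(content="Greet the user!")

async def example():
    # Splice another generator's messages into this one
    async for message in greet():
        yield message

    yield User(content="Hello World!")
    output = yield Generate()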

See more detailed example below.

Installation

pip install turbo-chat

Example

from typing import AsyncGenerator

from turbo_chat import (
    turbo,
    System,
    User,
    Assistant,
    GetUserInput,
    Generate,
    run,
)

# Stubbed user lookup
async def get_user(id: int) -> dict:
    return {"zodiac": "pisces"}

# Set user zodiac mixin
@turbo()
async def set_user_zodiac(context: dict):

    user_id: int = context["user_id"]
    user_data: dict = await get_user(user_id)
    zodiac: str = user_data["zodiac"]

    yield User(content=f"My zodiac sign is {context['zodiac']}")


# Horoscope app
@turbo()
async def horoscope(context: dict):

    yield System(content="You are a fortune teller")

    async for (output, _) in run(set_user_zodiac()):
        yield output

    # Prompt runner to ask for user input
    input = yield GetUserInput(message="What do you want to know?")

    # Yield the input
    yield User(content=input)

    # Generate (overriding the temperature)
    value = yield Generate(settings={"temperature": 0.9})


# Let's run this
app: AsyncGenerator[Assistant | GetUserInput, str] = horoscope({"user_id": 1})

_input = None
while True:
    result, done = await run(app, _input)

    if isinstance(result, GetUserInput):
        _input = input(result.message)
        continue

    if isinstance(result, Assistant):
        print(result.content)

    if done:
        break
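
Note that the driver loop above uses top-level await, which only works in an async-aware REPL or a notebook. In a plain script, wrap it in an entry point; a minimal sketch:

import asyncio

async def main() -> None:
    app = horoscope({"user_id": 1})

    _input = None
    while True:
        result, done = await run(app, _input)

        if isinstance(result, GetUserInput):
            _input = input(result.message)
            continue

        if isinstance(result, Assistant):
            print(result.content)

        if done:
            break

asyncio.run(main())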

You can also customize how messages are persisted between runs.

from turbo_chat import turbo, BaseMemory

class RedisMemory(BaseMemory):
    """Implement BaseMemory methods here"""

    async def init(self, context) -> None:
        ...

    async def append(self, item) -> None:
        ...

    async def clear(self) -> None:
        ...


# Now use the memory in a turbo_chat app
@turbo(memory=RedisMemory())
async def app():
    ...
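
As a concrete illustration, here is a hypothetical Redis-backed implementation, assuming redis-py's asyncio client and that memory items are JSON-serializable dicts (the key scheme and method semantics here are assumptions, not part of the library's contract):

import json

import redis.asyncio as redis

from turbo_chat import BaseMemory

class RedisMemory(BaseMemory):
    """Sketch: persist messages in a Redis list."""

    async def init(self, context) -> None:
        # Assumption: one list per user, keyed from the app context
        self.client = redis.Redis()
        self.key = f"turbo_chat:messages:{context['user_id']}"

    async def append(self, item) -> None:
        # Assumption: items are JSON-serializable
        await self.client.rpush(self.key, json.dumps(item))

    async def clear(self) -> None:
        await self.client.delete(self.key)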

