turbo-chat
Idiomatic way to build ChatGPT apps using async generators in Python
About
The ChatGPT API uses a new input format called ChatML. In OpenAI's Python client, the format is used something like this:
messages = [
    {"role": "system", "content": "Greet the user!"},
    {"role": "user", "content": "Hello world!"},
]
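For reference, these messages are then passed to the chat completions endpoint; with the pre-1.0 openai client that looks roughly like this:

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
print(response["choices"][0]["message"]["content"])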
The idea here is to incrementally build the messages using an async generator and then use that to generate completions. Async generators are an incredibly versatile and simple abstraction for this kind of work, and they compose together very easily.
# Equivalent turbo-chat generator
async def example():
    yield System(content="Greet the user!")
    yield User(content="Hello World!")

    # To run generation, just yield Generate();
    # the lib will take care of correctly running the app and
    # will return the value back here.
    output = yield Generate()
    print(output.content)
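Composing generators is just a matter of re-yielding their messages, the same pattern the mixin in the detailed example below uses. A minimal sketch:

async def greet():
    yield System(content="Greet the user!")

async def example():
    # Splice in every message produced by another generator
    async for message in greet():
        yield message

    yield User(content="Hello World!")
    output = yield Generate()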
See a more detailed example below.
Installation
pip install turbo-chat
Example
from typing import AsyncGenerator, Union

from turbo_chat import (
    turbo,
    System,
    User,
    Assistant,
    GetInput,
    Generate,
    run,
)
# Get user
async def get_user(id):
    return {"zodiac": "pisces"}
# Set user zodiac mixin
# Notice that no `@turbo()` decorator is used here
async def set_user_zodiac(user_id: int):
    user_data: dict = await get_user(user_id)
    zodiac: str = user_data["zodiac"]

    yield User(content=f"My zodiac sign is {zodiac}")
# Horoscope app
@turbo(temperature=0.0)
async def horoscope(user_id: int):
    yield System(content="You are a fortune teller")

    # Yield from the mixin
    async for output in set_user_zodiac(user_id):
        yield output

    # Prompt runner to ask for user input
    input = yield GetInput(message="What do you want to know?")

    # Yield the input
    yield User(content=input)

    # Generate (overriding the temperature)
    value = yield Generate(temperature=0.9)
# Let's run this
app: AsyncGenerator[Union[Assistant, GetInput], str] = horoscope({"user_id": 1})

_input = None
while not (result := await app.run(_input)).done:
    if result.needs_input:
        # Prompt the user with the input message
        _input = input(result.content)
        continue

    print(result.content)
# Output
# >>> What do you want to know? Tell me my fortune
# >>> As an AI language model, I cannot predict the future or provide supernatural fortune-telling. However, I can offer guidance and advice based on your current situation and past experiences. Is there anything specific you would like me to help you with?
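Since the loop above awaits app.run, it has to live inside a coroutine. A minimal driver script, assuming nothing beyond what the example already defines:

import asyncio

async def main():
    app = horoscope({"user_id": 1})
    _input = None

    while not (result := await app.run(_input)).done:
        if result.needs_input:
            _input = input(result.content)
            continue

        print(result.content)

asyncio.run(main())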
Custom memory
You can also customize how messages are persisted in between executions.
from turbo_chat import turbo, BaseMemory

class RedisMemory(BaseMemory):
    """Implement BaseMemory methods here"""

    async def setup(self, **kwargs) -> None:
        ...

    async def append(self, item) -> None:
        ...

    async def clear(self) -> None:
        ...

# Now use the memory in a turbo_chat app
@turbo(memory_class=RedisMemory)
async def app():
    ...
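For illustration, a fleshed-out Redis-backed version might look roughly like the sketch below. This is a sketch, not the library's API: it assumes redis.asyncio (redis-py 4.2+), JSON-serializable message dicts, and a get() method on the memory (the direct-access example below awaits memory.get()); the actual BaseMemory contract may differ.

import json

import redis.asyncio as aioredis

class RedisListMemory(BaseMemory):
    """Sketch: persist messages in a Redis list."""

    async def setup(self, **kwargs) -> None:
        # Connection details here are hypothetical; adjust to your deployment
        self.redis = aioredis.from_url(kwargs.get("url", "redis://localhost"))
        self.key = kwargs.get("key", "turbo_chat:messages")

    async def append(self, item) -> None:
        # Assumes `item` is a JSON-serializable message dict
        await self.redis.rpush(self.key, json.dumps(item))

    async def get(self) -> list:
        raw = await self.redis.lrange(self.key, 0, -1)
        return [json.loads(entry) for entry in raw]

    async def clear(self) -> None:
        await self.redis.delete(self.key)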
To get access to the memory object directly, just declare an additional parameter:
@turbo()
async def app(some_param: Any, memory: BaseMemory):
    messages = await memory.get()
    ...
Generate a response to use internally without yielding it downstream by passing forward=False; only the output of the plain Generate() reaches the consumer:
@turbo()
async def example():
    yield System(content="You are a good guy named John")
    yield User(content="What is your name?")
    result = yield Generate(forward=False)

    yield User(content="How are you doing?")
    result = yield Generate()

b = example()
results = [output async for output in b]
assert len(results) == 1
Add a simple in-memory cache

You can also subclass the BaseCache class to create a custom cache.
from turbo_chat import SimpleCache  # assuming SimpleCache is exported by turbo_chat

cache = SimpleCache()

@turbo(cache=cache)
async def example():
    yield System(content="You are a good guy named John")
    yield User(content="What is your name?")
    result = yield Generate()

b = example()
results = [output async for output in b]
assert len(cache.cache) == 1
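A custom cache might follow the same shape. The sketch below stores completions in a plain dict; the get/set method names are assumptions for illustration, not the documented BaseCache interface.

from turbo_chat import BaseCache  # assumed export

class DictCache(BaseCache):
    """Sketch of a dict-backed cache; method names are assumed."""

    def __init__(self):
        self.cache: dict = {}

    async def get(self, key):
        return self.cache.get(key)

    async def set(self, key, value) -> None:
        self.cache[key] = value

@turbo(cache=DictCache())
async def app():
    ...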
Latest Changes
- fix: Fix cache not saving to memory. PR #63 by @creatorrr.
- version: 0.3.11. PR #62 by @creatorrr.
- fix: Fix truncation. PR #61 by @creatorrr.
- version: 0.3.10. PR #60 by @creatorrr.
- version: 0.3.9. PR #59 by @creatorrr.
- x/fix cache args. PR #58 by @creatorrr.
- version: 0.3.7. PR #57 by @creatorrr.
- f/support multi choice. PR #56 by @creatorrr.
- feat: Support multiple choices selector; n > 1. PR #55 by @creatorrr.
- feat: Support positional arguments for running apps. PR #54 by @creatorrr.
- feat: Make get_encoding faster. PR #53 by @creatorrr.
- feat: Add ttl support to redis_cache. PR #52 by @creatorrr.
- feat: Support for parsing completions. PR #51 by @creatorrr.
- version: 0.3.6. PR #50 by @creatorrr.
- feat: Add RedisCache implementation. PR #49 by @creatorrr.
- fix: Fix json array. PR #48 by @creatorrr.
- version: 0.3.5. PR #47 by @creatorrr.
- fix: Fix how function signature and docstring was being parsed. PR #46 by @creatorrr.
- version: 0.3.4. PR #45 by @creatorrr.
- feat: Add @completion decorator. PR #44 by @creatorrr.
- version: 0.3.3. PR #43 by @creatorrr.
- feat: Memory expects memory_args, Assistant no longer yields automatically. PR #42 by @creatorrr.
- v/0.3.2. PR #41 by @creatorrr.
- version: 0.3.1. PR #40 by @creatorrr.
- fix: Fix scratchpad parsing. PR #39 by @creatorrr.
- version: 0.3.0. PR #38 by @creatorrr.
- x/more toolbot fixes. PR #37 by @creatorrr.
- version: 0.2.13. PR #36 by @creatorrr.
- v/0.2.12. PR #35 by @creatorrr.
- fix: toolbot additional_info parameter. PR #34 by @creatorrr.
- version: 0.2.11. PR #33 by @creatorrr.
- f/json tool bot. PR #32 by @creatorrr.
- version: 0.2.10. PR #31 by @creatorrr.
- feat: Add sticky messages. PR #30 by @creatorrr.
- version: 0.2.9. PR #29 by @creatorrr.
- feat: Add .init method to TurboGenWrapper. PR #28 by @creatorrr.
- doc: Generate FAQ using autodoc. PR #27 by @creatorrr.
- version: 0.2.8. PR #26 by @creatorrr.
- feat: Add .run() method to the TurboGen object. PR #25 by @creatorrr.
- version: 0.2.7. PR #24 by @creatorrr.
- feat: self-ask bot. PR #23 by @creatorrr.
- version: 0.2.6. PR #22 by @creatorrr.
- feat: Add summary memory. PR #21 by @creatorrr.
- version: 0.2.5. PR #20 by @creatorrr.
- version: 0.2.4. PR #19 by @creatorrr.
- refactor: Move truncation logic to a separate memory class. PR #18 by @creatorrr.
- version: 0.2.3. PR #17 by @creatorrr.
- f/memory improvements. PR #16 by @creatorrr.
- version: 0.2.2. PR #15 by @creatorrr.
- f/tool bot. PR #14 by @creatorrr.
- v/0.2.1. PR #13 by @creatorrr.
- feat: Add count_tokens. PR #12 by @creatorrr.
- Update README.md. PR #11 by @creatorrr.
File details

Details for the file turbo_ai-0.3.12.tar.gz.

File metadata

- Download URL: turbo_ai-0.3.12.tar.gz
- Upload date:
- Size: 803.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.5.1 CPython/3.10.11 Linux/5.15.0-71-generic

File hashes

Algorithm | Hash digest
---|---
SHA256 | 9b4ad64dc14a803f492c5f661103d68ef3200ade6658c3924923a5cf9b9f7cd0
MD5 | 4aed76cff9693d16476e3b8cd59e434e
BLAKE2b-256 | 797ce4324016de77c11c7eda8df229f266a00601e0cda5118c8a8df5206d805e
File details

Details for the file turbo_ai-0.3.12-py3-none-any.whl.

File metadata

- Download URL: turbo_ai-0.3.12-py3-none-any.whl
- Upload date:
- Size: 813.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.5.1 CPython/3.10.11 Linux/5.15.0-71-generic

File hashes

Algorithm | Hash digest
---|---
SHA256 | 89a5a8e458eb91947c3b8cffa60d16e3886b4d52c0430a96b797e7e20c771914
MD5 | e55a8be9aa8830b00739d06381221977
BLAKE2b-256 | d1bf7e217272a4e42e1415f0d57176f2cdf650ace55294fa2fc5b0617a9efb3e