AgenTools - Async Generator Tools for LLMs
A simple set of modules, wrappers and utils that are essential for LLM-based assistants and agents using the OpenAI API and function tools. It is useful for:
- OpenAI API: Simple wrapper for the OpenAI API to provide mocked endpoints for easy testing without costing money, accumulating the delta chunks from streamed responses into partial responses, and easier token counting/tracking.
- Function Tools: Easily convert any (async) python function into a function tool that the LLM model can call, with automatic validation and retrying with error messages.
- Structured Data: Easily define a Pydantic model that can be generated by the LLM model, also with validation and retries.
- Assistants: Event-based architecture with async generators that yield events that you can iterate through and handle only the events you care about, such as whether you want to stream the response or not, cancel the generation prematurely, or wait for user input (human-in-the-loop) before continuing, etc.
- Copilots: Integrate right into an editor with stateful system messages to allow the copilot to see the latest state of the editor and function tools to interact with the editor.
Yet to come:
- Agents: Autoprompting, self-prompting, chain-of-thought, sketchpads, memory management, planning, and more.
- Multi-Agents: Communication channels, organization structuring, and more.
Quick Start
Installation
pip install agentools
Assistant and ChatGPT
A high-level interface to use ChatGPT or other LLM-based assistants! The default implementation of ChatGPT has:
- a message history to remember the conversation so far (including the system prompt)
- ability to use tools
- efficient async streaming support
- simple way to customize/extend/override the default behavior
from agentools import *
# empty chat history and default model (gpt-3.5)
model = ChatGPT()
You can then simply call the model as if it was a function, with a prompt:
await model("Hey!")
'Hello! How can I assist you today?'
As you can see, the model is async and it simply returns the response as a string.
Both your prompt and the response are stored in the history, so you can keep calling the model with new prompts and it will remember the conversation so far.
await model("Can you repeat my last message please?")
'Of course! You said, "Hey!"'
model.messages.history
[{'role': 'user', 'content': 'Hey!'},
{'content': 'Hello! How can I assist you today?', 'role': 'assistant'},
{'role': 'user', 'content': 'Can you repeat my last message please?'},
{'content': 'Of course! You said, "Hey!"', 'role': 'assistant'}]
System prompt and more on MessageHistory
Notice that our model has no system prompt in the beginning. ChatGPT's constructor creates an empty chat history by default, but you can explicitly create a MessageHistory object and pass it to the constructor:
translate = ChatGPT(
messages=SimpleHistory.system("Translate the user message to English")
)
# SimpleHistory.system(s) is just shorthand for SimpleHistory([msg(system=s)])
print(await translate("Ich liebe Katzen!"))
print(await translate("고양이랑 강아지 둘다 좋아!"))
I love cats!
I like both cats and dogs!
translate.messages.history
[{'role': 'system', 'content': 'Translate the user message to English'},
{'role': 'user', 'content': 'Ich liebe Katzen!'},
{'content': 'I love cats!', 'role': 'assistant'},
{'role': 'user', 'content': '고양이랑 강아지 둘다 좋아!'},
{'content': 'I like both cats and dogs!', 'role': 'assistant'}]
Notice that here, we're wasting tokens by remembering the chat history, since it's not really a conversation. There's a simple GPT class, which simply resets the message history after each prompt:
translate = GPT(messages=SimpleHistory.system("Translate the user message to English"))
await translate("Ich liebe Katzen!")
await translate("고양이랑 강아지 둘다 좋아!")
translate.messages.history
[{'role': 'system', 'content': 'Translate the user message to English'}]
OpenAI API: changing the model and mocked API
You can set the default model in the constructor, or override it for each prompt:
# default model is now gpt-4 💸
model = ChatGPT(model="gpt-4")
# but you can override it for each prompt anyways
await model("Heyo!", model="mocked")
'Hello, world!'
As you can see, our wrapper provides a simple mocked "model", which simply returns "Hello, world!" for any prompt, with some simulated latency. This also works with streaming responses, and in either case, you won't be able to tell the difference between the real API and the mocked one.
There are more mocked models for your convenience:
- mocked: always returns "Hello, world!"
- mocked:TEST123: returns the string after the colon, e.g. "TEST123"
- echo: returns the user prompt itself
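As a quick sanity check, here is a sketch exercising each mocked model from the list above (the expected strings follow directly from their descriptions):
# mocked models cost nothing and need no API key
model = ChatGPT()
assert await model("anything", model="mocked") == "Hello, world!"
assert await model("anything", model="mocked:TEST123") == "TEST123"
assert await model("ping", model="echo") == "ping"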
Let's print all events to the console to take a peek at the event-based generator:
await model("Heya!", model="echo", event_logger=print)
[ResponseStartEvent]: prompt=Heya!, tools=None, model=echo, max_function_calls=100, openai_kwargs={}
[CompletionStartEvent]: call_index=0
[CompletionEvent]: completion=ChatCompletion(id='mock', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Heya!', role='assistant', function_call=None, tool_calls=None))], created=1721161834, model='mock', object='chat.completion', service_tier=None, system_fingerprint=None, usage=None), call_index=0
[FullMessageEvent]: message=ChatCompletionMessage(content='Heya!', role='assistant', function_call=None, tool_calls=None), choice_index=0
[TextMessageEvent]: content=Heya!
[ResponseEndEvent]: content=Heya!
'Heya!'
Wow, quite a lot going on for a simple prompt! While it might seem like too many events, this offers a lot of flexibility and customizability.
You can easily handle only the events you are interested in, which is useful when, e.g.:
- updating the frontend when streaming the responses,
- cancelling the generation early,
- or implementing human-in-the-loop for function calls.
For instance, the GPT class from above is as simple as:
async for event in self.response_events(prompt, **openai_kwargs):
    match event:
        case self.ResponseEndEvent():
            await self.messages.reset()
            return event.content
This generator-based architecture is a good balance between flexibility and simplicity!
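As another sketch, you can handle more of the events from outside the class, using only event names that appear in the logs above (ask_and_log is a hypothetical helper, not part of the library):
async def ask_and_log(model: ChatGPT, prompt: str) -> str:
    # iterate the event generator; unhandled events are simply ignored
    async for event in model.response_events(prompt):
        match event:
            case model.ToolCallsEvent():
                print(f"tools requested: {event.tool_calls}")  # e.g. notify a frontend
            case model.ResponseEndEvent():
                return event.content  # the final response text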
While we won't go deeper into the low-level API in this quickstart, you can look at the advanced.ipynb notebook for more details.
Tools: @function_tool
You can turn any function into a tool usable by the model by decorating it with @function_tool:
@function_tool
def print_to_console(text: str) -> str:
    """
    Print text to console

    Args:
        text: text to print
    """
    print(text)
    return "success"  # the model will see the return value
# normal call
print_to_console("Hello from python!")
Hello from python!
'success'
You can use the tool from Python as you normally would, and the model will also be able to use it, simply by passing it to the tools parameter during init (as a default) or when prompting (as a one-off).
model = ChatGPT(tools=print_to_console)
await model("Say 'hello from GPT' to console!")
hello from GPT
'The message "hello from GPT" has been successfully printed to the console.'
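As a one-off instead (a sketch; the per-prompt tools override mirrors the per-prompt model override shown earlier, and the ResponseStartEvent log above shows a per-call tools field):
model = ChatGPT()  # no default tools
await model("Say 'hello from GPT' to console!", tools=print_to_console)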
To make the function a @function_tool, you must do the following:
- The parameters must be type-annotated, and all parameters must be JSON-serializable (e.g. str, int, float, bool, list, dict, None, etc.).
- The return type should be a str or something that can be converted to a str.
- It must be documented with a '''docstring''', including each parameter (most formats are supported, e.g. Google-style, NumPy-style, Sphinx-style, etc.; see this overview).
Showing off some more goodies:
- Even async functions work seamlessly; just don't forget to await them.
- @fail_with_message(err) is a decorator that catches any exception thrown by the function and returns the error message instead. This is useful when you want to handle errors more gracefully than just crashing the model. It also takes an optional logger, which defaults to the print function, but any callable that takes a string will work, such as logger.error from the logging module.
- Usually, the @function_tool decorator will throw an assertion error if you forget to provide a description for the function or any of its parameters. If you really don't want to provide descriptions for some (or all), maybe because they're self-explanatory or you need to save tokens, you can explicitly turn off docstring parsing by passing @function_tool(require_doc=False). This is not recommended, but it's there if you need it.
Note that by returning descriptive error strings, you let the model read the error message and retry, increasing robustness!
import asyncio
import logging

@function_tool(name="Fibonacci", require_doc=False)
@fail_with_message("Error", logger=logging.error)
async def fib(n: int):
    if n < 0:
        raise ValueError("n must be >= 0")
    if n < 2:
        return n
    await asyncio.sleep(0.1)
    return sum(await asyncio.gather(fib(n - 1), fib(n - 2)))
await fib(-10)
ERROR:root:Tool call fib(-10) failed: n must be >= 0
'Error: n must be >= 0'
Toolkits: class Toolkit
Toolkits are collections of related function tools, especially useful when the tools share some state. They also keep that state bound to a single instance of the toolkit rather than to a global variable.
To create a toolkit, simply subclass Toolkit and decorate its methods with @function_tool.
class Notepad(Toolkit):
    def __init__(self):
        super().__init__()
        self.content = "<Fill me in>"

    @function_tool
    def write(self, text: str):
        """
        Write text to the notepad

        Args:
            text: The text to write
        """
        self.content = text

    @function_tool(require_doc=False)
    def read(self):
        return self.content
notes = Notepad()
notes.write("Shhh... here's a secret: 42")
notes.read()
"Shhh... here's a secret: 42"
As before, simply pass the toolkit to the model. To use multiple tools and toolkits, simply put them in a list:
model = ChatGPT(
tools=[notes, print_to_console, fib],
)
await model("What's on my notepad?")
'On your notepad, it says: "Shhh... here's a secret: 42"'
await model(
    "Can you calculate the 8th fibonacci number, add it to the number in my notes, and write it? also print it to console as well.",
    event_logger=lambda x: print(x) if x.startswith("[Tool") else None,
    parallel_tool_calls=False,
)
[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_wxaisBbFMYRa0XNcTnP9MH1b', function=Function(arguments='{"n":8}', name='Fibonacci'), type='function')]
[ToolResultEvent]: result=21, tool_call=ChatCompletionMessageToolCall(id='call_wxaisBbFMYRa0XNcTnP9MH1b', function=Function(arguments='{"n":8}', name='Fibonacci'), type='function'), index=0
[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_gt5ZnA5v2VJL5R2gyPeHRN0a', function=Function(arguments='{"text":"The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63"}', name='write'), type='function')]
[ToolResultEvent]: result=None, tool_call=ChatCompletionMessageToolCall(id='call_gt5ZnA5v2VJL5R2gyPeHRN0a', function=Function(arguments='{"text":"The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63"}', name='write'), type='function'), index=0
[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_ErYx6g7gpVTnLsqg59oxHI9C', function=Function(arguments='{"text":"The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63"}', name='print_to_console'), type='function')]
The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63
[ToolResultEvent]: result=success, tool_call=ChatCompletionMessageToolCall(id='call_ErYx6g7gpVTnLsqg59oxHI9C', function=Function(arguments='{"text":"The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63"}', name='print_to_console'), type='function'), index=0
'I have written on your notepad. The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63. I have also printed it to the console.'
notes.read()
'The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63'
Notice how, since our write function doesn't return anything, the result defaults to None and our model gets confused! So don't forget to return an encouraging success message to make our model happy :)
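For example, a small fix to the Notepad toolkit from above (a sketch; the confirmation string is arbitrary):
class Notepad(Toolkit):
    def __init__(self):
        super().__init__()
        self.content = "<Fill me in>"

    @function_tool
    def write(self, text: str):
        """
        Write text to the notepad

        Args:
            text: The text to write
        """
        self.content = text
        return "Successfully saved to notepad!"  # the model sees this instead of None

    @function_tool(require_doc=False)
    def read(self):
        return self.content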
Tool Previews
When using streaming with function tools that take long inputs, you might want to show a preview of the tool call before the arguments are fully generated. With the help of the json_autocomplete package, the JSON arguments generated by the model can be parsed before they are complete, and a preview can be shown to the user.
@function_tool(require_doc=False)
async def create_slogan(title: str, content: str):
    print(f"\n\n[Final Slogan] {title}: {content}")
    return "Slogan created and shown to user! Simply tell the user that it was created."

@create_slogan.preview
async def preview(title: str = "", content: str = ""):
    assert isinstance(title, str) and isinstance(content, str)
    print(f"[Preview] {title}: {content}", flush=True)
model = ChatGPT(tools=create_slogan)
await model(
    "Create a 1-sentence slogan about how ducks can help with debugging.", stream=True
)
[Preview] :
[Preview] D:
[Preview] Ducks:
[Preview] Ducks and:
[Preview] Ducks and Debug:
[Preview] Ducks and Debugging:
[Preview] Ducks and Debugging:
[Preview] Ducks and Debugging: Qu
[Preview] Ducks and Debugging: Quack
[Preview] Ducks and Debugging: Quack your
[Preview] Ducks and Debugging: Quack your code
[Preview] Ducks and Debugging: Quack your code bugs
[Preview] Ducks and Debugging: Quack your code bugs away
[Preview] Ducks and Debugging: Quack your code bugs away with
[Preview] Ducks and Debugging: Quack your code bugs away with the
[Preview] Ducks and Debugging: Quack your code bugs away with the help
[Preview] Ducks and Debugging: Quack your code bugs away with the help of
[Preview] Ducks and Debugging: Quack your code bugs away with the help of a
[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging
[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck
[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by
[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by your
[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by your side
[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by your side.
[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by your side.
[Final Slogan] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by your side.
'I have created a slogan about how ducks can help with debugging!'
If you need more coherent logic shared between the @preview and the final @function_tool, e.g. doing something at the start of the function call, sharing data between previews, etc., it gets messy very fast! Instead, you can use the @streaming_function_tool() decorator, which receives a single arg_stream parameter: an async generator that yields the partial arguments as they are streamed from the model. You simply iterate through it and perform the actual function call at the end of the iteration. The following is equivalent to the previous example:
Note that currently, you must pass the parameters as a schema (either a JSON Schema or a Pydantic BaseModel).
from pydantic import BaseModel, Field

class Slogan(BaseModel):
    """A slogan for a product"""

    title: str = Field(description="MUST BE EXACTLY 3 WORDS!")
    content: str = Field(description="less than 10 words")

@streaming_function_tool(schema=Slogan)
async def create_slogan(arg_stream):
    print("Starting slogan creation...")
    async for args in arg_stream:
        title, content = args.get("title", ""), args.get("content", "")
        print(f'{args} -> "{title}", "{content}"', flush=True)

    print(f"\n\n[Final Slogan] {title}: {content}")
    return "Slogan created and shown to user! Simply tell the user that it was created."
model = ChatGPT(tools=create_slogan)
await model(
    "Create a 1-sentence slogan about how ducks can help with debugging.", stream=True
)
Starting slogan creation...
{'': None} -> "", ""
{'title': None} -> "None", ""
{'title': ''} -> "", ""
{'title': 'Debug'} -> "Debug", ""
{'title': 'Debugging'} -> "Debugging", ""
{'title': 'Debugging Ducks'} -> "Debugging Ducks", ""
{'title': 'Debugging Ducks', '': None} -> "Debugging Ducks", ""
{'title': 'Debugging Ducks', 'content': None} -> "Debugging Ducks", "None"
{'title': 'Debugging Ducks', 'content': ''} -> "Debugging Ducks", ""
{'title': 'Debugging Ducks', 'content': 'Qu'} -> "Debugging Ducks", "Qu"
{'title': 'Debugging Ducks', 'content': 'Quack'} -> "Debugging Ducks", "Quack"
{'title': 'Debugging Ducks', 'content': 'Quack through'} -> "Debugging Ducks", "Quack through"
{'title': 'Debugging Ducks', 'content': 'Quack through errors'} -> "Debugging Ducks", "Quack through errors"
{'title': 'Debugging Ducks', 'content': 'Quack through errors effortlessly'} -> "Debugging Ducks", "Quack through errors effortlessly"
{'title': 'Debugging Ducks', 'content': 'Quack through errors effortlessly.'} -> "Debugging Ducks", "Quack through errors effortlessly."
{'title': 'Debugging Ducks', 'content': 'Quack through errors effortlessly.'} -> "Debugging Ducks", "Quack through errors effortlessly."
{'title': 'Debugging Ducks', 'content': 'Quack through errors effortlessly.'} -> "Debugging Ducks", "Quack through errors effortlessly."
[Final Slogan] Debugging Ducks: Quack through errors effortlessly.
'I have created a slogan: "Debugging Ducks - Quack through errors effortlessly."'
Structured Data
We can just as easily define a Pydantic model to be generated by the LLM, again with validation and retries:
from enum import StrEnum
from pydantic import BaseModel, Field

class Language(StrEnum):
    EN = "en"
    DE = "de"
    KO = "ko"

class Song(BaseModel):
    title: str
    genres: list[str] = Field(description="AT LEAST 3 genres!")
    duration: float
    language: Language
    has_lyrics: bool

# normal use
Song(title="Hello", genres=["pop"], duration=3.5, language=Language.EN, has_lyrics=True)
Song(title='Hello', genres=['pop'], duration=3.5, language=<Language.EN: 'en'>, has_lyrics=True)
Create a StructGPT object with your Pydantic model, and prompting it will always return a valid instance of the model, or raise an exception if it fails to generate one after the maximum number of retries. Your docstring and field descriptions will also be visible to the model, so make sure to write good descriptions!
generate_song = StructGPT(Song)
await generate_song("Come up with an all-time best K-hiphop song")
Song(title='Eternal Sunshine', genres=['Hip-hop', 'R&B', 'K-pop'], duration=240.0, language=<Language.KO: 'ko'>, has_lyrics=True)
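Since validation errors are fed back to the model for a retry, you can also enforce constraints with plain Pydantic validators. A minimal sketch (StrictSong and its validator are illustrative additions, not part of the library):
from pydantic import field_validator

class StrictSong(Song):
    @field_validator("genres")
    @classmethod
    def at_least_three_genres(cls, genres: list[str]) -> list[str]:
        # on a failed attempt, this error message is what the model sees before retrying
        if len(genres) < 3:
            raise ValueError("please list at least 3 genres")
        return genres

generate_song = StructGPT(StrictSong)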
Misc.
Streaming can be enabled as usual by passing stream=True when prompting; you can then handle the partial events as they come in. Check the Assistant class for a list of events, including the ones for streaming.
There are some other useful utilities in the utils module, such as:
- tokens: for token counting
- trackers: for transparent token tracking and prompt/response logging