
AgenTools - Async Generator Tools for LLMs

A simple set of modules, wrappers, and utils for LLM-based assistants and agents built on the OpenAI API and function tools. It is useful for:

  • OpenAI API: Simple wrapper around the OpenAI API that provides mocked endpoints for easy, cost-free testing, accumulates the delta chunks from streamed responses into partial responses, and makes token counting/tracking easier.
  • Function Tools: Easily convert any (async) Python function into a function tool that the LLM can call, with automatic validation and retrying with error messages.
  • Structured Data: Easily define a Pydantic model to be generated by the LLM, also with validation and retries.
  • Assistants: Event-based architecture with async generators that yield events; iterate through them and handle only the events you care about, e.g. streaming the response, cancelling the generation early, or waiting for user input (human-in-the-loop) before continuing.
  • Copilots: Integrate right into an editor, with stateful system messages so the copilot sees the latest editor state, and function tools to interact with the editor.

Yet to come:

  • Agents: Autoprompting, self-prompting, chain-of-thought, sketchpads, memory management, planning, and more.
  • Multi-Agents: Communication channels, organization structuring, and more.

Quick Start

Installation

pip install agentools

Assistant and ChatGPT

A high-level interface to use ChatGPT or other LLM-based assistants! The default implementation of ChatGPT has:

  • a message history to remember the conversation so far (including the system prompt)
  • ability to use tools
  • efficient async streaming support
  • simple way to customize/extend/override the default behavior

from agentools import *

# empty chat history and default model (gpt-3.5)
model = ChatGPT()

You can then simply call the model as if it were a function, with a prompt:

await model("Hey!")

'Hello! How can I assist you today?'

As you can see, the model is async and simply returns the response as a string.

Both your prompt and the response are stored in the history, so you can keep calling the model with new prompts and it will remember the conversation so far.

await model("Can you repeat my last message please?")

'Of course! Your last message was "Hey!"'

model.messages.history

[{'role': 'user', 'content': 'Hey!'},
{'content': 'Hello! How can I assist you today?', 'role': 'assistant'},
{'role': 'user', 'content': 'Can you repeat my last message please?'},
{'content': 'Of course! Your last message was "Hey!"', 'role': 'assistant'}]

System prompt and more on MessageHistory

Notice that our model had no system prompt at the start. ChatGPT's constructor creates an empty chat history by default, but you can explicitly create a MessageHistory object and pass it to the constructor:

translate = ChatGPT(
    messages=SimpleHistory.system("Translate the user message to English")
)
# SimpleHistory.system(s) is just shorthand for SimpleHistory([msg(system=s)])

print(await translate("Ich liebe Katzen!"))
print(await translate("고양이랑 강아지 둘다 좋아!"))

I love cats!
I like both cats and dogs!

translate.messages.history

[{'role': 'system', 'content': 'Translate the user message to English'},
{'role': 'user', 'content': 'Ich liebe Katzen!'},
{'content': 'I love cats!', 'role': 'assistant'},
{'role': 'user', 'content': '고양이랑 강아지 둘다 좋아!'},
{'content': 'I like both cats and dogs!', 'role': 'assistant'}]

Notice that here, we're wasting tokens by remembering the chat history, since it's not really a conversation. There's also a simpler GPT class, which just resets the message history after each prompt:

translate = GPT(messages=SimpleHistory.system("Translate the user message to English"))

await translate("Ich liebe Katzen!")
await translate("고양이랑 강아지 둘다 좋아!")

translate.messages.history

[{'role': 'system', 'content': 'Translate the user message to English'}]

OpenAI API: changing the model and mocked API

You can set the default model in the constructor, or override it for each prompt:

# default model is now gpt-4 💸
model = ChatGPT(model="gpt-4")

# but you can override it for each prompt anyways
await model("Heyo!", model="mocked")

'Hello, world!'

As you can see, our wrapper provides a simple mocked "model", which simply returns "Hello, world!" for any prompt, with some simulated latency. This also works with streaming responses, and in either case, you won't be able to tell the difference between the real API and the mocked one.

There are more mocked models for your convenience:

  • mocked: always returns "Hello, world!"
  • mocked:TEST123: returns the string after the colon, e.g. "TEST123"
  • echo: returns the user prompt itself
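
A quick sketch of these in action (no real API calls are made, so no cost):

model = ChatGPT()
await model("anything", model="mocked")          # -> 'Hello, world!'
await model("anything", model="mocked:TEST123")  # -> 'TEST123'
await model("anything", model="echo")            # -> 'anything'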

Let's print all events to the console to take a peek at the event-based generator:

await model("Heya!", model="echo", event_logger=print)

[ResponseStartEvent]: prompt=Heya!, tools=None, model=echo, max_function_calls=100, openai_kwargs={}
[CompletionStartEvent]: call_index=0
[CompletionEvent]: completion=ChatCompletion(id='mock', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Heya!', role='assistant', function_call=None, tool_calls=None))], created=1715958919, model='mock', object='chat.completion', system_fingerprint=None, usage=None), call_index=0
[FullMessageEvent]: message=ChatCompletionMessage(content='Heya!', role='assistant', function_call=None, tool_calls=None), choice_index=0
[TextMessageEvent]: content=Heya!
[ResponseEndEvent]: content=Heya!

'Heya!'

Wow, quite a lot going on for a simple prompt! While it might seem like too many events, this offers a lot of flexibility and customizability.

You can easily handle only the events you are interested in. This is useful when, for example:

  • updating the frontend when streaming the responses,
  • cancelling the generation early,
  • or implementing human-in-the-loop for function calls.

For instance, the GPT class from above is as simple as:

# inside GPT's prompt handling (fragment; `self` is the assistant instance):
async for event in self.response_events(prompt, **openai_kwargs):
    match event:
        case self.ResponseEndEvent():
            await self.messages.reset()
            return event.content

This generator-based architecture is a good balance between flexibility and simplicity!
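
As another sketch, reusing the response_events generator and the event names from the log above (show_to_user here is a hypothetical UI callback), you could surface just the text and stop whenever you like:

async for event in model.response_events("Tell me a story"):
    match event:
        case model.TextMessageEvent():
            show_to_user(event.content)  # hypothetical frontend callback
    # breaking out of this loop at any point abandons the generation early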

While we won't go deeper into the low-level API in this quickstart, you can look at the advanced.ipynb notebook for more details.

Tools: @function_tool

You can turn any function into a tool usable by the model by decorating it with @function_tool:

@function_tool
def print_to_console(text: str) -> str:
    """
    Print text to console

    Args:
        text: text to print
    """
    print(text)
    return "success"  # the model will see the return value


# normal call
print_to_console("Hello from python!")

Hello from python!

'success'

You can still call the tool from Python as you normally would, and the model can use it too: simply pass it to the tools parameter in the constructor (as a default) or when prompting (as a one-off; see the sketch below).

model = ChatGPT(tools=print_to_console)
await model("Say 'hello from GPT' to console!")

hello from GPT

"I have printed 'hello from GPT' to the console."

To make the function a @function_tool, you must do the following:

  • All parameters must be type-annotated and JSON-serializable (e.g. str, int, float, bool, list, dict, None).
  • The return type should be str, or something that can be converted to str.
  • It must be documented with a '''docstring''', including each parameter (most formats are supported, e.g. Google-style, NumPy-style, Sphinx-style; see this overview).

Showing off some more goodies:

  • Async functions work seamlessly too; just don't forget to await them.
  • @fail_with_message(err) is a decorator that catches any exception thrown by the function and returns the error message instead. This is useful when you want to handle errors more gracefully than crashing the model. It also takes an optional logger, which defaults to print; any callable that takes a string will work, such as logger.error from the logging module.
  • Usually, the @function_tool decorator throws an assertion error if you forget to provide a description for the function or any of its parameters. If you really don't want to provide descriptions for some (or all) of them, perhaps because they are self-explanatory or you need to save tokens, you can turn off docstring parsing with @function_tool(require_doc=False), as in the example below. This is not recommended, but it's there if you need it.

Note that when a tool returns a descriptive error string, the model can read the error message and retry, increasing robustness!

import asyncio
import logging


@function_tool(name="Fibonacci", require_doc=False)
@fail_with_message("Error", logger=logging.error)
async def fib(n: int):
    if n < 0:
        raise ValueError("n must be >= 0")
    if n < 2:
        return n

    await asyncio.sleep(0.1)
    return sum(await asyncio.gather(fib(n - 1), fib(n - 2)))


await fib(-10)

ERROR:root:Tool call fib(-10) failed: n must be >= 0

'Error: n must be >= 0'

Toolkits: class Toolkit

Toolkits are collections of related function tools, especially useful when the tools share some state. They also keep that state bound to a single toolkit instance, rather than being global. To create a toolkit, simply subclass Toolkit and decorate its methods with @function_tool.

class Notepad(Toolkit):
    def __init__(self):
        super().__init__()
        self.content = "<Fill me in>"

    @function_tool
    def write(self, text: str):
        """
        Write text to the notepad

        Args:
            text: The text to write
        """
        self.content = text

    @function_tool(require_doc=False)
    def read(self):
        return self.content


notes = Notepad()
notes.write("Shhh... here's a secret: 42")
notes.read()

"Shhh... here's a secret: 42"

As before, simply pass the toolkit to the model. To combine multiple tools and toolkits, use the ToolList class:

model = ChatGPT(
    tools=ToolList(notes, print_to_console, fib),
)

await model("What's on my notepad?")

'The secret on your notepad is: 42'

await model(
    "Can you calculate the 8th fibonacci number, add it to the number in my notes, and write it? also print it to console as well.",
    event_logger=lambda x: print(x) if x.startswith("[Tool") else None,
)

[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_gDhzb8aiNmaJkUB6Z8tHZ7EU', function=Function(arguments='{"n": 8}', name='Fibonacci'), type='function'), ChatCompletionMessageToolCall(id='call_c9TjP8fWKTBrzie2TrzzOZeQ', function=Function(arguments='{}', name='read'), type='function')]
[ToolResultEvent]: result=Shhh... here's a secret: 42, tool_call=ChatCompletionMessageToolCall(id='call_c9TjP8fWKTBrzie2TrzzOZeQ', function=Function(arguments='{}', name='read'), type='function'), index=1
[ToolResultEvent]: result=21, tool_call=ChatCompletionMessageToolCall(id='call_gDhzb8aiNmaJkUB6Z8tHZ7EU', function=Function(arguments='{"n": 8}', name='Fibonacci'), type='function'), index=0
[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_EU9GTOVIQHeF2LeXxlVcjYlk', function=Function(arguments='{"text":"The sum of the 8th Fibonacci number and the secret on your notepad: 63"}', name='write'), type='function')]
[ToolResultEvent]: result=None, tool_call=ChatCompletionMessageToolCall(id='call_EU9GTOVIQHeF2LeXxlVcjYlk', function=Function(arguments='{"text":"The sum of the 8th Fibonacci number and the secret on your notepad: 63"}', name='write'), type='function'), index=0
[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_uIp1BSsPlmYc6dl0xgYX4s2h', function=Function(arguments='{"text":"The sum of the 8th Fibonacci number and the secret on your notepad: 63"}', name='print_to_console'), type='function')]
The sum of the 8th Fibonacci number and the secret on your notepad: 63
[ToolResultEvent]: result=success, tool_call=ChatCompletionMessageToolCall(id='call_uIp1BSsPlmYc6dl0xgYX4s2h', function=Function(arguments='{"text":"The sum of the 8th Fibonacci number and the secret on your notepad: 63"}', name='print_to_console'), type='function'), index=0

'I have written the sum of the 8th Fibonacci number (21) and the secret on your notepad (42), which totals to 63. It has also been printed to the console.'

notes.read()

'The sum of the 8th Fibonacci number and the secret on your notepad: 63'

Notice that since our write function doesn't return anything, the tool result defaults to None and our model gets confused! So don't forget to return an encouraging success message to make our model happy :)
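
For example, a friendlier write could simply be (same tool as above, with an explicit return value):

    @function_tool
    def write(self, text: str):
        """
        Write text to the notepad

        Args:
            text: The text to write
        """
        self.content = text
        return "Successfully wrote to the notepad!"  # confirmation instead of None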

Tool Previews

When streaming, and you're using function tools with long inputs, you might want to preview the tool call before its arguments are fully generated. With the help of the json_autocomplete package, the JSON arguments generated by the model can be parsed before they are complete, and a preview can be shown to the user.

@function_tool(require_doc=False)
async def create_slogan(title: str, content: str):
    print(f"\n\n[Final Slogan] {title}: {content}")
    return "Slogan created and shown to user! Simply tell the user that it was created."


@create_slogan.preview
async def preview(title: str = "", content: str = ""):
    assert isinstance(title, str) and isinstance(content, str)
    print(f"[Preview] {title}: {content}", flush=True)

model = ChatGPT(tools=create_slogan)
await model(
    "Create a 1-sentence slogan about how ducks can help with debugging.", stream=True
)

[Preview] :
[Preview] D:
[Preview] Ducks:
[Preview] Ducks and:
[Preview] Ducks and Debug:
[Preview] Ducks and Debugging:
[Preview] Ducks and Debugging:
[Preview] Ducks and Debugging: Qu
[Preview] Ducks and Debugging: Quack
[Preview] Ducks and Debugging: Quack your
[Preview] Ducks and Debugging: Quack your way
[Preview] Ducks and Debugging: Quack your way to
[Preview] Ducks and Debugging: Quack your way to bug
[Preview] Ducks and Debugging: Quack your way to bug-free
[Preview] Ducks and Debugging: Quack your way to bug-free code
[Preview] Ducks and Debugging: Quack your way to bug-free code with
[Preview] Ducks and Debugging: Quack your way to bug-free code with the
[Preview] Ducks and Debugging: Quack your way to bug-free code with the help
[Preview] Ducks and Debugging: Quack your way to bug-free code with the help of
[Preview] Ducks and Debugging: Quack your way to bug-free code with the help of our
[Preview] Ducks and Debugging: Quack your way to bug-free code with the help of our feather
[Preview] Ducks and Debugging: Quack your way to bug-free code with the help of our feathered
[Preview] Ducks and Debugging: Quack your way to bug-free code with the help of our feathered friends
[Preview] Ducks and Debugging: Quack your way to bug-free code with the help of our feathered friends.
[Preview] Ducks and Debugging: Quack your way to bug-free code with the help of our feathered friends.
[Final Slogan] Ducks and Debugging: Quack your way to bug-free code with the help of our feathered friends.

'I have created a slogan about how ducks can help with debugging.'

If you need more complex logic shared between the @preview and the final @function_tool, e.g. doing something at the start of the function call or sharing data between previews, it gets messy very fast!

Instead, you can use the @streaming_function_tool() decorator, whose function receives a single arg_stream parameter: an async generator that yields the partial arguments as they are streamed from the model. Simply iterate through it, and perform the actual function call once the iteration ends. The following is equivalent to the previous example:

Note that currently, you must pass the parameters explicitly as a json_schema. Soon, this may be parsed from the docstring as usual.

from pydantic import BaseModel


class Slogan(BaseModel):
    title: str
    content: str


@streaming_function_tool(json_schema=Slogan.model_json_schema())
async def create_slogan(arg_stream):
    print("Starting slogan creation...")

    async for args in arg_stream:
        title, content = args.get("title", ""), args.get("content", "")
        print(f'{args} -> "{title}", "{content}"')

    print(f"\n\n[Final Slogan] {title}: {content}")
    return "Slogan created and shown to user! Simply tell the user that it was created."

Structured Data

We can very easily define a Pydantic model to be generated by the LLM, with validation and retries:

from pydantic import BaseModel, Field


class Song(BaseModel):
    title: str
    genres: list[str] = Field(description="AT LEAST 3 genres!")
    duration: float


# normal use
Song(title="Hello", genres=["pop"], duration=3.5)

Song(title='Hello', genres=['pop'], duration=3.5)

Create a StructGPT object with your Pydantic model; prompting it will then always return a valid instance of the model, or raise an exception if it fails to generate one after the maximum number of retries. Your docstring and field descriptions are also visible to the model, so make sure to write good descriptions!

generate_song = StructGPT(Song)

await generate_song("Come up with an all-time best K-hiphop song")

Song(title='Eung Freestyle', genres=['K-HipHop', 'Rap', 'Hip-Hop'], duration=192.0)

Misc.

Streaming can be enabled as usual by passing stream=True when prompting; then handle the partial events as they come in. Check the Assistant class for the full list of events, including the streaming ones.
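
For example (a quick sketch; the mocked model simulates streamed chunks too, as noted above):

await model("Heya!", model="mocked", stream=True, event_logger=print)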

There are some other useful utilities in the utils module, such as:

  • tokens: for token counting
  • trackers: for transparent token tracking and prompt/response logging
