llama-index agent introspective integration

Project description

LlamaIndex Agent Integration: Introspective Agent

Introduction

This agent integration package includes three main agent classes:

  1. IntrospectiveAgentWorker
  2. ToolInteractiveReflectionAgentWorker
  3. SelfReflectionAgentWorker

These classes are used together in order to build an "Introspective" Agent which performs tasks while applying the reflection agentic pattern. In other words, an introspective agent produces an initial response to a task and then performs reflection and subsequently correction to produce an improved response to the task.

The IntrospectiveAgentWorker

This is the agent that is responsible for performing the task while utilizing the reflection agentic pattern. It does so by merely delegating the work to two other agents in a purely deterministic fashion.

Specifically, when given a task, this agent first delegates it to a MainAgentWorker, which generates the initial response to the query. This initial response is then passed to the ReflectiveAgentWorker, which performs the reflection and subsequent correction. Optionally, the MainAgentWorker can be skipped if none is provided; in that case, the user's input query is assumed to contain the original response that needs to go through reflection and correction.
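The two-step delegation described above can be sketched as follows. This is an illustrative sketch only, not the library's actual control flow; the function and parameter names here are assumptions for exposition.

```python
# Illustrative sketch (assumed names): the IntrospectiveAgentWorker's
# deterministic delegation amounts to two steps.
def run_introspective_task(task, reflective_agent, main_agent=None):
    # Step 1: generate an initial response, or, when no main agent is
    # provided, treat the task input itself as the initial response.
    initial = main_agent(task) if main_agent is not None else task
    # Step 2: hand the initial response to the reflective agent for
    # reflection and correction.
    return reflective_agent(initial)
```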

The Reflection Agent Workers

These subclasses of the BaseAgentWorker are responsible for performing the reflection and correction iterations of responses (starting with the initial response from the MainAgentWorker). This package contains two reflection agent workers: ToolInteractiveReflectionAgentWorker and SelfReflectionAgentWorker.

The ToolInteractiveReflectionAgentWorker

This agent worker implements the CRITIC reflection framework introduced by Gou et al. (ICLR 2024). (source: https://arxiv.org/pdf/2305.11738)

CRITIC stands for Correcting with tool-interactive critiquing. It works by reflecting on a response to a task/query using external tools (e.g., fact checking with a Google search tool) and subsequently using the critique to generate a corrected response. It cycles through tool-interactive reflection and correction until a stopping criterion has been met or a maximum number of iterations has been reached.
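The reflect-then-correct cycle can be sketched as a simple loop. This is a minimal, hedged sketch of the pattern only; critique_fn, correct_fn, and is_satisfactory are hypothetical stand-ins for the tool-interactive critique and LLM correction steps, not functions from the package.

```python
# Hypothetical sketch of a CRITIC-style loop; critique_fn, correct_fn,
# and is_satisfactory are illustrative stand-ins.
def critic_loop(response, critique_fn, correct_fn, is_satisfactory, max_iterations=3):
    for _ in range(max_iterations):
        critique = critique_fn(response)  # reflect using external tools
        if is_satisfactory(critique):  # stopping criterion met
            break
        response = correct_fn(response, critique)  # generate corrected response
    return response
```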

The SelfReflectionAgentWorker

This agent worker performs reflection without any tools on a given response and subsequently performs correction. Cycles of reflection and correction are executed until a satisfactory correction has been generated or a maximum number of cycles has been reached. To perform reflection, this agent uses a user-specified LLM together with a PydanticProgram to generate a structured output containing the LLM's reflection on the current response. After reflection, the same user-specified LLM is used again, this time with another PydanticProgram, to generate a structured output containing a corrected version of the current response informed by the previously generated reflection.
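The structured reflect-then-correct cycle can be sketched as below. The dataclasses stand in for the structured outputs the PydanticPrograms would produce, and the reflect/correct callables stand in for the LLM calls; every name here is an illustrative assumption, not a class from the package.

```python
from dataclasses import dataclass

# Illustrative stand-ins for the structured outputs; not the package's classes.
@dataclass
class Reflection:
    feedback: str
    is_done: bool  # True once the response is judged satisfactory

@dataclass
class Correction:
    corrected_response: str

def self_reflection_cycle(response, reflect, correct, max_cycles=3):
    for _ in range(max_cycles):
        reflection = reflect(response)  # structured reflection on current response
        if reflection.is_done:
            break
        # Correct the response, guided by the prior reflection.
        response = correct(response, reflection).corrected_response
    return response
```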

Usage

To build an introspective agent, we make use of the typical agent usage pattern, where we construct an IntrospectiveAgentWorker and wrap it with an AgentRunner. (Note that this can be done conveniently with the .as_agent() method of any AgentWorker class.)

IntrospectiveAgent using SelfReflectionAgentWorker

from llama_index.agent.introspective import IntrospectiveAgentWorker
from llama_index.agent.introspective import SelfReflectionAgentWorker
from llama_index.llms.openai import OpenAI
from llama_index.agent.openai import OpenAIAgentWorker

verbose = True
self_reflection_agent_worker = SelfReflectionAgentWorker.from_defaults(
    llm=OpenAI("gpt-4-turbo-preview"),
    verbose=verbose,
)
main_agent_worker = OpenAIAgentWorker.from_tools(
    tools=[], llm=OpenAI("gpt-4-turbo-preview"), verbose=verbose
)

introspective_worker_agent = IntrospectiveAgentWorker.from_defaults(
    reflective_agent_worker=self_reflection_agent_worker,
    main_agent_worker=main_agent_worker,
    verbose=True,
)

introspective_agent = introspective_worker_agent.as_agent(verbose=verbose)
introspective_agent.chat("...")

IntrospectiveAgent using ToolInteractiveReflectionAgentWorker

Unlike with self-reflection, here we need to define an additional agent worker, namely a critique agent worker, that performs the reflection (or critique) using a specified set of tools.

from llama_index.llms.openai import OpenAI
from llama_index.agent.openai import OpenAIAgentWorker
from llama_index.agent.introspective import (
    IntrospectiveAgentWorker,
    ToolInteractiveReflectionAgentWorker,
)
from llama_index.core.agent import FunctionCallingAgentWorker

verbose = True
critique_tools = []
critique_agent_worker = FunctionCallingAgentWorker.from_tools(
    tools=critique_tools, llm=OpenAI("gpt-3.5-turbo"), verbose=verbose
)

correction_llm = OpenAI("gpt-4-turbo-preview")
tool_interactive_reflection_agent_worker = (
    ToolInteractiveReflectionAgentWorker.from_defaults(
        critique_agent_worker=critique_agent_worker,
        critique_template=(
            "..."
        ),  # template containing instructions for performing critique
        correction_llm=correction_llm,
        verbose=verbose,
    )
)


introspective_worker_agent = IntrospectiveAgentWorker.from_defaults(
    reflective_agent_worker=tool_interactive_reflection_agent_worker,
    main_agent_worker=None,  # if None, the user's input is assumed to be the initial response
    verbose=verbose,
)
introspective_agent = introspective_worker_agent.as_agent(verbose=verbose)
introspective_agent.chat("...")
