
An LLM prompting framework for LLM agents

Project description

AgentKit: Flow Engineering with Graphs, not Coding

[Arxiv Paper] [PDF] [Docs]



AgentKit offers a unified framework for explicitly constructing a complex human "thought process" from simple natural language prompts. The user assembles chains of nodes, like stacking LEGO pieces, and these chains can be designed to explicitly enforce a naturally structured "thought process".

Different arrangements of nodes can represent different functionalities, allowing the user to combine various functionalities into multifunctional agents.

A basic agent can be implemented as simply as a list of prompts for the subtasks, and can therefore be designed and tuned by someone without any programming experience.


Installation

Installing the AgentKit stable version is as simple as:

pip install agentkit-llm

To install AgentKit with wandb:

pip install agentkit-llm[logging]

To install AgentKit with OpenAI and Claude LLM-API support:

pip install agentkit-llm[proprietary]

To install AgentKit with full built-in LLM-API support (including llama):

pip install agentkit-llm[all]

Otherwise, to install the cutting edge version from the main branch of this repo, run:

git clone https://github.com/holmeswww/AgentKit && cd AgentKit
pip install -e .

Getting Started

The basic building block in AgentKit is a node, containing a natural language prompt for a specific subtask. Nodes are linked together by dependency specifications, which determine the order of evaluation. Different arrangements of nodes can represent different logic and thought processes.

At inference time, AgentKit evaluates all nodes in specified order as a directed acyclic graph (DAG).
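Conceptually, dependency-ordered evaluation amounts to a topological sort of the DAG. A minimal sketch of the idea, independent of AgentKit's internals, using hypothetical node names:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each key maps to the set of nodes it depends on.
deps = {
    "pros_cons": set(),
    "outline": {"pros_cons"},
    "essay": {"pros_cons", "outline"},
}

# static_order() yields nodes so that every dependency is evaluated first.
order = list(TopologicalSorter(deps).static_order())
print(order)  # pros_cons, then outline, then essay
```

AgentKit performs this ordering for you; the user only declares edges.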

import agentkit
from agentkit import Graph, BaseNode

import agentkit.llm_api

LLM_API_FUNCTION = agentkit.llm_api.get_query("gpt-4-turbo")

LLM_API_FUNCTION.debug = True # Set to False to enable API-level error handling and retries

graph = Graph()

subtask1 = "What are the pros and cons for using LLM Agents for Game AI?" 
node1 = BaseNode(subtask1, subtask1, graph, LLM_API_FUNCTION, agentkit.compose_prompt.BaseComposePrompt(), verbose=True)
graph.add_node(node1)

subtask2 = "Give me an outline for an essay titled 'LLM Agents for Games'." 
node2 = BaseNode(subtask2, subtask2, graph, LLM_API_FUNCTION, agentkit.compose_prompt.BaseComposePrompt(), verbose=True)
graph.add_node(node2)

subtask3 = "Now, write a full essay on the topic 'LLM Agents for Games'."
node3 = BaseNode(subtask3, subtask3, graph, LLM_API_FUNCTION, agentkit.compose_prompt.BaseComposePrompt(), verbose=True)
graph.add_node(node3)

# add dependencies between nodes
graph.add_edge(subtask1, subtask2)
graph.add_edge(subtask1, subtask3)
graph.add_edge(subtask2, subtask3)

result = graph.evaluate() # outputs a dictionary of prompt, answer pairs

LLM_API_FUNCTION can be any LLM API function that takes msg:list and shrink_idx:int, and returns llm_result:str and usage:dict. Here msg is a prompt (OpenAI format by default), and shrink_idx is the index at which the prompt should be shortened in case of context-length overflow.

AgentKit tracks token usage of each node through the LLM_API_FUNCTION with:

usage = {
    'prompt': <prompt token count>,
    'completion': <completion token count>,
}
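Any callable with this contract can be plugged in. A minimal sketch of a custom LLM API function (a stub that returns a canned answer instead of calling a real API; the function name and word-based "token" counts are illustrative only):

```python
def my_llm_api(msg: list, shrink_idx: int):
    """msg is a list of OpenAI-style {'role': ..., 'content': ...} dicts.
    On context overflow, a real implementation would shorten msg[shrink_idx]
    and retry; this stub simply echoes the last message."""
    llm_result = "stub answer to: " + msg[-1]["content"]
    usage = {
        "prompt": sum(len(m["content"].split()) for m in msg),  # rough count
        "completion": len(llm_result.split()),
    }
    return llm_result, usage

answer, usage = my_llm_api([{"role": "user", "content": "Hello there"}], shrink_idx=0)
```

A function like this can be passed anywhere the examples above use LLM_API_FUNCTION.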

Built-in LLM-API

The built-in agentkit.llm_api functions require installing with the [proprietary] or [all] option. See the installation guide for details.

Currently, the built-in API supports OpenAI and Anthropic, see https://pypi.org/project/openai/ and https://pypi.org/project/anthropic/ for details.

To use the OpenAI models, set the environment variables OPENAI_KEY and OPENAI_ORG. Alternatively, you can put the OpenAI key and organization in the first 2 lines of ~/.openai/openai.key.

To use the Azure OpenAI models, set environment variables AZURE_OPENAI_API_KEY, AZURE_OPENAI_API_VERSION, AZURE_OPENAI_ENDPOINT, and AZURE_DEPLOYMENT_NAME. Alternatively, you can store the Azure OpenAI API key, API version, Azure endpoint, and deployment name in the first 4 lines of ~/.openai/azure_openai.key.

To use the Anthropic models, set the environment variable ANTHROPIC_KEY. Alternatively, you can put the Anthropic key in the 3rd line of ~/.openai/openai.key.
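For example, in a POSIX shell (all values below are placeholders for your own credentials):

```shell
export OPENAI_KEY="sk-..."        # OpenAI API key
export OPENAI_ORG="org-..."       # OpenAI organization ID
export ANTHROPIC_KEY="sk-ant-..." # Anthropic API key
```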

To use Ollama models, see https://github.com/ollama/ollama for installation instructions. Then set the environment variables OLLAMA_URL and OLLAMA_TOKENIZER_PATH, or store OLLAMA_TOKENIZER_PATH and OLLAMA_URL in the first 2 lines of ~/.ollama/ollama_model.info.

LLM_API_FUNCTION = agentkit.llm_api.get_query("ollama-llama3")

Using AgentKit without Programming Experience

First, follow the installation guide to install AgentKit with the [all] option.

Then, set the environment variables OPENAI_KEY and OPENAI_ORG to your OpenAI API key and organization ID.

Finally, run the following to invoke the command line interface (CLI):

git clone https://github.com/holmeswww/AgentKit && cd AgentKit
cd examples/prompt_without_coding
python generate_graph.py

Node Components

Inside each node (as shown to the left of the figure), AgentKit runs a built-in flow that preprocesses the input (Compose), queries the LLM with a preprocessed input and prompt $q_v$, and optionally postprocesses the output of the LLM (After-query).

To support advanced capabilities such as branching, AgentKit offers an API to dynamically modify the DAG at inference time (as shown to the right of the figure). Nodes and edges can be dynamically added or removed based on the LLM responses at ancestor nodes.
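The idea can be illustrated with plain dictionaries (this is a conceptual sketch, not AgentKit's actual API; the node names and the branching condition are made up):

```python
# node -> list of dependencies
graph = {"classify": [], "summarize": ["classify"]}

def maybe_branch(graph, answers):
    """Graft extra nodes into the DAG based on an ancestor's answer."""
    if answers["classify"] == "code":
        graph["lint"] = ["classify"]       # add a new node...
        graph["summarize"].append("lint")  # ...and re-wire a downstream edge
    return graph

# Pretend the LLM answered "code" at the 'classify' node.
graph = maybe_branch(graph, {"classify": "code"})
```

In AgentKit, such edits are made through the graph API rather than by mutating dictionaries, but the effect is the same: the structure evaluated downstream depends on answers produced upstream.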

Commonly Asked Questions

Q: I'm using the default agentkit.llm_api, and graph.evaluate() seems to be stuck.

A: The LLM_API function catches and retries all API errors by default. Set verbose=True on each node to see which node you are stuck on, and LLM_API_FUNCTION.debug=True to see the underlying error.

Citing AgentKit

@inproceedings{agentkit,
  title = {AgentKit: Flow Engineering with Graphs, not Coding},
  author = {Wu, Yue and Fan, Yewen and Min, So Yeon and Prabhumoye, Shrimai and McAleer, Stephen and Bisk, Yonatan and Salakhutdinov, Ruslan and Li, Yuanzhi and Mitchell, Tom},
  year = {2024},
  booktitle = {COLM},
}




Download files

Download the file for your platform.

Source Distribution

agentkit_llm-0.1.8.1.tar.gz (27.6 kB)

Uploaded Source

File details

Details for the file agentkit_llm-0.1.8.1.tar.gz.

File metadata

  • Download URL: agentkit_llm-0.1.8.1.tar.gz
  • Upload date:
  • Size: 27.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.8.19

File hashes

Hashes for agentkit_llm-0.1.8.1.tar.gz:

  • SHA256: 213fb26089f8b13558f683876a48cbf5c47c3e290e878e8b6a0753891a5abac6
  • MD5: 75a056325db650f9a6ff00354ca4b520
  • BLAKE2b-256: 52ca86b3a799cfadc629d6a849dcdff65f943e8471b2a6be9969d6fb31682551

See the PyPI documentation for more details on using hashes.
