
The Agent Lifecycle Toolkit (ALTK) is a library of components to help agent builders improve their agent with minimal integration effort and setup.


Delivering plug-and-play, framework-agnostic technology to boost agents' performance

What is ALTK?

The Agent Lifecycle Toolkit helps agent builders create better-performing agents through components that integrate easily into existing agent pipelines. Each component addresses a gap in a specific stage of the agent lifecycle, such as reasoning, tool-calling errors, or output guardrails.

(Figure: stages of the agent lifecycle)

Installation

To use ALTK, install agent-lifecycle-toolkit with your package manager, e.g. pip:

pip install agent-lifecycle-toolkit

More detailed installation instructions are available in the docs.

Getting Started

Below is an end-to-end example you can run right away. It builds a langgraph agent with a weather tool and a component that checks tool responses for silent errors. Refer to the examples folder for this example and others. Running it additionally requires the langgraph and langchain-openai packages, along with three environment variables.
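Before running the example, install the extra dependencies and set the three environment variables listed in the code comments. The values below are illustrative placeholders; the provider and model names match those used in the example:

```shell
# Install the example's extra dependencies alongside ALTK
pip install agent-lifecycle-toolkit langgraph langchain-openai

# Environment variables read by the example
export OPENAI_API_KEY="<your OpenAI API key>"
export LLM_PROVIDER="openai.sync"
export MODEL_NAME="o4-mini"
```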

import random

from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool
from typing_extensions import Annotated
from langgraph.prebuilt import InjectedState

from altk.post_tool_reflection_toolkit.silent_review.silent_review import SilentReviewForJSONDataComponent
from altk.post_tool_reflection_toolkit.core.toolkit import SilentReviewRunInput, Outcome
from altk.toolkit_core.core.toolkit import AgentPhase

# Ensure that the following environment variables are set:
# OPENAI_API_KEY = *** openai api key ***
# LLM_PROVIDER = openai.sync
# MODEL_NAME = o4-mini

@tool
def get_weather(city: str, state: Annotated[dict, InjectedState]) -> str:
    """Get weather for a given city."""
    if random.random() >= 0.500:
        # Simulates a silent error from an external service
        result = {"weather": "Weather service is under maintenance."}
    else:
        result = {"weather": f"It's sunny and 70F in {city}!"}

    # Use SilentReview component to check if it's a silent error
    review_input = SilentReviewRunInput(messages=state["messages"], tool_response=result)
    reviewer = SilentReviewForJSONDataComponent()
    review_result = reviewer.process(data=review_input, phase=AgentPhase.RUNTIME)

    if review_result.outcome != Outcome.ACCOMPLISHED:
        # Agent should retry the tool call when a silent error is detected
        return "Silent error detected, retry the get_weather tool!"
    else:
        # Return the weather string so the return value matches the annotation
        return result["weather"]

agent = create_react_agent(
    model="openai:o4-mini",  
    tools=[get_weather],
    prompt="You are a helpful assistant"  
)

# Runs the agent
result = agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
) 
# Show the final answer; it should not say the service is under maintenance.
print(result["messages"][-1].content)

Features

Lifecycle Step | Component | Description
Reasoning | SpotLight | Lets users emphasize important spans within their prompt and steers the LLM's attention toward those spans. An inference-time hook; it involves no training or changes to model weights.
Pre-tool | Refraction | Verifies the syntax of tool-call sequences and repairs errors that would cause execution failures.
Pre-tool | SPARC | Evaluates tool calls before execution, identifying potential issues and suggesting corrections or transformations across multiple validation layers.
Post-tool | Code Generation for JSON Processing | When tools return complex JSON objects as responses, uses LLM-based Python code generation to process those responses and extract the relevant information.
Post-tool | Silent Error Review | A prompt-based approach to identifying silent errors in tool calls (errors that produce no visible or explicit error message); determines whether the tool response is relevant, accurate, and complete given the user's query.
Post-tool | RAG Repair | Given a failing tool call, attempts to use an LLM to repair the call, drawing on domain documents such as documentation or troubleshooting examples via RAG. Requires a set of related documents to ingest.
Output Check | Policy Guard | Checks whether the agent's output adheres to the policy statement and repairs the output if it doesn't.
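For intuition on the Silent Error Review component: a silent error is a call that succeeds at the transport level but carries no usable answer. ALTK detects these with a prompt-based review; the naive keyword heuristic below is only a hypothetical illustration of the failure mode, reusing the maintenance payload from the Getting Started example:

```python
def looks_like_silent_error(tool_response: dict) -> bool:
    """Hypothetical heuristic: flag responses that succeeded but carry no answer."""
    text = str(tool_response.get("weather", "")).lower()
    # Phrases that signal the service replied without actually answering
    return any(phrase in text for phrase in ("maintenance", "unavailable", "try again later"))

# The maintenance payload from the example is flagged; a real answer is not
print(looks_like_silent_error({"weather": "Weather service is under maintenance."}))  # True
print(looks_like_silent_error({"weather": "It's sunny and 70F in SF!"}))  # False
```

A real implementation, like ALTK's, asks an LLM whether the response actually addresses the user's query, which generalizes far beyond fixed keywords.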

Documentation

Check out ALTK's documentation for details on installation, usage, concepts, and more (Coming Soon).

ALTK supports multiple LLM providers and two methods of configuring them. For more information, see the LLMClient documentation.

Examples

Go hands-on with our examples.

Integrations

To further accelerate your AI application development, check out ALTK's native integrations with popular frameworks and tools (Coming Soon).

Get Help and Support

Please feel free to connect with us using the discussion section.

Contributing Guidelines

ALTK is open-source and we ❤️ contributions.

To help build ALTK, take a look at our contribution guidelines.

Bugs

We use GitHub Issues to manage bugs. Before filing a new issue, please check to make sure it hasn't already been logged.

Code of Conduct

This project and everyone participating in it are governed by the Code of Conduct. By participating, you are expected to uphold this code. Please read the full text so that you know which actions will and will not be tolerated.

Legal notice

All content in these repositories, including code, has been provided by IBM under the associated open source software license, and IBM is under no obligation to provide enhancements, updates, or support. IBM developers produced this code as an open source project (not as an IBM product); IBM makes no assertions as to its level of quality or security and will not be maintaining this code going forward.

License

The ALTK codebase is under Apache 2.0 license. For individual model usage, please refer to the model licenses in the original packages.

Contributors

Thanks to all of our contributors who make this project possible. Special thanks to the Global Agentic Middleware team in IBM Research for all of the contributions from the many different teams and people.
