
Lagent: A lightweight framework for building LLM-based agents

English | 简体中文

Introduction

Lagent is a lightweight open-source framework that allows users to efficiently build large language model (LLM)-based agents. It also provides some typical tools to augment LLMs. The overview of our framework is shown below:

[Framework overview diagram]

Major Features

  • Support multiple kinds of agents out of the box. Lagent now supports ReAct, AutoGPT and ReWOO, which can drive large language models (LLMs) through multiple rounds of reasoning and function calling.

  • Extremely simple and easy to extend. The framework has a simple, clear structure: with only about 20 lines of code, you are able to construct your own agent (see the sketch after this list). It also supports three typical tools: Python interpreter, API call, and Google Search.

  • Support various large language models. We support different LLMs, including API-based (GPT-3.5/4) and open-source (LLaMA 2, InternLM) models.

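As a quick illustration of that claim, here is a minimal sketch that assembles an agent from the classes used in the quick-start examples further down this page (GPTAPI, PythonInterpreter, ActionExecutor, ReAct); this specific combination and the sample question are illustrative, not taken from the original examples.

from lagent.agents import ReAct
from lagent.actions import ActionExecutor, PythonInterpreter
from lagent.llms import GPTAPI

# 1. Pick an LLM backend (API-based here; HFTransformer works the same way).
llm = GPTAPI(model_type='gpt-3.5-turbo', key=['Your OPENAI_API_KEY'])

# 2. Instantiate the tools the agent may call.
python_interpreter = PythonInterpreter()

# 3. Wrap the tools in an ActionExecutor and hand everything to an agent.
chatbot = ReAct(
    llm=llm,
    action_executor=ActionExecutor(actions=[python_interpreter]),
)

# 4. Chat with the agent; it decides when to reason and when to call a tool.
print(chatbot.chat('What is the 10th Fibonacci number?').response)
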
Getting Started

Please see the overview for a general introduction to Lagent. We also provide extremely simple code below for a quick start; you may refer to the examples for more details.

Installation

Install with pip (Recommended).

pip install lagent

Optionally, you can build Lagent from source if you want to modify the code:

git clone https://github.com/InternLM/lagent.git
cd lagent
pip install -e .
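
To verify the installation, import the package and print its version (this assumes lagent exposes a __version__ attribute, which is typical for PyPI releases but not confirmed on this page):

python -c "import lagent; print(lagent.__version__)"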

Run a ReWOO agent with GPT-3.5

Below is an example of running ReWOO with GPT-3.5:

from lagent.agents import ReWOO
from lagent.actions import ActionExecutor, GoogleSearch, LLMQA
from lagent.llms import GPTAPI

llm = GPTAPI(model_type='gpt-3.5-turbo', key=['Your OPENAI_API_KEY'])
search_tool = GoogleSearch(api_key='Your SERPER_API_KEY')
llmqa_tool = LLMQA(llm)

chatbot = ReWOO(
    llm=llm,
    action_executor=ActionExecutor(
        actions=[search_tool, llmqa_tool]),
)

response = chatbot.chat('What profession does Nicholas Ray and Elia Kazan have in common')
print(response.response)
>>> Film director.

Run a ReAct agent with InternLM

NOTE: If you want to run a HuggingFace model, please run pip install -e .[all] first.

from lagent.agents import ReAct
from lagent.actions import ActionExecutor, GoogleSearch, PythonInterpreter
from lagent.llms import HFTransformer

llm = HFTransformer('internlm/internlm-7b-chat-v1.1')
search_tool = GoogleSearch(api_key='Your SERPER_API_KEY')
python_interpreter = PythonInterpreter()

chatbot = ReAct(
    llm=llm,
    action_executor=ActionExecutor(
        actions=[search_tool, python_interpreter]),
)

# Use a raw string so the LaTeX backslashes (\frac, \right) are not treated as escape sequences.
response = chatbot.chat(r'若$z=-1+\sqrt{3}i$,则$\frac{z}{{z\overline{z}-1}}=\left(\ \ \right)$')
print(response.response)
>>> $-\frac{1}{3}+\frac{\sqrt{3}}{3}i$
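
As a quick sanity check on that answer, the computation the agent has to carry out is short:

\[
z\overline{z} = |z|^{2} = (-1)^{2} + (\sqrt{3})^{2} = 4,
\qquad
\frac{z}{z\overline{z}-1} = \frac{-1+\sqrt{3}\,i}{4-1} = -\frac{1}{3} + \frac{\sqrt{3}}{3}\,i,
\]

which matches the response returned by the ReAct agent above.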

License

This project is released under the Apache 2.0 license.
