
Qwen-Agent: Enhancing LLMs with Agent Workflows, RAG, Function Calling, and Code Interpreter.



Qwen-Agent is a framework for developing LLM applications based on the instruction following, tool usage, planning, and memory capabilities of Qwen. It also comes with example applications such as Browser Assistant, Code Interpreter, and Custom Assistant.

News

  • 🔥🔥🔥 Sep 18, 2024: Added the Qwen2.5-Math Demo, which supports accessing models via the DashScope API and allows running code locally to experience the Tool-Integrated Reasoning capabilities of Qwen2.5-Math.

Getting Started

Installation

  • Install the stable version from PyPI:
pip install -U "qwen-agent[rag,code_interpreter,python_executor,gui]"
# Or `pip install -U qwen-agent` for minimal requirements if RAG and Code Interpreter are not being used.
  • Alternatively, you can install the latest development version from the source:
git clone https://github.com/QwenLM/Qwen-Agent.git
cd Qwen-Agent
pip install -e ./"[rag,code_interpreter,python_executor]"
# Or `pip install -e ./` for minimal requirements if RAG and Code Interpreter are not being used.

If built-in GUI support is needed, install the optional GUI dependencies via:

pip install -U "qwen-agent[gui,rag,code_interpreter]"
# Or install from the source via `pip install -e ./"[gui,rag,code_interpreter]"`

Preparation: Model Service

You can either use the model service provided by Alibaba Cloud's DashScope, or deploy and use your own model service using the open-source Qwen models.

  • If you choose to use the model service offered by DashScope, please ensure that you set the environment variable DASHSCOPE_API_KEY to your unique DashScope API key.

  • Alternatively, if you prefer to deploy and use your own model service, please follow the instructions provided in the README of Qwen2 for deploying an OpenAI-compatible API service. Specifically, consult the vLLM section for high-throughput GPU deployment or the Ollama section for local CPU (+GPU) deployment.
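Either choice ultimately produces an `llm_cfg` dictionary like the one used in the example below. The following helper is purely illustrative (it is not part of Qwen-Agent): it picks a DashScope configuration when an API key is present in the environment, and otherwise falls back to an assumed local OpenAI-compatible endpoint such as one served by vLLM or Ollama.

```python
import os


def make_llm_cfg():
    """Hypothetical helper: build an `llm_cfg` dict for Qwen-Agent.

    Prefers DashScope when DASHSCOPE_API_KEY is set; otherwise assumes a
    local OpenAI-compatible service is running on localhost:8000.
    """
    if os.getenv('DASHSCOPE_API_KEY'):
        # DashScope reads the key from the environment automatically.
        return {'model': 'qwen-max', 'model_server': 'dashscope'}
    # Assumed local deployment serving an OpenAI-compatible API.
    return {
        'model': 'Qwen2-7B-Chat',
        'model_server': 'http://localhost:8000/v1',  # base_url / api_base
        'api_key': 'EMPTY',
    }
```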

Developing Your Own Agent

Qwen-Agent offers atomic components, such as LLMs (which inherit from class BaseChatModel and come with function calling) and Tools (which inherit from class BaseTool), along with high-level components like Agents (derived from class Agent).

The following example illustrates the process of creating an agent capable of reading PDF files and utilizing tools, as well as incorporating a custom tool:

import pprint
import urllib.parse
import json5
from qwen_agent.agents import Assistant
from qwen_agent.tools.base import BaseTool, register_tool


# Step 1 (Optional): Add a custom tool named `my_image_gen`.
@register_tool('my_image_gen')
class MyImageGen(BaseTool):
    # The `description` tells the agent the functionality of this tool.
    description = 'AI painting (image generation) service. Input a text description, and it returns the URL of an image drawn from that description.'
    # The `parameters` tell the agent what input parameters the tool has.
    parameters = [{
        'name': 'prompt',
        'type': 'string',
        'description': 'Detailed description of the desired image content, in English',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        # `params` are the arguments generated by the LLM agent.
        prompt = json5.loads(params)['prompt']
        prompt = urllib.parse.quote(prompt)
        return json5.dumps(
            {'image_url': f'https://image.pollinations.ai/prompt/{prompt}'},
            ensure_ascii=False)


# Step 2: Configure the LLM you are using.
llm_cfg = {
    # Use the model service provided by DashScope:
    'model': 'qwen-max',
    'model_server': 'dashscope',
    # 'api_key': 'YOUR_DASHSCOPE_API_KEY',
    # It will use the `DASHSCOPE_API_KEY` environment variable if `api_key` is not set here.

    # Use a model service compatible with the OpenAI API, such as vLLM or Ollama:
    # 'model': 'Qwen2-7B-Chat',
    # 'model_server': 'http://localhost:8000/v1',  # base_url, also known as api_base
    # 'api_key': 'EMPTY',

    # (Optional) LLM hyperparameters for generation:
    'generate_cfg': {
        'top_p': 0.8
    }
}

# Step 3: Create an agent. Here we use the `Assistant` agent as an example, which is capable of using tools and reading files.
system_instruction = '''You are a helpful assistant.
After receiving the user's request, you should:
- first draw an image and obtain the image url,
- then run code `requests.get(image_url)` to download the image,
- and finally select an image operation from the given document to process the image.
Please show the image using `plt.show()`.'''
tools = ['my_image_gen', 'code_interpreter']  # `code_interpreter` is a built-in tool for executing code.
files = ['./examples/resource/doc.pdf']  # Give the bot a PDF file to read.
bot = Assistant(llm=llm_cfg,
                system_message=system_instruction,
                function_list=tools,
                files=files)

# Step 4: Run the agent as a chatbot.
messages = []  # This stores the chat history.
while True:
    # For example, enter the query "draw a dog and rotate it 90 degrees".
    query = input('user query: ')
    # Append the user query to the chat history.
    messages.append({'role': 'user', 'content': query})
    response = []
    for response in bot.run(messages=messages):
        # Streaming output.
        print('bot response:')
        pprint.pprint(response, indent=2)
    # Append the bot responses to the chat history.
    messages.extend(response)

In addition to using built-in agent implementations such as class Assistant, you can also develop your own agent implementation by inheriting from class Agent. Please refer to the examples directory for more usage examples.
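One detail of the chat loop above worth noting: `bot.run` streams by yielding the response list accumulated so far on every step, so the value left in `response` when the loop ends is the complete bot turn, and `messages.extend(response)` appends only that final version. A minimal stand-in (a toy, not the real library) shows the pattern:

```python
# Toy stand-in for `bot.run` (illustrative only): like Qwen-Agent's
# streaming interface, each yield is the full response-so-far, so the
# last yielded list holds the finished assistant turn.
def fake_run(messages):
    text = ''
    for token in ('Hello', ', ', 'world'):
        text += token
        yield [{'role': 'assistant', 'content': text}]


messages = [{'role': 'user', 'content': 'hi'}]
response = []
for response in fake_run(messages):
    pass  # the real loop pretty-prints each streamed snapshot here
messages.extend(response)  # only the final, complete turn is kept
```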

FAQ

Do you have function calling (aka tool calling)?

Yes. The LLM classes provide function calling. Additionally, some Agent classes, such as FnCallAgent and ReActChat, are built upon this function calling capability.
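To sketch what consuming a function call looks like, here is a hypothetical dispatcher for an OpenAI-style `function_call` message. The message shape follows the OpenAI convention; the dispatcher itself is illustrative and not part of Qwen-Agent:

```python
import json


def dispatch(message, tools):
    """Run the tool named in an OpenAI-style `function_call` message."""
    call = message.get('function_call')
    if call is None:
        return None  # plain text reply, nothing to execute
    args = json.loads(call['arguments'])  # arguments arrive as a JSON string
    return tools[call['name']](**args)


tools = {'add': lambda a, b: a + b}
msg = {'role': 'assistant', 'content': '',
       'function_call': {'name': 'add', 'arguments': '{"a": 2, "b": 3}'}}
result = dispatch(msg, tools)
```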

How to do question-answering over super-long documents involving 1M tokens?

We have released a fast RAG solution, as well as an expensive but competitive agent, for question-answering over super-long documents. They have outperformed native long-context models on two challenging benchmarks while being more efficient, and they score perfectly in the single-needle "needle-in-a-haystack" pressure test with 1M-token contexts. See the blog for technical details.
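To see why retrieval helps at that scale, here is a deliberately naive chunk-then-retrieve sketch (keyword overlap only; it is not the released solution, just the general idea of feeding the model relevant chunks instead of the whole document):

```python
def top_chunks(document, query, chunk_size=200, k=2):
    """Naive RAG sketch: split a long document into fixed-size word
    chunks and rank them by keyword overlap with the query."""
    words = document.split()
    chunks = [' '.join(words[i:i + chunk_size])
              for i in range(0, len(words), chunk_size)]
    query_terms = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(query_terms & set(c.lower().split())),
                    reverse=True)
    return scored[:k]
```

A real pipeline would use embedding similarity rather than keyword overlap, but the shape is the same: only the top-k chunks reach the LLM's context window.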

Application: BrowserQwen

BrowserQwen is a browser assistant built upon Qwen-Agent. Please refer to its documentation for details.

Disclaimer

The code interpreter is not sandboxed, and it executes code in your own environment. Please do not ask Qwen to perform dangerous tasks, and do not directly use the code interpreter for production purposes.

Download files

Download the file for your platform.

Source Distribution

qwen-agent-0.0.10.tar.gz (7.0 MB)

Uploaded Source

Built Distribution

qwen_agent-0.0.10-py3-none-any.whl (7.1 MB)

Uploaded Python 3

File details

Details for the file qwen-agent-0.0.10.tar.gz.

File metadata

  • Download URL: qwen-agent-0.0.10.tar.gz
  • Upload date:
  • Size: 7.0 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.8.16

File hashes

Hashes for qwen-agent-0.0.10.tar.gz

  • SHA256: db896e3c682df5f3a68ef51d0ba8a8ef5a91f3d8c0ab8cc009275100ec44143d
  • MD5: cfee1a6d1de3b99df89dc723a343b567
  • BLAKE2b-256: f43ef39ae3f2e2bf82b137ab7f9f68fc131af27c5b1021a0fa23a362e600d567


File details

Details for the file qwen_agent-0.0.10-py3-none-any.whl.

File metadata

  • Download URL: qwen_agent-0.0.10-py3-none-any.whl
  • Upload date:
  • Size: 7.1 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.8.16

File hashes

Hashes for qwen_agent-0.0.10-py3-none-any.whl

  • SHA256: 3cde09ecb5ca84f98e12dfd8b30d1503877c55c10b98ef4704345f926b5da7b2
  • MD5: de3864123b0330fb1dc5a17e52678bc2
  • BLAKE2b-256: 8e261790402279c9af48eb6bdf71673528a8889f132f7fea10c166c6965785d7

