LLMCompiler
LLMCompiler is an agent architecture designed to speed up the execution of agent tasks by running tool calls in parallel within a DAG. It also saves the cost of redundant token use by reducing the number of calls to the LLM. The implementation is inspired by the paper An LLM Compiler for Parallel Function Calling.
An SQL analogy illustrates the framework's core role. Generating an execution plan for a SQL query involves syntax parsing, semantic analysis, optimizer passes, and finally plan generation. When LLMCompiler executes tool calls based on user instructions, the LLM performs a similar planning step for the user: it generates an execution plan, except that the plan here is a DAG, which describes the call relationships between tools and the logic for passing parameters along the dependencies.
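As a toy illustration of such a plan (this is not LLMCompiler's internal API, just a minimal sketch): each task in the DAG is a tool call whose arguments may reference the results of earlier tasks via `$N` placeholders, so independent tasks can run in parallel while dependent ones wait for their inputs.

```python
from typing import Callable

# Two hypothetical "tools".
def search(city: str) -> str:
    return f"weather:{city}"

def summarize(a: str, b: str) -> str:
    return f"summary({a},{b})"

# The plan as a DAG: each task is (function, args); "$1" means
# "the result of task 1". Tasks 1 and 2 have no dependencies, so a
# parallel executor could run them concurrently; task 3 waits on both.
plan = {
    1: (search, ["Paris"]),
    2: (search, ["Tokyo"]),
    3: (summarize, ["$1", "$2"]),
}

def execute(plan: dict[int, tuple[Callable, list]]) -> dict[int, str]:
    results: dict[int, str] = {}
    for tid, (fn, args) in plan.items():  # ids are in dependency order here
        # Resolve "$N" placeholders against already-computed results.
        resolved = [
            results[int(a[1:])] if isinstance(a, str) and a.startswith("$") else a
            for a in args
        ]
        results[tid] = fn(*resolved)
    return results

print(execute(plan)[3])  # summary(weather:Paris,weather:Tokyo)
```

A real scheduler would dispatch tasks as soon as their dependencies resolve rather than iterating sequentially, which is where the speed-up over call-by-call agent loops comes from.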
This implementation is useful when the agent needs to call a large number of tools. If the tools you need exceed the context limit of the LLM, you can extend the agent node: divide the tools among different agents and assemble them into a more powerful LLMCompiler. This approach has been proven in a production-level application: with about 60 tools configured, the accuracy rate exceeded 90% when paired with few-shot examples.
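The grouping idea can be sketched as follows (class and tool names here are illustrative, not part of LLMCompiler's API): each sub-agent owns a small tool list, and the top-level planner sees only the sub-agents as entry points, keeping its prompt within the context limit.

```python
# Hypothetical sketch: group tools into sub-agents so the top-level
# planner's context only needs to describe a few entry points.
class SubAgent:
    def __init__(self, name: str, tools: dict):
        self.name = name
        self.tools = tools  # {tool_name: callable}

    def run(self, tool_name: str, *args):
        # The sub-agent plans over only its own (small) tool list.
        return self.tools[tool_name](*args)

finance = SubAgent("finance", {"fund_basic": lambda code: f"fund:{code}"})
weather = SubAgent("weather", {"forecast": lambda city: f"sunny:{city}"})

# The top-level agent now sees 2 entry points instead of every leaf tool.
registry = {agent.name: agent for agent in (finance, weather)}
print(registry["finance"].run("fund_basic", "000001"))  # fund:000001
```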
LLMCompiler Frame Diagram (figure)
Task Fetching Unit (figure)
How To Use
pip install llmcompiler
from llmcompiler.result.chat import ChatRequest
from llmcompiler.tools.tools import DefineTools
from langchain_openai.chat_models.base import ChatOpenAI
from llmcompiler.chat.run import RunLLMCompiler
chat = ChatRequest(message="<YOUR_MESSAGE>")
# Langchain BaseTool List.
# The default configuration is for demonstration only; inheriting BaseTool to implement your own Tool is recommended, so that you can better control the details.
# For multi-parameter dependencies, inherit DAGFlowParams; see 'llmcompiler/tools/basetool/fund_basic.py' for a reference implementation.
tools = DefineTools().tools()
# Any implementation class of BaseLanguageModel is supported.
llm = ChatOpenAI(model="gpt-4o", temperature=0, max_retries=3)
llm_compiler = RunLLMCompiler(chat, tools, llm)
# Run the full LLMCompiler process.
print(llm_compiler())
# Ignore the joiner process and return the task and execution result directly.
print(llm_compiler.runWithoutJoiner())
# More usage patterns can be discussed in the issues; the documentation will continue to improve.
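Since the comments above recommend inheriting BaseTool, here is a minimal stand-in showing the shape of such a tool (this plain class only mirrors the name/description/_run pattern of langchain's BaseTool; the real class lives in langchain_core.tools, and the tool name and behavior below are hypothetical):

```python
# Sketch of a custom tool in the spirit of langchain's BaseTool.
# A real subclass would inherit from langchain_core.tools.BaseTool and
# could declare an args_schema for parameter validation.
class FundBasicTool:
    name: str = "fund_basic"  # the name the planner uses to reference the tool
    description: str = "Query basic fund information by fund code."

    def _run(self, code: str) -> str:
        # A real implementation would query a data source; stub result here.
        return f"fund:{code}"

tool = FundBasicTool()
print(tool._run("000001"))  # fund:000001
```

Controlling the tool's name and description directly like this is what lets you tune how reliably the planner selects and parameterizes each tool.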
Reference Linking