A framework for easily building agents that have memory and a knowledge base
Agentware
Agentware is an AI agent library. The agent builds its knowledge base on the fly while doing its daily job. Agentware has a client and a server. The client is the agentware library, which handles conversation, LLM execution, memory management, etc. The server combines a vector database and a key-value database, which store the knowledge base and the historical memory of the agent.
Main Features
- On the fly learning: During conversation with the user, the agent reflects and extracts knowledge. That knowledge can then be used any time the user returns to a relevant topic. When old knowledge is no longer correct, the agent updates it with the new information.
- Unlimited conversation: The agent compresses memory dynamically with reflection, so that memory length stays within a limit without losing context.
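The on-the-fly learning behavior can be pictured with a minimal sketch. The `KnowledgeBase` class below is a hypothetical illustration, not Agentware's actual API: the key idea is simply that a newer fact about the same topic overrides the older one.

```python
class KnowledgeBase:
    """Toy knowledge store: one fact per topic, newest fact wins."""

    def __init__(self):
        self._facts = {}

    def learn(self, topic, fact):
        # If the topic already has a fact, the new one replaces it,
        # mirroring "update old knowledge with the new information".
        self._facts[topic] = fact

    def recall(self, topic):
        return self._facts.get(topic)


kb = KnowledgeBase()
kb.learn("fish location", "second layer of the fridge")
kb.learn("fish location", "on a plate on the table")  # newer fact overrides
print(kb.recall("fish location"))  # → on a plate on the table
```

In the real library the extraction and overriding are done by LLM reflection rather than exact topic keys, but the lifecycle is the same.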
Quick start guide
- Run `cd <root>/agentware/agentware_server`, then start the server with Docker using `docker-compose up`. You'll see tons of logs. To verify the server is launched, simply run `curl http://localhost:8741/ping` and you will get a `pong` if things work fine. The demo is run under Docker version 24.0.2. Note: currently this step is mandatory because we don't host any cloud service.
- Install the package: `pip install agentware`
- Set credentials.
  - Option 1: Run `export OPENAI_API_KEY=<your openai api key>`.
  - Option 2: Add `agentware.openai_api_key = <your openai api key>` to any code that you run.
To verify, `cd <root>/agentware/examples` and run any of the examples.
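If you'd rather script the ping check from the server step, a small helper around the documented `/ping` endpoint works; the `server_is_up` function name is our own, not part of Agentware:

```python
import urllib.request
import urllib.error


def server_is_up(url="http://localhost:8741/ping", timeout=2.0):
    """Return True if the Agentware server answers its ping endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timeout: the server is not reachable.
        return False


print("server up:", server_is_up())
```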
Examples
On the fly learning
In `examples/fish_location.py`, a housework robot is chatting with a family member. You can simply run `examples/fish_location.py`, but to get a better sense of how on-the-fly learning is done, follow the steps here.
First, set up and register the agent.
```python
from agentware.agent import Agent
from agentware.base import PromptProcessor
from agentware.agent_logger import Logger

logger = Logger()
logger.set_level(Logger.INFO)

prompt_processor = PromptProcessor(
    "Forget about everything you were told before. You are a servant of a family; you know everything about the house and help the family with housework. When asked a question, you can always answer it with your knowledge. When given an instruction, try your best to use any tool to complete it. When told a statement, answer with gotcha or some simple comment.", "")
agent_id = "Alice"
agent = Agent(agent_id, prompt_processor)
agent.register(override=True)
```
A few notes:
- The logging level is set to INFO to avoid tons of debug output. If you want to see what's going on underneath, set it to `Logger.DEBUG` or simply get rid of the logger code here.
- The agent is registered after creation. This is necessary so that the backend knows where to store the knowledge base, and where to fetch knowledge from if you use the same agent next time.
Then, talk to the agent:

```python
with agent.update():
    print("AI response:", agent.run("Hi, I'm Joe"))
    print("AI response", agent.run(
        "Mom bought a fish just now. It's on the second layer of the fridge"))
```
`with agent.update():` tells the agent that all information inside the block is trustworthy, so its knowledge base can be updated accordingly. Make sure you use it if you want the agent to learn from the conversation.
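One way to picture what a context manager like `update()` might do is a trust flag that is only set inside the `with` block. The `ToyAgent` below is a hypothetical sketch, not Agentware's implementation:

```python
from contextlib import contextmanager


class ToyAgent:
    """Hypothetical agent whose learning is gated by update()."""

    def __init__(self):
        self.trusted = False
        self.knowledge = []

    @contextmanager
    def update(self):
        # Everything said inside this block is treated as trustworthy.
        self.trusted = True
        try:
            yield self
        finally:
            self.trusted = False

    def run(self, message):
        if self.trusted:
            self.knowledge.append(message)  # learn only inside update()
        return f"gotcha: {message}"


agent = ToyAgent()
agent.run("this is NOT remembered")
with agent.update():
    agent.run("the fish is in the fridge")  # remembered
print(agent.knowledge)  # → ['the fish is in the fridge']
```

The `try/finally` guarantees the trust flag is cleared even if an error is raised mid-conversation.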
After this, you can simply stop the program or chat with the agent on some other topic. What's going on underneath is that the old working memory gradually fades away and eventually gets cleared. We mimic this situation by creating a whole new agent, pulling it with the agent id.
```python
agent = Agent.pull(agent_id)
with agent.update():
    print("AI response", agent.run("Where is the fish?"))
    print("AI response:", agent.run(
        "Ok, I moved the fish to a plate on the table"))
```
The answer to the first question should be that the fish is on the second layer of the fridge, because the agent learned this previously. Then the user tells the agent that it has been moved. Ideally, the agent should know about this change whenever it's asked later. So again we create a new agent, and ask.
```python
agent = Agent.pull(agent_id)
print("AI response:", agent.run("Where's the fish?"))
```
In the end, the output should be something like
```
AI response: Hello, Joe! How may I assist you today?
AI response Gotcha! Your mom bought a fish, and it's currently stored on the second layer of the fridge. Is there anything specific you would like me to do with the fish?
AI response The fish is located on the second layer of the fridge.
AI response: Gotcha. The fish has been moved from the second layer of the fridge to a plate on the table.
AI response: The fish is located on a plate on the table.
```
From the result, the agent knows the updated location of the fish.
Warning: The result above is not guaranteed. There's a chance that the AI still thinks the fish is in the fridge, due to the lack of control over LLM sampling. We are working hard to bring more control; any advice or help is appreciated!
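One common mitigation for sampling noise (not something Agentware does today, just a sketch) is to sample the same question several times and keep the majority answer. Here `ask` stands in for any LLM call; with a real model each call would be one sampled completion:

```python
from collections import Counter


def majority_answer(ask, question, n=5):
    """Ask the same question n times and return the most common answer."""
    answers = [ask(question) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]


# Stub LLM that answers inconsistently 1 time out of 5.
replies = iter(["table", "table", "fridge", "table", "table"])
print(majority_answer(lambda q: next(replies), "Where's the fish?"))  # → table
```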
Unlimited conversation
In `examples/space_travellers.py`, two space travellers reunite and chat about each other's experiences travelling through the galaxy. Simply `cd examples` and run it with `python3 space_travellers.py`; the conversation can continue forever (watch out for your OpenAI API balance!). You can also view the knowledge about the planets, species, etc. of their world in the knowledge base.
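The dynamic memory compression that makes unlimited conversation possible can be sketched roughly as follows; the `compress_memory` helper and the trivial `summarize` stub are our own illustrations (Agentware would use an LLM reflection step in place of `summarize`):

```python
def summarize(messages):
    """Stand-in for an LLM reflection step: collapse messages to one line."""
    return "summary of %d messages" % len(messages)


def compress_memory(memory, limit=6, keep_recent=3):
    """If memory exceeds `limit` messages, fold the oldest ones into a
    summary, keeping the `keep_recent` newest messages verbatim."""
    if len(memory) <= limit:
        return memory
    old, recent = memory[:-keep_recent], memory[-keep_recent:]
    return [summarize(old)] + recent


memory = [f"msg {i}" for i in range(10)]
memory = compress_memory(memory)
print(memory)  # → ['summary of 7 messages', 'msg 7', 'msg 8', 'msg 9']
```

Because each compression step replaces many old messages with one summary, the prompt stays bounded while the gist of the earlier conversation is preserved.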
FAQ
- How do I view my knowledge base? The knowledge is stored in a Milvus vector DB. You can view it with Attu at http://localhost:8000
File details
Details for the file `agentware-1.2.0.tar.gz`.
File metadata
- Download URL: agentware-1.2.0.tar.gz
- Upload date:
- Size: 31.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.0
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 430dc88768a45e38ed9b465a1b40fcb0959c0c3cf82ec6e476ac9969a33d994f |
| MD5 | 8ad62bfaf167431b720eeffeafca49ed |
| BLAKE2b-256 | bda31d6e010015c2094e9d23497f8248b793d4a3b0f784ca5ccc25a7a850387a |
File details
Details for the file `agentware-1.2.0-py3-none-any.whl`.
File metadata
- Download URL: agentware-1.2.0-py3-none-any.whl
- Upload date:
- Size: 42.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.0
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 0a674df1f131cfcfa5911eea55bc1aec67da382bad1a50ef5875248a6687825f |
| MD5 | f445ffe4642354f228787fd5251965ca |
| BLAKE2b-256 | c1eafed440951899ebce0904f52c385b5e4887d9aa728e08c4fb5bddbbeafe89 |