# pal

Simple. Lightweight. Reliable.

pal is a minimal framework for building personal AI assistants: connect a channel, give it tools and memory, and it's ready to work.
## Key Features

- Minimal by design. Agent + Pal + channel: three concepts, no more. The entire framework fits in your head.
- Persistent out of the box. Conversation history and long-term memory are saved to `~/.pal/` automatically. Restart anytime and pick up where you left off.
- Memory is the agent's business. The agent decides when to recall and what to note: no hard-coded pipelines, no mandatory hooks.
- Pluggable everywhere. Swap the LLM (18+ providers via chak), the memory backend, or the channel; each is an independent interface with a one-file implementation.
## Quick Start

### Installation

```bash
pip install palbot
```
### Minimal example: a Slack bot in 20 lines

```python
import os

from pal import Pal, Agent
from pal.tools import Bash, Python, Web, Search
from pal.messaging.slack import Slack

agent = Agent(
    model_uri="openai/gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
    tools=[Bash(), Python(), Web(), Search()],
)

pal = Pal(
    agent=agent,
    channels=[Slack(
        bot_token=os.environ["SLACK_BOT_TOKEN"],
        app_token=os.environ["SLACK_APP_TOKEN"],
    )],
)

pal.run()
```
### Add long-term memory: 3 extra lines

```python
from pal.memory.seeka import SeekaMemory

mem = SeekaMemory(llm_uri="openai/gpt-4o-mini", llm_api_key=os.environ["OPENAI_API_KEY"])
agent = Agent(..., memory=mem)
```

That's it. The agent now remembers facts across sessions, recalls relevant context automatically, and stores everything in `~/.pal/memory/` with no path configuration required.
## How It Works

```
User message
  → Pal receives it from the channel
  → Agent runs: LLM + tools (as many turns as needed)
  → Pal sends the reply
  → Conversation saved to ~/.pal/conversations/
  → Memory consolidated in the background (~/.pal/memory/)
```
Agent handles the reasoning loop — it calls the LLM, executes tools, and evaluates whether the task is complete. It keeps going until it is.
Pal is the runtime — it wires a channel to an agent, handles concurrency, and triggers background tasks (memory consolidation, conversation persistence) after each turn.
Memory is optional and pluggable. The agent decides when to recall and what to note — memory tools are injected automatically when a memory backend is configured.
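The reasoning loop described above can be sketched roughly as follows. This is illustrative pseudocode, not pal's actual implementation: `call_llm` and `execute_tool` stand in for the real LLM and tool plumbing, and the dict shapes are assumptions.

```python
# Illustrative sketch of an agent reasoning loop: ask the LLM,
# run any tool it requests, feed the result back, repeat until
# the LLM answers in plain text or the turn budget runs out.

def run_agent(task, call_llm, execute_tool, max_turns=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = call_llm(history)                      # ask the LLM
        if "tool_call" in reply:                       # LLM requested a tool
            result = execute_tool(reply["tool_call"])
            history.append({"role": "tool", "content": result})
            continue                                   # give the result back to the LLM
        return reply["content"]                        # plain answer: task complete
    return "(max turns reached)"
```

The key design point is that the loop has no fixed workflow: the LLM alone decides whether another tool call is needed or the task is done.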
## Memory

pal ships with a seeka backend. Memory is split across two layers:
| Layer | What | Where |
|---|---|---|
| Conversation history | Raw LLM message list, persisted across restarts | ~/.pal/conversations/{agent_id}.json |
| Long-term memory | Structured facts extracted from conversations, semantic recall | ~/.pal/memory/ |
Both layers are zero-config: the paths are created automatically on first use.
The agent controls memory. It calls note() to record facts and recall() to retrieve them. The runtime calls dream() in the background after each turn to consolidate raw notes into structured memories.
```python
# The agent sees these as tools and decides when to use them:
#   note(content)  — save something worth remembering
#   recall(query)  — search long-term memory before responding
```

To add a different memory backend, subclass `BaseMemory` and implement `note`, `recall`, and `process`.
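As a rough sketch of what a custom backend might look like (the exact `BaseMemory` signatures are not shown in these docs, so the method shapes below are assumptions; in practice you would subclass `BaseMemory`):

```python
# Hypothetical sketch of a minimal memory backend implementing the
# three methods named above: note, recall, and process.

class KeywordMemory:
    """Toy in-memory backend: note() appends, recall() keyword-matches."""

    def __init__(self):
        self._notes = []

    def note(self, content):
        # Record a fact verbatim.
        self._notes.append(content)

    def recall(self, query):
        # Return notes sharing at least one word with the query.
        words = set(query.lower().split())
        return [n for n in self._notes if words & set(n.lower().split())]

    def process(self):
        # Consolidation hook: here just deduplicate; a real backend
        # would do LLM-based summarization at this step.
        self._notes = list(dict.fromkeys(self._notes))
```

A real backend would swap the keyword match for embeddings or an LLM call, but the contract stays the same: the agent calls `note` and `recall`, the runtime calls `process` in the background.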
## Channels

| Channel | Class | Notes |
|---|---|---|
| Slack | `pal.messaging.slack.Slack` | Socket Mode, supports file attachments |

More channels are coming. To add your own, subclass `BaseMessaging`.
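The `BaseMessaging` interface is not documented here, so purely as an illustration of the idea (method names are hypothetical), a console channel might look like this:

```python
# Purely illustrative: a toy channel that reads from stdin and
# prints replies. The receive/send method names are assumptions,
# not pal's actual BaseMessaging interface.

class Console:
    """Toy channel: one message in, one reply out, via the terminal."""

    def receive(self):
        # Block until the user types a message.
        return input("you> ")

    def send(self, text):
        # Deliver the agent's reply to the user.
        print(f"pal> {text}")
```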
## Agent

```python
Agent(
    model_uri="openai/gpt-4o-mini",  # any chak model URI
    api_key="sk-...",
    system_prompt="...",             # optional
    tools=[...],                     # any chak-compatible tools
    memory=SeekaMemory(...),         # optional
    max_turns=5,                     # max LLM iterations per task
    agent_id="default",              # used for conversation file naming
)
```
pal uses chak for LLM calls. Any model URI supported by chak works:

| URI | Provider |
|---|---|
| `openai/gpt-4o-mini` | OpenAI |
| `anthropic/claude-3-5-sonnet` | Anthropic |
| `google/gemini-1.5-pro` | Google Gemini |
| `bailian/qwen-plus` | Alibaba Bailian |
| `deepseek/deepseek-chat` | DeepSeek |
| `ollama/llama3.1` | Ollama (local) |
| `provider@https://base-url/model` | Any OpenAI-compatible endpoint |
## Tools

pal works with any tool supported by chak. The standard library (`chak.tools.std`) ships ready to use:

| Tool | What it does |
|---|---|
| `Bash` | Execute shell commands |
| `Python` | Run Python code snippets |
| `FileSystem` | Read, write, edit, and list files |
| `Web` | Fetch and extract web page content |
| `Search` | Web search (Tavily → Brave → DuckDuckGo) |
| `Http` | Full HTTP client (GET / POST / PUT / PATCH / DELETE) |
| `Pdf` | Extract text and tables from PDF files |

```python
from pal.tools import Bash, Python, FileSystem, Web, Search, Http, Pdf

agent = Agent(..., tools=[Bash(), Python(), FileSystem(), Web(), Search(), Http(), Pdf()])
```
## Is pal right for you?

pal is a good fit if:

- You want a working Slack bot (or similar) with tools and memory, not a framework to study.
- You want the agent to own its reasoning: no hand-coded pipelines, no fixed workflows.
- You need to ship quickly and keep the codebase readable.

pal is not a good fit if you need multi-tenant deployments, complex routing between specialized agents, or production-grade observability out of the box.