A minimal Python agent runtime with gateway and provider compatibility.
Memnixa English Docs
Overview
Memnixa is a Python agent runtime built around three ideas:
- Borrow the lightweight execution loop and persistence style from nanobot
- Borrow the provider compatibility and gateway/runtime separation from openclaw
- Keep the first version runnable while supporting CLI, gateway, SQLite persistence, multi-provider config, and per-session model switching
Current Capabilities
- Build runtime context from workspace data, system prompt, tools, and session history
- Call an OpenAI-compatible /chat/completions endpoint
- Execute tools and feed tool results back into the model loop
- Run directly as a CLI or as a standalone gateway
- Let the gateway receive HTTP, Feishu, and QQ traffic
- Store sessions, messages, and session metadata in SQLite
- Configure multiple providers and switch models per session
- Support OpenClaw-style local skills discovery and on-demand loading
- Support canonical identity binding, with local CLI always treated as the owner
- Let owner-bound direct messages share context with the local main session
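The capabilities above describe a classic tool-execution loop: call the model, run any requested tools, append the results, and repeat. The sketch below illustrates that shape only; the function and message field names are hypothetical, not Memnixa's actual API.

```python
# Hypothetical sketch of the agent loop: call the model, execute requested
# tools, feed results back, and stop when the model answers directly.

def run_turn(chat, messages, tools):
    """Loop until the model replies without requesting any tool."""
    while True:
        reply = chat(messages)
        if not reply.get("tool_calls"):
            return reply["content"]
        messages.append(reply)
        for call in reply["tool_calls"]:
            result = tools[call["name"]](**call["arguments"])
            # Tool output goes back into the transcript as a tool message.
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})

# Stub "model" for demonstration: first requests list_dir, then answers.
def fake_chat(messages):
    if not any(m.get("role") == "tool" for m in messages):
        return {"role": "assistant", "content": None,
                "tool_calls": [{"name": "list_dir", "arguments": {"path": "."}}]}
    return {"role": "assistant", "content": "done", "tool_calls": []}

print(run_turn(fake_chat, [{"role": "user", "content": "hi"}],
               {"list_dir": lambda path: ["README.md"]}))  # → done
```

In the real runtime the `chat` callable would be an HTTP client for the configured OpenAI-compatible endpoint, and the tool table would hold the built-in tools.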
Installation
uv tool install --editable .
After installation:
memnixa --help
You can also use the Makefile shortcuts:
make help
make sync
make install
make run
Quick Start
- Sync the default config into
~/.memnixa/config.json
memnixa config sync
- Check or open the config file
memnixa config path
memnixa config open
- Add a model after installation
The easiest path is to append one provider profile with the built-in command instead of editing JSON by hand.
For OpenAI:
memnixa config add-model \
--provider openai_chatgpt \
--api-key YOUR_OPENAI_API_KEY \
--model gpt-4.1-mini \
--id openai \
--label "OpenAI GPT-4.1 Mini" \
--set-default
For Zhipu Coding Plan:
memnixa config add-model \
--provider zhipu_coding_plan \
--api-key YOUR_ZHIPU_API_KEY \
--model glm-4.7 \
--id zhipu \
--label "Zhipu GLM-4.7" \
--set-default
For a custom OpenAI-compatible endpoint:
memnixa config add-model \
--provider custom_openai_compatible \
--api-key YOUR_API_KEY \
--api-base http://localhost:8000/v1 \
--model your-model-name \
--id local-model \
--label "Local Compatible Model" \
--set-default
After this, the config is written into ~/.memnixa/config.json or the current project's config.json, and Memnixa can connect to that model directly.
- Optionally adjust the config manually for advanced fields
- Start the CLI
memnixa
- Run one message
memnixa --message "Summarize this repository"
- Start the gateway
memnixa gateway
- Inspect the current identity inside a channel conversation
First, send this to Memnixa from Feishu or QQ:
/whoami
It returns the identity resolved for the current incoming message, including:
- identity_status
- actor_user_id
- actor_external_id
- actor_is_owner
- session_id
The most important field here is actor_external_id, because you will use it for binding.
- Bind that channel identity to the owner locally
For Feishu:
memnixa identity bind-owner --channel feishu --external-id YOUR_FEISHU_OPEN_ID
For QQ:
memnixa identity bind-owner --channel qq --external-id YOUR_QQ_EXTERNAL_ID
After this, the system layer treats that external identity as the owner. The model does not decide this on its own.
- Continue chatting from the channel DM or the local CLI
Current routing rules:
- Local cli is always treated as the owner and always enters main
- Direct messages bound to the owner also enter main
- Unbound identities or group messages do not merge into main
So once you bind a Feishu or QQ direct-message identity to the owner, the local CLI and that direct-message thread share the same context.
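The routing rules above can be sketched as a small decision function. Function and session names here are illustrative assumptions, not Memnixa's internals; only the "main" session name comes from the docs.

```python
# Illustrative sketch of the routing rules: decide which session an
# incoming message enters. The isolated-session naming is made up.

def route_session(channel, is_direct_message=False, bound_to_owner=False):
    if channel == "cli":
        return "main"                      # local CLI is always the owner
    if is_direct_message and bound_to_owner:
        return "main"                      # owner-bound DMs share main
    # Unbound identities and group messages stay out of main.
    return f"{channel}-isolated"

print(route_session("cli"))                                  # → main
print(route_session("feishu", is_direct_message=True,
                    bound_to_owner=True))                    # → main
print(route_session("qq", is_direct_message=True))           # → qq-isolated
```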
- Inspect SQLite data
memnixa data dump
memnixa data dump --session-id main
memnixa data export-memory
Common make targets:
make help
make sync
make sync-dev
make install
make reinstall
make run
make gateway
make test
make fmt
make lint
Config Shape
Key fields:
- default_provider: default provider id for new sessions
- providers: list of configured model endpoints
- providers[].id: stable id used by /use <id>
- providers[].preset: provider preset name. Built-in presets include openai_chatgpt, zhipu_coding_plan, siliconflow, and custom_openai_compatible
- providers[].api_key: provider API key
- providers[].api_base: optional compatible base URL
- providers[].model: model name
- providers[].label: display label
- providers[].context_window_tokens: optional per-model context window limit
- providers[].max_output_tokens: optional per-model output reserve
- workspace: workspace used by tools and context
- database_path: SQLite database path
- cli_via: local / gateway / auto
- context_window_tokens: global default context window
- max_output_tokens: global default output reserve
- context_warn_threshold_ratio: warning threshold when nearing the window
- context_compact_threshold_ratio: threshold that triggers compaction
- context_safety_margin_tokens: conservative input headroom
- context_compaction_max_rounds: maximum preflight compaction rounds
- memory.enabled: enable long-term memory extraction and retrieval
- memory.extraction_provider_id: required when memory is enabled; must point to a valid configured provider id
- memory.extraction_timeout_seconds: dedicated timeout for the memory extraction call
- memory.retrieval_limit: maximum number of memory hits injected or returned per search
- memory.max_injected_chars: maximum characters injected from recalled memory into the model context
Channel fields:
- channels.feishu.enabled
- channels.feishu.app_id
- channels.feishu.app_secret
- channels.feishu.group_policy
- channels.qq.enabled
- channels.qq.app_id
- channels.qq.secret
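Putting a few of these fields together, a minimal working config might look like the following (all values are placeholders, not defaults shipped with Memnixa):

```json
{
  "default_provider": "1",
  "providers": [
    {
      "id": "1",
      "preset": "openai_chatgpt",
      "api_key": "YOUR_API_KEY",
      "model": "gpt-4.1-mini",
      "label": "OpenAI GPT-4.1 Mini"
    }
  ],
  "workspace": "~/workspace",
  "database_path": "~/.memnixa/memnixa.db",
  "cli_via": "local",
  "context_window_tokens": 128000,
  "max_output_tokens": 4096
}
```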
Dynamic Model Switching
Each provider has a unique id, for example:
{
"default_provider": "1",
"providers": [
{ "id": "1", "preset": "zhipu_coding_plan", "api_key": "...", "model": "glm-4.7" },
{ "id": "2", "preset": "openai_chatgpt", "api_key": "...", "model": "gpt-4.1-mini" },
{ "id": "3", "preset": "siliconflow", "api_key": "...", "model": "deepseek-ai/DeepSeek-V3" }
]
}
Inside a conversation:
/models
/use 2
/whoami
- /models lists configured models
- /use <id> switches only the current session
- /whoami shows the identity resolution result for the current message
- The selected provider id is stored in SQLite session metadata
Long-Term Memory
Memnixa now supports a first-pass long-term memory layer on top of the existing session history and compaction summary.
Design:
- Session history remains the source of short-term continuity
- session_compactions still store compacted session summaries only
- Long-term memory is stored separately in SQLite memory_items
- Memory is scoped by self, agent, user, or session
When memory is enabled:
- Memnixa injects recalled durable memory as an extra system message before the active turn
- The model can actively call memory_search and memory_get
- After a turn finishes, Memnixa calls the configured memory extraction provider to propose durable facts, then validates and stores them
Config
Add a dedicated memory block:
{
"memory": {
"enabled": true,
"extraction_provider_id": "8",
"extraction_timeout_seconds": 20,
"retrieval_limit": 5,
"max_injected_chars": 2400
}
}
Rules:
- memory.enabled = true requires memory.extraction_provider_id
- memory.extraction_provider_id must match a real id in providers[]
- It is recommended to use a cheaper small model, such as an 8B-class model, as the extractor provider
Suggested Multi-Provider Setup
Use one main model for the normal agent loop and one smaller model for memory extraction:
{
"default_provider": "1",
"providers": [
{
"id": "1",
"preset": "zhipu_coding_plan",
"api_key": "YOUR_MAIN_KEY",
"model": "glm-4.7"
},
{
"id": "8",
"preset": "custom_openai_compatible",
"api_key": "YOUR_MEMORY_KEY",
"api_base": "http://localhost:11434/v1",
"model": "qwen-memory-8b",
"label": "Memory Extractor 8B"
}
],
"memory": {
"enabled": true,
"extraction_provider_id": "8",
"extraction_timeout_seconds": 20
}
}
Memory Tools
When memory is enabled, the runtime registers:
- memory_search: search durable memories relevant to the current request
- memory_get: inspect one memory item returned by memory_search
Export
You can export all stored long-term memory items through either interface:
memnixa data export-memory
Or through the gateway:
GET /v1/memory/export
What Gets Stored
The extractor is expected to produce durable facts such as:
- preferences
- constraints
- corrections
- goals
- project facts
- self-model facts
- decisions
- todos
- user profile details
Sensitive data such as API keys, passwords, cookies, and tokens are filtered and should not be stored as memory items.
The self_model type is stored under the fixed scope self:memnixa. Use it for the agent's stable identity, role, capability boundaries, and long-lived behavior contract. It applies across users, sessions, and workspaces.
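The sensitive-data rule above can be approximated with a keyword filter over candidate facts. The pattern and function name below are our own illustration of the idea, not Memnixa's actual validation code.

```python
import re

# Illustrative credential filter: reject candidate memory items that look
# like secrets before they are stored. The keyword list is an assumption.
SENSITIVE = re.compile(
    r"(api[_-]?key|secret|password|passwd|cookie|token|bearer)",
    re.IGNORECASE,
)

def is_storable(fact: str) -> bool:
    """Return True only when the fact contains no obvious credential markers."""
    return SENSITIVE.search(fact) is None

print(is_storable("User prefers concise answers"))   # → True
print(is_storable("OPENAI_API_KEY=sk-abc123"))       # → False
```

A real implementation would likely combine this with the extraction model's own judgment, since keyword lists miss secrets that carry no label.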
Skills
Memnixa now supports a first-pass OpenClaw-style local skills system.
Design:
- The runtime discovers local skill directories containing SKILL.md
- The system prompt includes only the available skill list, not every skill body
- The model reads a selected skill on demand through skill_read
- The model can inspect what is available with skill_list
Currently supported discovery roots, from highest to lowest precedence:
- <workspace>/skills
- <workspace>/.agents/skills
- ~/.agents/skills
- ~/.memnixa/skills
Each skill directory should contain at least one SKILL.md. AgentSkills-style frontmatter is recommended:
---
name: release-checklist
description: Use when preparing a release checklist or release notes.
---
# Release Checklist
Always confirm version, changelog, and tests.
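A SKILL.md like the one above can be split into metadata and body with a few lines of code. This sketch handles only flat `key: value` pairs, which covers name and description; real AgentSkills frontmatter is YAML, so this is a simplification, not Memnixa's parser.

```python
# Minimal frontmatter parser sketch for SKILL.md files.
def parse_skill(text):
    if not text.startswith("---"):
        return {}, text
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

meta, body = parse_skill(
    "---\nname: release-checklist\n"
    "description: Use when preparing a release checklist.\n---\n"
    "# Release Checklist\nAlways confirm version, changelog, and tests."
)
print(meta["name"])  # → release-checklist
```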
Runtime skill tools:
- skill_list: list the currently available skills
- skill_read: read the chosen skill's SKILL.md or another bundled reference file
Identity Binding
If you want one Feishu or QQ direct-message conversation to share the owner's context with the local CLI, use this flow:
- Send /whoami in that direct-message conversation
- Copy the returned actor_external_id
- Run memnixa identity bind-owner --channel <channel> --external-id <id> locally
- Later direct messages from that bound identity are routed into the owner main session
Available commands:
memnixa identity bind-owner --channel feishu --external-id YOUR_FEISHU_OPEN_ID
memnixa identity bind-owner --channel qq --external-id YOUR_QQ_EXTERNAL_ID
memnixa identity list
Notes:
- bind-owner is a local CLI management command used to bind one external identity to the owner
- identity list prints canonical users and stored external identity bindings
- Binding direct-message identities is recommended before binding any group-side identities
User Home
The default user-level data directory is ~/.memnixa:
- ~/.memnixa/config.json
- ~/.memnixa/memnixa.db
- ~/.memnixa/cli_history
Without --config, config lookup order is:
- ./config.json
- ~/.memnixa/config.json
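That lookup order amounts to: use an explicit --config path if given, otherwise try the project config, then the user home. A sketch with an injectable existence check so the order is easy to verify (the helper name is hypothetical):

```python
from pathlib import Path

# Sketch of config resolution: explicit path wins, then project-local
# config.json, then ~/.memnixa/config.json.
def resolve_config(explicit=None, exists=Path.exists):
    candidates = [Path(explicit)] if explicit else [
        Path("config.json"),                      # current project first
        Path.home() / ".memnixa" / "config.json", # then the user home
    ]
    for path in candidates:
        if exists(path):
            return path
    return None
```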
Notes
- The model layer currently focuses on OpenAI-compatible APIs first
- Built-in tools are list_dir, read_file, write_file, and run_command
- When session history approaches the context budget, Memnixa compacts older turns into a summary and keeps the recent active tail
- If the provider returns a direct context overflow error, Memnixa tries to compact and retry automatically
- memnixa starts the interactive CLI by default
- memnixa gateway starts HTTP and any enabled channel listeners
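The compaction behavior described above follows from the context budget fields in the config. The arithmetic below is our reading of those fields (window minus output reserve minus safety margin, compared against the compact threshold ratio), not a documented formula.

```python
# Back-of-envelope sketch of when compaction should trigger, based on the
# config fields context_window_tokens, max_output_tokens,
# context_safety_margin_tokens, and context_compact_threshold_ratio.
def compaction_needed(history_tokens, window_tokens, max_output_tokens,
                      safety_margin_tokens=512, compact_ratio=0.9):
    input_budget = window_tokens - max_output_tokens - safety_margin_tokens
    return history_tokens >= compact_ratio * input_budget

# With a 128k window, 4,096-token output reserve, and 512-token margin,
# the input budget is 123,392 tokens, so compaction kicks in at roughly
# 111,053 tokens of history.
print(compaction_needed(100_000, 128_000, 4_096))  # → False
print(compaction_needed(120_000, 128_000, 4_096))  # → True
```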