
A minimal Python agent runtime with gateway and provider compatibility.


Memnixa English Docs


Overview

Memnixa is a Python agent runtime built around three ideas:

  • Borrow the lightweight execution loop and persistence style from nanobot
  • Borrow the provider compatibility and gateway/runtime separation from openclaw
  • Keep the first version runnable while supporting CLI, gateway, SQLite persistence, multi-provider config, and per-session model switching

Current Capabilities

  • Build runtime context from workspace data, system prompt, tools, and session history
  • Call an OpenAI-compatible /chat/completions endpoint
  • Execute tools and feed tool results back into the model loop
  • Run directly as a CLI or as a standalone gateway
  • Let the gateway receive HTTP, Feishu, and QQ traffic
  • Store sessions, messages, and session metadata in SQLite
  • Configure multiple providers and switch models per session
  • Support OpenClaw-style local skills discovery and on-demand loading
  • Support canonical identity binding, with local CLI always treated as the owner
  • Let owner-bound direct messages share context with the local main session
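The tool-execution loop in the capabilities above can be sketched as follows. This is a minimal illustration of the design, not Memnixa's actual internals; `call_model`, `execute_tool`, and the message shapes are hypothetical names for this sketch.

```python
# Minimal sketch of the runtime loop (hypothetical names, not Memnixa's
# real API): build the message context, call the model, execute any
# requested tools, feed results back, and repeat until the model answers.

def run_turn(history, user_message, call_model, execute_tool, max_rounds=8):
    """Drive one user turn through a tool-calling loop."""
    messages = list(history) + [{"role": "user", "content": user_message}]
    for _ in range(max_rounds):
        reply = call_model(messages)          # e.g. POST /chat/completions
        messages.append(reply)
        if not reply.get("tool_calls"):       # plain answer: turn is done
            return reply["content"], messages
        for call in reply["tool_calls"]:      # run each tool, feed result back
            result = execute_tool(call["name"], call.get("args", {}))
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})
    raise RuntimeError("tool loop did not converge")
```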

Installation

uv tool install --editable .

After installation:

memnixa --help

You can also use the Makefile shortcuts:

make help
make sync
make install
make run

Quick Start

  1. Sync the default config into ~/.memnixa/config.json
memnixa config sync
  2. Check or open the config file
memnixa config path
memnixa config open
  3. Add a model after installation

The easiest path is to append one provider profile with the built-in command instead of editing JSON by hand.

For OpenAI:

memnixa config add-model \
  --provider openai_chatgpt \
  --api-key YOUR_OPENAI_API_KEY \
  --model gpt-4.1-mini \
  --id openai \
  --label "OpenAI GPT-4.1 Mini" \
  --set-default

For Zhipu Coding Plan:

memnixa config add-model \
  --provider zhipu_coding_plan \
  --api-key YOUR_ZHIPU_API_KEY \
  --model glm-4.7 \
  --id zhipu \
  --label "Zhipu GLM-4.7" \
  --set-default

For a custom OpenAI-compatible endpoint:

memnixa config add-model \
  --provider custom_openai_compatible \
  --api-key YOUR_API_KEY \
  --api-base http://localhost:8000/v1 \
  --model your-model-name \
  --id local-model \
  --label "Local Compatible Model" \
  --set-default

After this, the config is written into ~/.memnixa/config.json or the current project's config.json, and Memnixa can connect to that model directly.

  4. Optionally adjust the config manually for advanced fields

  5. Start the CLI

memnixa
  6. Run one message
memnixa --message "Summarize this repository"
  7. Start the gateway
memnixa gateway
  8. Inspect the current identity inside a channel conversation

First, send this to Memnixa from Feishu or QQ:

/whoami

It returns the identity resolved for the current incoming message, including:

  • identity_status
  • actor_user_id
  • actor_external_id
  • actor_is_owner
  • session_id

The most important field here is actor_external_id, because you will use it for binding.

  9. Bind that channel identity to the owner locally

For Feishu:

memnixa identity bind-owner --channel feishu --external-id YOUR_FEISHU_OPEN_ID

For QQ:

memnixa identity bind-owner --channel qq --external-id YOUR_QQ_EXTERNAL_ID

After this, the system layer treats that external identity as the owner. The model does not decide this on its own.

  10. Continue chatting from the channel DM or the local CLI

Current routing rules:

  • The local CLI is always treated as the owner and always enters the main session
  • Direct messages bound to the owner also enter main
  • Unbound identities or group messages do not merge into main

So once you bind a Feishu or QQ direct-message identity to the owner, the local CLI and that direct-message thread share the same context.
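The routing rules above can be sketched as one decision function. The logic and names below are an assumed illustration of the stated rules, not Memnixa's real router.

```python
# Sketch of the routing rules (assumed logic): the local CLI always maps
# to main; owner-bound direct messages also map to main; unbound
# identities and group messages stay in their own isolated session.

def resolve_session(source, is_direct_message=False, is_owner_bound=False):
    """Map an incoming message to a session id per the routing rules."""
    if source == "cli":                        # local CLI is always the owner
        return "main"
    if is_direct_message and is_owner_bound:   # bound DMs share main context
        return "main"
    return f"{source}:isolated"                # everything else is isolated
```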

  11. Inspect SQLite data
memnixa data dump
memnixa data dump --session-id main
memnixa data export-memory

Common make targets:

  • make help
  • make sync
  • make sync-dev
  • make install
  • make reinstall
  • make run
  • make gateway
  • make test
  • make fmt
  • make lint

Config Shape

Key fields:

  • default_provider: default provider id for new sessions
  • providers: list of configured model endpoints
  • providers[].id: stable id used by /use <id>
  • providers[].preset: provider preset name. Built-in presets include openai_chatgpt, zhipu_coding_plan, siliconflow, and custom_openai_compatible
  • providers[].api_key: provider API key
  • providers[].api_base: optional compatible base URL
  • providers[].model: model name
  • providers[].label: display label
  • providers[].context_window_tokens: optional per-model context window limit
  • providers[].max_output_tokens: optional per-model output reserve
  • workspace: workspace used by tools and context
  • database_path: SQLite database path
  • cli_via: local / gateway / auto
  • context_window_tokens: global default context window
  • max_output_tokens: global default output reserve
  • context_warn_threshold_ratio: warning threshold when nearing the window
  • context_compact_threshold_ratio: threshold that triggers compaction
  • context_safety_margin_tokens: conservative input headroom
  • context_compaction_max_rounds: maximum preflight compaction rounds
  • memory.enabled: enable long-term memory extraction and retrieval
  • memory.extraction_provider_id: required when memory is enabled; must point to a valid configured provider id
  • memory.extraction_timeout_seconds: dedicated timeout for the memory extraction call
  • memory.retrieval_limit: maximum number of memory hits injected or returned per search
  • memory.max_injected_chars: maximum characters injected from recalled memory into the model context
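The context-budget fields above can be illustrated with a simple threshold check. The arithmetic below is an assumed sketch of how the window, output reserve, safety margin, and the two ratio thresholds might interact; it is not Memnixa's exact accounting.

```python
# Sketch of the context-budget check (assumed arithmetic): the usable
# input budget is the window minus the output reserve and safety margin,
# and the warn/compact ratios are applied against that budget.

def context_action(used_tokens, window, max_output, safety_margin,
                   warn_ratio=0.8, compact_ratio=0.9):
    """Return 'ok', 'warn', or 'compact' for the current input size."""
    budget = window - max_output - safety_margin   # usable input tokens
    if used_tokens >= compact_ratio * budget:
        return "compact"                           # trigger compaction
    if used_tokens >= warn_ratio * budget:
        return "warn"                              # nearing the window
    return "ok"
```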

Channel fields:

  • channels.feishu.enabled
  • channels.feishu.app_id
  • channels.feishu.app_secret
  • channels.feishu.group_policy
  • channels.qq.enabled
  • channels.qq.app_id
  • channels.qq.secret

Dynamic Model Switching

Each provider has a unique id, for example:

{
  "default_provider": "1",
  "providers": [
    { "id": "1", "preset": "zhipu_coding_plan", "api_key": "...", "model": "glm-4.7" },
    { "id": "2", "preset": "openai_chatgpt", "api_key": "...", "model": "gpt-4.1-mini" },
    { "id": "3", "preset": "siliconflow", "api_key": "...", "model": "deepseek-ai/DeepSeek-V3" }
  ]
}

Inside a conversation:

/models
/use 2
/whoami
  • /models lists configured models
  • /use <id> switches only the current session
  • /whoami shows the identity resolution result for the current message
  • The selected provider id is stored in SQLite session metadata
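The lookup behind /use can be sketched as follows: the session's stored provider id wins, falling back to default_provider. The helper name and metadata shape are hypothetical; only the fallback rule comes from the text above.

```python
# Sketch of per-session model switching (hypothetical storage shape):
# the id chosen via /use lives in session metadata and falls back to
# default_provider for sessions that never switched.

def provider_for_session(config, session_meta):
    """Return the provider entry the current session should use."""
    providers = {p["id"]: p for p in config["providers"]}
    chosen = session_meta.get("provider_id") or config["default_provider"]
    if chosen not in providers:
        raise ValueError(f"unknown provider id: {chosen}")
    return providers[chosen]
```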

Long-Term Memory

Memnixa now supports a first-pass long-term memory layer on top of the existing session history and compaction summary.

Design:

  • Session history remains the source of short-term continuity
  • session_compactions still store compacted session summaries only
  • Long-term memory is stored separately in SQLite memory_items
  • Memory is scoped by self, agent, user, or session

When memory is enabled:

  • Memnixa injects recalled durable memory as an extra system message before the active turn
  • The model can actively call memory_search and memory_get
  • After a turn finishes, Memnixa calls the configured memory extraction provider to propose durable facts, then validates and stores them
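The injection step can be sketched like this. The helper name and message format are assumptions; the behavior shown (joining recalled items into one extra system message, capped at memory.max_injected_chars) follows the description above.

```python
# Sketch of memory injection (assumed behavior): recalled items are
# joined into one system message and truncated at max_injected_chars.

def build_memory_message(items, max_injected_chars):
    """Join recalled memory lines into a capped system message, or None."""
    body = "\n".join(f"- {item}" for item in items)[:max_injected_chars]
    if not body:
        return None                      # nothing recalled: inject nothing
    return {"role": "system", "content": "Durable memory:\n" + body}
```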

Config

Add a dedicated memory block:

{
  "memory": {
    "enabled": true,
    "extraction_provider_id": "8",
    "extraction_timeout_seconds": 20,
    "retrieval_limit": 5,
    "max_injected_chars": 2400
  }
}

Rules:

  • memory.enabled = true requires memory.extraction_provider_id
  • memory.extraction_provider_id must match a real id in providers[]
  • It is recommended to use a cheaper small model, such as an 8B-class model, as the extractor provider
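The two rules above can be expressed as a small validation check. The helper name is hypothetical; the checks mirror the stated rules.

```python
# Sketch of the memory-config rules (hypothetical helper): enabled memory
# requires extraction_provider_id, and that id must exist in providers[].

def validate_memory_config(config):
    """Raise ValueError when the memory block violates the rules above."""
    mem = config.get("memory", {})
    if not mem.get("enabled"):
        return                                   # memory disabled: nothing to check
    pid = mem.get("extraction_provider_id")
    if not pid:
        raise ValueError("memory.enabled requires memory.extraction_provider_id")
    if pid not in {p["id"] for p in config.get("providers", [])}:
        raise ValueError(f"extraction_provider_id {pid!r} matches no provider")
```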

Suggested Multi-Provider Setup

Use one main model for the normal agent loop and one smaller model for memory extraction:

{
  "default_provider": "1",
  "providers": [
    {
      "id": "1",
      "preset": "zhipu_coding_plan",
      "api_key": "YOUR_MAIN_KEY",
      "model": "glm-4.7"
    },
    {
      "id": "8",
      "preset": "custom_openai_compatible",
      "api_key": "YOUR_MEMORY_KEY",
      "api_base": "http://localhost:11434/v1",
      "model": "qwen-memory-8b",
      "label": "Memory Extractor 8B"
    }
  ],
  "memory": {
    "enabled": true,
    "extraction_provider_id": "8",
    "extraction_timeout_seconds": 20
  }
}

Memory Tools

When memory is enabled, the runtime registers:

  • memory_search: search durable memories relevant to the current request
  • memory_get: inspect one memory item returned by memory_search

Export

You can export all stored long-term memory items through the CLI:

memnixa data export-memory

Or through the gateway:

GET /v1/memory/export

What Gets Stored

The extractor is expected to produce durable facts such as:

  • preferences
  • constraints
  • corrections
  • goals
  • project facts
  • self-model facts
  • decisions
  • todos
  • user profile details

Sensitive data such as API keys, passwords, cookies, and tokens is filtered out and not stored as memory items.

The self_model type is stored under the fixed scope self:memnixa. Use it for the agent's stable identity, role, capability boundaries, and long-lived behavior contract. It applies across users, sessions, and workspaces.

Skills

Memnixa now supports a first-pass OpenClaw-style local skills system.

Design:

  • The runtime discovers local skill directories containing SKILL.md
  • The system prompt includes only the available skill list, not every skill body
  • The model reads a selected skill on demand through skill_read
  • The model can inspect what is available with skill_list

Currently supported discovery roots, from highest to lowest precedence:

  • <workspace>/skills
  • <workspace>/.agents/skills
  • ~/.agents/skills
  • ~/.memnixa/skills
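Discovery over these roots can be sketched as follows. The helper name is hypothetical, and the shadowing behavior (a skill name found in a higher-precedence root hides the same name lower down) is an assumption consistent with the ordering above.

```python
# Sketch of skill discovery (assumed semantics): scan roots from highest
# to lowest precedence for <root>/<skill-name>/SKILL.md; the first root
# that defines a skill name wins.

from pathlib import Path

def discover_skills(roots):
    """Map skill name -> SKILL.md path, honoring root precedence."""
    skills = {}
    for root in roots:                            # roots given high -> low
        root = Path(root).expanduser()
        if not root.is_dir():
            continue
        for skill_md in sorted(root.glob("*/SKILL.md")):
            # setdefault keeps the higher-precedence entry
            skills.setdefault(skill_md.parent.name, skill_md)
    return skills
```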

Each skill directory should contain at least one SKILL.md. AgentSkills-style frontmatter is recommended:

---
name: release-checklist
description: Use when preparing a release checklist or release notes.
---

# Release Checklist

Always confirm version, changelog, and tests.

Runtime skill tools:

  • skill_list: list the currently available skills
  • skill_read: read the chosen skill's SKILL.md or another bundled reference file

Identity Binding

If you want one Feishu or QQ direct-message conversation to share the owner's context with the local CLI, use this flow:

  1. Send /whoami in that direct-message conversation
  2. Copy the returned actor_external_id
  3. Run memnixa identity bind-owner --channel <channel> --external-id <id> locally
  4. Later direct messages from that bound identity are routed into the owner main session

Available commands:

memnixa identity bind-owner --channel feishu --external-id YOUR_FEISHU_OPEN_ID
memnixa identity bind-owner --channel qq --external-id YOUR_QQ_EXTERNAL_ID
memnixa identity list

Notes:

  • bind-owner is a local CLI management command used to bind one external identity to the owner
  • identity list prints canonical users and stored external identity bindings
  • Binding direct-message identities is recommended before binding any group-side identities

User Home

The default user-level data directory is ~/.memnixa:

  • ~/.memnixa/config.json
  • ~/.memnixa/memnixa.db
  • ~/.memnixa/cli_history

Without --config, config lookup order is:

  1. ./config.json
  2. ~/.memnixa/config.json
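The lookup order above can be sketched as a short resolver. The function name is hypothetical; the order (explicit --config, then ./config.json, then ~/.memnixa/config.json) comes from the list above.

```python
# Sketch of config resolution (hypothetical helper name): an explicit
# --config path wins, then ./config.json, then ~/.memnixa/config.json.

from pathlib import Path

def find_config(explicit=None):
    """Return the first config path that applies, or None."""
    if explicit:
        return Path(explicit)
    for candidate in (Path("config.json"),
                      Path.home() / ".memnixa" / "config.json"):
        if candidate.is_file():
            return candidate
    return None
```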

Notes

  • The model layer currently focuses on OpenAI-compatible APIs first
  • Built-in tools are list_dir, read_file, write_file, and run_command
  • When session history approaches the context budget, Memnixa compacts older turns into a summary and keeps the recent active tail
  • If the provider returns a direct context overflow error, Memnixa tries to compact and retry automatically
  • memnixa starts the interactive CLI by default
  • memnixa gateway starts HTTP and any enabled channel listeners
