A private AI mirror for personal reflection - based on Matthew McConaughey's Aspirational Self concept

Project description

Memoria - GitHub-Based User Memory System

Extracts user preferences and patterns from GitHub data to build a personalized memory profile.

Purpose

Memoria analyzes a user's GitHub activity to understand:

  • Programming language preferences
  • Framework and library choices
  • Commit patterns and work style
  • Project domains and interests
  • Collaboration patterns
  • Code style preferences

This data is stored as "memories" that can be used for:

  • Personalized recommendations
  • Automated decision-making
  • Context-aware assistance
  • Predictive suggestions

Architecture

memoria/
├── src/
│   ├── github_fetcher.py       # Fetch GitHub user data
│   ├── pattern_analyzer.py     # Analyze code patterns
│   ├── preference_extractor.py # Extract user preferences
│   └── memory_builder.py       # Build memory profile
├── data/
│   ├── users/                  # Per-user memory profiles
│   │   └── {username}.json
│   ├── patterns.json           # Common coding patterns
│   └── preferences.json        # Preference taxonomy
├── memory/
│   ├── short_term/             # Recent context (session-based)
│   └── long_term/              # Persistent preferences
├── tests/
│   └── test_memoria.py         # Unit tests
├── requirements.txt
└── README.md

Memory Types

Short-Term Memory

  • Current working directory
  • Active files being edited
  • Recent commands/actions
  • Current task context
  • Lifespan: ~1-4 hours

Long-Term Memory

  • Language preferences (Python, Rust, Go, etc.)
  • Framework choices (React, Next.js, Svelte, etc.)
  • Testing frameworks (pytest, Jest, etc.)
  • CI/CD tools (GitHub Actions, CircleCI, etc.)
  • Cloud providers (AWS, GCP, Azure)
  • Editor preferences (VS Code, Vim, etc.)
  • Lifespan: Persistent until explicitly updated

Usage

# Analyze a GitHub user and build memory
python3 -m src.memory_builder --username jasperan

# Query user memory
python3 -m src.memory_query --username jasperan --query language_preference

# Update specific memory entry
python3 -m src.memory_update --username jasperan --key language_preference --value Rust

# Export memory profile
python3 -m src.memory_export --username jasperan --format json

Data Sources

GitHub API Endpoints

  • /users/{username} - Basic profile
  • /repos - Repository list
  • /repos/{owner}/{repo}/languages - Language usage
  • /repos/{owner}/{repo}/commits - Commit patterns
  • /repos/{owner}/{repo}/contents/.editorconfig - Editor config
  • /repos/{owner}/{repo}/contents/{path} - File analysis

Preference Taxonomy

Language Preferences

  • Primary languages (by LOC)
  • Secondary languages (by repo count)
  • Languages being learned (recent repos)

Framework Preferences

  • Frontend: React, Vue, Svelte, Angular
  • Backend: Express, Django, FastAPI, Flask
  • Database: PostgreSQL, MongoDB, Redis

Tool Preferences

  • Version control: Git; CI/CD: GitHub Actions
  • Testing: pytest, jest, mocha
  • Deployment: Docker, Kubernetes
  • Package managers: npm, Cargo, pip, Go modules

Style Preferences

  • Code formatting (black, prettier, rustfmt)
  • Linting (eslint, pylint, clippy)
  • Naming conventions (snake_case, camelCase, etc.)

Example Memory Profile

{
  "user_id": "jasperan",
  "last_updated": "2026-02-11T10:00:00Z",
  "languages": {
    "python": 0.45,
    "rust": 0.35,
    "javascript": 0.15,
    "go": 0.05
  },
  "frameworks": {
    "frontend": ["Next.js", "React"],
    "backend": ["FastAPI", "Django"],
    "database": ["PostgreSQL", "Redis"]
  },
  "tools": {
    "testing": ["pytest", "jest"],
    "deployment": ["Docker", "GitHub Actions"],
    "editor": "VS Code"
  },
  "patterns": {
    "commit_frequency": "medium",
    "repo_size": "small_to_medium",
    "collaboration_style": "mixed"
  },
  "domains": [
    "machine_learning",
    "web_development",
    "ai_agents",
    "trading_bots"
  ]
}

API

Memory Query API

from src.memory import Memory

memory = Memory.load("jasperan")

# Query preferences
language = memory.get("languages.primary")  # "python"
framework = memory.get("frameworks.backend")  # ["FastAPI", "Django"]

# Suggest based on memory
suggestion = memory.suggest("testing_framework")  # "pytest"

# Update memory
memory.set("tools.editor", "Neovim")
memory.save()

Integration with OpenClaw

Memoria can be used by OpenClaw agents to:

  • Make personalized suggestions
  • Choose appropriate tools for tasks
  • Understand user's coding style
  • Predict preferences based on patterns

# In an OpenClaw agent
from memoria import Memory

def suggest_test_tool(user_id):
    memory = Memory.load(user_id)
    primary = memory.get("languages.primary")
    if primary == "python":
        return "Use pytest"
    elif primary == "rust":
        return "Use cargo test"
    return None

Privacy

  • Memory data stored locally (memory/long_term/)
  • GitHub API calls authenticated (requires GITHUB_TOKEN)
  • User can delete memory profiles
  • No data shared with external services

Project Updates

See PROJECT_UPDATES.md for recent changes.

License

MIT

Download files


Source Distribution

memoria_ai-0.1.0.tar.gz (26.3 kB, source)

Built Distribution

memoria_ai-0.1.0-py3-none-any.whl (17.2 kB, Python 3)

File details

Details for the file memoria_ai-0.1.0.tar.gz.

File metadata

  • Download URL: memoria_ai-0.1.0.tar.gz
  • Upload date:
  • Size: 26.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.2

File hashes

Hashes for memoria_ai-0.1.0.tar.gz:

  • SHA256: 05a81da92c188c499e091411949b000232f3ac5f36440da4998fb5d4be991267
  • MD5: 6ffdf029d3f8c1fe50087be421d8458a
  • BLAKE2b-256: 222111d2d1a722e8eae79192b7c8beebcb5378c8c1c0680b661a6d4ff291b553

File details

Details for the file memoria_ai-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: memoria_ai-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 17.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.2

File hashes

Hashes for memoria_ai-0.1.0-py3-none-any.whl:

  • SHA256: d46214f42cb8862bd93560693c76a684b4615c8e137305ba53e37f3ee81fde28
  • MD5: 43f67fbe7c52a4db241b8dcac520fb6b
  • BLAKE2b-256: 7c1016190e5995c01f6123a5e37f74362553b0dc6cb1f3bf955b4da7b7594e2e
