
A package for memos

Project description

English | 简体中文 | 日本語


I changed the name to Pensieve because Memos was already taken.

Pensieve (previously named Memos)

Pensieve is a privacy-focused passive recording project. It can automatically record screen content, build intelligent indices, and provide a convenient web interface to retrieve historical records.

This project draws heavily on two other projects: Rewind and Windows Recall. Unlike both of them, however, Pensieve gives you complete control over your data and avoids sending it to untrusted data centers.

Features

  • 🚀 Simple installation: just install dependencies via pip to get started
  • 🔒 Complete data control: all data is stored locally, allowing for full local operation and self-managed data processing
  • 🔍 Full-text and vector search support
  • 🤖 Integrates with Ollama, using it as the machine learning engine for Pensieve
  • 🌐 Compatible with any OpenAI API models (e.g., OpenAI, Azure OpenAI, vLLM, etc.)
  • 💻 Supports Mac and Windows (Linux support is in development)
  • 🔌 Extensible functionality through plugins

Quick Start


[!IMPORTANT]
It seems that not all versions of Python's sqlite3 library support enable_load_extension. However, I'm not sure which environments or Python versions might encounter this issue. I use conda to manage Python, and Python installed via conda works fine on macOS, Windows x86, and Ubuntu 22.04.

Please make sure the following code runs in your Python environment:

import sqlite3
print(sqlite3.sqlite_version)  # version of the SQLite library bundled with your Python

If you find that this does not work properly, you can install miniconda to manage your Python environment. Alternatively, check the current issue list to see if others have encountered the same problem.
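If you want to check the extension-loading capability mentioned in the note above directly, here is a minimal sketch; the enable_load_extension method is simply missing when Python's sqlite3 module was built without loadable-extension support:

import sqlite3

conn = sqlite3.connect(":memory:")
try:
    # Only present when the sqlite3 module was built with loadable-extension
    # support (e.g. --enable-loadable-sqlite-extensions).
    conn.enable_load_extension(True)
    conn.enable_load_extension(False)
    print("sqlite3 extension loading: supported")
except AttributeError:
    print("sqlite3 extension loading: NOT supported by this Python build")
finally:
    conn.close()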

1. Install Pensieve

pip install memos

2. Initialize

Initialize the Pensieve configuration file and SQLite database:

memos init

Data will be stored in the ~/.memos directory.

3. Start the Service

memos enable
memos start

This command will:

  • Begin recording all screens
  • Start the Web service
  • Set the service to start on boot

4. Access the Web Interface

Open your browser and visit http://localhost:8839
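If you prefer to verify the service from a script instead of a browser, a minimal check against the same default address is sketched below:

import urllib.request

# Assumes the default web address from the step above.
with urllib.request.urlopen("http://localhost:8839", timeout=5) as resp:
    print("Pensieve web service responded with HTTP", resp.status)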


Mac Permission Issues

On macOS, Pensieve needs screen recording permission. When the program starts, macOS will prompt for screen recording permission; please allow it so recording can proceed.

User Guide

Using the Appropriate Embedding Model

1. Model Selection

Pensieve uses embedding models to extract semantic information and build vector indices. Therefore, choosing an appropriate embedding model is crucial. Depending on the user's primary language, different embedding models should be selected.

2. Adjust the Pensieve Configuration

Open the ~/.memos/config.yaml file with your preferred text editor and modify the embedding configuration:

embedding:
  use_local: true
  model: jinaai/jina-embeddings-v2-base-en   # Model name used
  num_dim: 768                               # Model dimensions
  use_modelscope: false                      # Whether to use ModelScope's model
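If you are unsure what value num_dim should take for a given model, one way to check the output dimension is sketched below using the sentence-transformers package (an extra dependency assumed here for illustration; Pensieve itself may load models differently):

from sentence_transformers import SentenceTransformer

# trust_remote_code is required for the jina-embeddings-v2 models.
model = SentenceTransformer("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True)
vec = model.encode("hello world")
print(len(vec))  # should match num_dim in ~/.memos/config.yaml (768 for this model)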

3. Restart the Pensieve Service

memos stop
memos start

The first time you use the embedding model, Pensieve will automatically download and load the model.

4. Rebuild Index

If you switch embedding models after screenshots have already been indexed, you need to rebuild the index:

memos reindex --force

The --force flag rebuilds the index table and deletes previously indexed screenshot data.

Using Ollama for Visual Search

By default, Pensieve only enables the OCR plugin to extract text from screenshots and build indices. However, this method significantly limits search effectiveness for images without text.

To achieve more comprehensive visual search capabilities, we need a multimodal image understanding service compatible with the OpenAI API. Ollama perfectly fits this role.

Important Notes Before Use

Before deciding to enable the VLM feature, please note the following:

  1. Hardware Requirements

    • Recommended configuration: an NVIDIA graphics card with at least 8GB of VRAM, or a Mac with an Apple M-series chip
    • The minicpm-v model will occupy about 5.5GB of storage space
    • CPU mode is not recommended as it will cause severe system lag
  2. Performance and Power Consumption Impact

    • Enabling VLM will significantly increase system power consumption
    • Consider running an OpenAI API-compatible model service on another device instead

1. Install Ollama

Visit the Ollama official documentation for detailed installation and configuration instructions.

2. Prepare the Multimodal Model

Download and run the multimodal model minicpm-v using the following command:

ollama run minicpm-v "Describe what this service is"

This command will download and run the minicpm-v model. If it runs too slowly on your hardware, using this feature is not recommended.

3. Configure Pensieve to Use Ollama

Open the ~/.memos/config.yaml file with your preferred text editor and modify the vlm configuration:

vlm:
  endpoint: http://localhost:11434  # Ollama service address
  modelname: minicpm-v              # Model name to use
  force_jpeg: true                  # Convert images to JPEG format to ensure compatibility
  prompt: Please describe the content of this image, including the layout and visual elements  # Prompt sent to the model

Use the above configuration to overwrite the vlm configuration in the ~/.memos/config.yaml file.
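To confirm the configured endpoint is reachable before restarting, you can send a small test request to Ollama's OpenAI-compatible chat route. This is only a sketch using the standard library; the /v1/chat/completions path belongs to Ollama's OpenAI compatibility layer, and Pensieve may or may not use it internally:

import json
import urllib.request

payload = {
    "model": "minicpm-v",
    "messages": [{"role": "user", "content": "Describe what this service is"}],
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=120) as resp:
    answer = json.load(resp)
print(answer["choices"][0]["message"]["content"])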

Also, modify the default_plugins configuration in the ~/.memos/plugins/vlm/config.yaml file:

default_plugins:
- builtin_ocr
- builtin_vlm

This adds the builtin_vlm plugin to the default plugin list.

4. Restart Pensieve Service

memos stop
memos start

After restarting the Pensieve service, wait a moment and you will see the data extracted by the VLM alongside the latest screenshots in the Pensieve web interface.

If you do not see the VLM results, you can:

  • Use the command memos ps to check if the Pensieve process is running normally
  • Check for error messages in ~/.memos/logs/memos.log
  • Confirm whether the Ollama model is loaded correctly (ollama ps)

Full Indexing

Pensieve is a compute-intensive application. The indexing process requires the collaboration of OCR, VLM, and embedding models. To minimize the impact on the user's computer, Pensieve calculates the average processing time for each screenshot and adjusts the indexing frequency accordingly. Therefore, not all screenshots are indexed immediately by default.

If you want to index all screenshots, you can use the following command for full indexing:

memos scan

This command will scan and index all recorded screenshots. Note that depending on the number of screenshots and system configuration, this process may take some time and consume significant system resources. The index construction is idempotent, and running this command multiple times will not re-index already indexed data.

Privacy and Security

During the development of Pensieve, I closely followed the progress of similar products, especially Rewind and Windows Recall. I greatly appreciate their product philosophy, but they do not do enough in terms of privacy protection, which is a concern for many users (or potential users). Recording the screen of a personal computer may expose extremely sensitive private data, such as bank accounts, passwords, chat records, etc. Therefore, ensuring that data storage and processing are completely controlled by the user to prevent data leakage is particularly important.

The advantages of Pensieve are:

  1. The code is completely open source and written in easy-to-understand Python, so anyone can review it and confirm there are no backdoors.
  2. Data is fully local: everything is stored in the user's ~/.memos directory, and all data processing happens on the user's machine.
  3. Easy to uninstall. If you no longer use Pensieve, you can stop the program with memos stop && memos disable, uninstall it with pip uninstall memos, and finally delete the ~/.memos directory to remove all databases and screenshot data.
  4. Model choice is entirely up to the user. Pensieve is an independent project, and the machine learning models it uses (including the VLM and embedding models) are selected by the user. Due to Pensieve's operating mode, even smaller models can achieve good results.

Of course, there is still room for improvement in terms of privacy, and contributions are welcome to make Pensieve better.

Other Noteworthy Content

About Storage Space

Pensieve records the screen every 5 seconds and saves the original screenshots in the ~/.memos/screenshots directory. Storage space usage mainly depends on the following factors:

  1. Screenshot Data:

    • Single screenshot size: about 40-400KB (depending on screen resolution and display complexity)
    • Daily data volume: about 400MB (based on 10 hours of usage, single screen 2560x1440 resolution)
    • Multi-screen usage: data volume increases with the number of screens
    • Monthly estimate: about 8GB based on 20 working days (a quick arithmetic check of these figures follows this list)

    Screenshots are deduplicated. If the content of consecutive screenshots does not change much, only one screenshot will be retained. The deduplication mechanism can significantly reduce storage usage in scenarios where content does not change frequently (such as reading, document editing, etc.).

  2. Database Space:

    • SQLite database size depends on the number of indexed screenshots
    • Reference value: about 2.2GB of storage space after indexing 100,000 screenshots
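The daily and monthly figures above can be reproduced with some quick arithmetic. The sketch below simply re-derives them from the numbers quoted in this section; actual usage varies with deduplication and screen content:

# Back-of-the-envelope check of the storage estimates above.
interval_s = 5                 # one screenshot every 5 seconds
hours_per_day = 10             # assumed active usage per day
shots_per_day = hours_per_day * 3600 // interval_s    # 7,200 screenshots

daily_mb = 400                 # quoted daily volume, single 2560x1440 screen
avg_kb_per_shot = daily_mb * 1024 / shots_per_day     # ~57 KB each after deduplication
monthly_gb = daily_mb * 20 / 1024                     # ~7.8 GB over 20 working days

print(f"{shots_per_day} screenshots/day, ~{avg_kb_per_shot:.0f} KB each on average")
print(f"~{monthly_gb:.1f} GB per month")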

About Power Consumption

Pensieve runs two compute-intensive tasks by default:

  • One is the OCR task, used to extract text from screenshots
  • The other is the embedding task, used to extract semantic information and build vector indices

Resource Usage

  • OCR Task: Runs on the CPU; the OCR engine is selected per operating system to minimize CPU usage

  • Embedding Task: Intelligently selects the computing device

    • NVIDIA GPU devices prioritize using the GPU
    • Mac devices prioritize using Metal GPU
    • Other devices use the CPU
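For reference, the selection behaviour described above roughly corresponds to logic like the following. This is an illustrative sketch assuming a PyTorch-based embedding backend; Pensieve's actual implementation may differ:

import torch

def pick_embedding_device() -> str:
    """Prefer NVIDIA GPUs, then Apple Metal (MPS), then fall back to CPU."""
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

print(pick_embedding_device())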

Performance Optimization Strategy

To avoid affecting users' daily use, Pensieve has adopted the following optimization measures:

  • Dynamically adjust the indexing frequency, adapting to system processing speed
  • Automatically reduce processing frequency when on battery power to save power
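As an illustration of the battery-aware behaviour, detecting whether the machine is running on battery power can be done with psutil (an assumed dependency for this sketch; Pensieve's own detection mechanism may be different):

import psutil

battery = psutil.sensors_battery()  # None on machines without a battery
on_battery = battery is not None and not battery.power_plugged
if on_battery:
    print("On battery: index at a reduced frequency")
else:
    print("On AC power: index at the normal frequency")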

Development Guide

Peeling the First Layer of the Onion

In fact, after Pensieve starts, it runs three programs:

  1. memos serve starts the web service
  2. memos record starts the screenshot recording program
  3. memos watch listens to the image events generated by memos record and dynamically submits indexing requests to the server based on actual processing speed

Therefore, if you are a developer, or simply want clearer logs for each part of the project, you can run these three commands in the foreground instead of using memos enable && memos start.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

memos-0.20.0.tar.gz (21.7 MB)

Built Distribution

memos-0.20.0-py3-none-any.whl (21.8 MB)

File details

Details for the file memos-0.20.0.tar.gz.

File metadata

  • Download URL: memos-0.20.0.tar.gz
  • Upload date:
  • Size: 21.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for memos-0.20.0.tar.gz
Algorithm Hash digest
SHA256 2c7ffb5b6b1ae5e2a76f5657f8b588807a427b5d2d6dd565ba262474dbb7ca30
MD5 a71a8de3f3de7f3a53b5fdfa5ea416b4
BLAKE2b-256 6f9ab2ccc2bfbfaba389bf18b4ade2ef187bac6b761213dbec38f250251e53a6


Provenance

The following attestation bundles were made for memos-0.20.0.tar.gz:

Publisher: release.yml on arkohut/pensieve


File details

Details for the file memos-0.20.0-py3-none-any.whl.

File metadata

  • Download URL: memos-0.20.0-py3-none-any.whl
  • Upload date:
  • Size: 21.8 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for memos-0.20.0-py3-none-any.whl
Algorithm Hash digest
SHA256 ecea332175fcc579e03e2073c6651bc8e168de6ce783f797cfb157d39484694b
MD5 6a4c9e5f2eba68f248ea100485c68173
BLAKE2b-256 323ea61138d208787b60502371464acf1423db233101702e01b5745feafdadc2


Provenance

The following attestation bundles were made for memos-0.20.0-py3-none-any.whl:

Publisher: release.yml on arkohut/pensieve

