Memoria

A self-hosted service for indexing and searching personal web history.

Memoria ingests URLs from browsing history, then scrapes and indexes the web content to create a personalized search engine.

Sections
🚀 § Running Memoria
⚙️ § Configuration
🧩 § Plugins

Other Documentation
📃 Changelog
📦 Building
🤝 Contributing
⚖️ License
📑 Plugin Development

Running Memoria

To run Memoria you will need an Elasticsearch instance. The "Running With Containers" example will start one for you, or you can deploy one manually and configure Memoria to connect to it. Once Memoria is running via one of the methods below, you can access the web interface at http://localhost/.
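
If you plan to point Memoria at an existing Elasticsearch instance, it can help to confirm the instance is reachable first. Below is a minimal, illustrative check using only the Python standard library; the URL is an example, and it assumes the endpoint does not require credentials (adjust for your deployment):

```python
# Illustrative connectivity check for an Elasticsearch instance; adjust the
# URL (and add authentication) to match your deployment.
import json
import urllib.request

ELASTIC_HOST = "http://localhost:9200/"  # example value

with urllib.request.urlopen(ELASTIC_HOST) as response:
    info = json.load(response)

# A reachable instance reports its version in the root document.
print(info["version"]["number"])
```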

Running With Python
# Install from PyPI:
python3 -m pip install memoria_search
# Or from source code:
python3 -m pip install .

# Run:
python3 -m memoria.web --port 80

# Or run from source code without installing (you may need to install some dependencies):
PYTHONPATH=./src python3 -m memoria.web --port 80

Notes:

  • Your distribution may require that you create a virtual environment to install Python packages.
  • Memoria is currently designed to run under Python 3.12. Your mileage may vary when attempting to run it under Python 3.11.

Running With Containers

Self-contained Compose (including an Elasticsearch instance):

# With Docker Compose or Podman Compose:
podman-compose --profile elasticsearch up

# Cleanup:
podman-compose down --volumes

Single Docker container (for use with an existing Elasticsearch instance):

# Build the image locally:
podman build -t ghcr.io/sidneys1/memoria .
# Or pull it:
podman pull ghcr.io/sidneys1/memoria

# With plain Docker or Podman
podman run --name memoria -e MEMORIA_ELASTIC_HOST=http://hostname:9200/ -p 80:80 ghcr.io/sidneys1/memoria

# Cleanup:
podman container rm memoria
podman image rm ghcr.io/sidneys1/memoria

Note that Podman commands may require sudo, unless you have configured your Podman environment to run rootless.

Advanced Container Deployment

You can deploy Memoria as a container. The provided Containerfile builds a lightweight image based on python:3.12-alpine, which runs Memoria under Uvicorn on the exposed port 80.

podman build -t sidneys1/memoria .

You can also deploy Memoria with Docker Compose or Podman Compose.

The file compose.yaml shows the most basic Compose strategy, building and launching a Memoria container. You can use Memoria with an existing Elasticsearch instance like so[^1]:

# You may want to use the `memoria_elastic_password` secret by uncommenting the
# relevant sections of `compose.yaml` and running:
printf 'my-password-here' | podman secret create memoria_elastic_password -

export ELASTIC_HOST=http://hostname:9200/
podman-compose up --build 

[^1]: See §Configuration for more environment variables and configuration options.

A Compose profile named elasticsearch is also provided that will additionally launch an Elasticsearch container.

# Start the self-contained stack. See notes below regarding default credentials.
podman-compose --profile elasticsearch up --build

[!NOTE] Currently the only way to import browser history is by uploading a browser history database on the Settings page. More import strategies are coming soon™.

Configuration

Options

Memoria has several deployment configuration options that control overall behavior. These can be set via environment variables or container secrets. The following options are provided:

| Section | Name | Description | Default |
| --- | --- | --- | --- |
| Importing | downloader | The downloader plugin§ to use | AiohttpDownloader |
| Importing | extractor | The extractor plugin§ to use | HtmlExtractor |
| Importing | filter_stack | A list of filter plugins§ to use | ["HtmlContentFinder"] |
| Importing | import_threads | The maximum number of processes to use to download history items | $\frac{cpus}{2}$[^2] |
| Databases | database_uri | Connection URI to the Memoria database | sqlite+aiosqlite:///./data/memoria.db |
| Databases | elastic_host | Elasticsearch connection URI | http://elasticsearch:9200 |
| Databases | elastic_user | Elasticsearch authentication (username) | elastic |
| Databases | elastic_password | Elasticsearch authentication (password) | None |

[^2]: Or 1 if CPU count cannot be determined.

Any of these settings can be configured with uppercase environment variables prefixed with MEMORIA_ (e.g., MEMORIA_ELASTIC_PASSWORD). Additionally, settings can be read from files in /run/secrets[^3], which take precedence over any environment variables. For example, to set elastic_password with a Docker or Podman secret:

printf 'my-password-here' | podman secret create memoria_elastic_password -
podman run --name memoria --secret memoria_elastic_password -p 80:80 sidneys1/memoria

[^3]: The secrets directory can be overridden with the SECRETS_DIR environment variable.
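
As a minimal sketch of that precedence (a hypothetical helper, not Memoria's actual implementation): a secret file wins over an environment variable, which wins over the built-in default. The memoria_<name> file name mirrors the secret name in the example above; the real lookup may differ.

```python
import os
from pathlib import Path

# Hypothetical illustration of the documented lookup order:
# /run/secrets file > MEMORIA_* environment variable > built-in default.
SECRETS_DIR = Path(os.environ.get("SECRETS_DIR", "/run/secrets"))

def get_setting(name: str, default: str | None = None) -> str | None:
    """Resolve a setting such as 'elastic_password' (illustrative only)."""
    secret_file = SECRETS_DIR / f"memoria_{name}"
    if secret_file.is_file():  # container secrets take precedence
        return secret_file.read_text().strip()
    return os.environ.get(f"MEMORIA_{name.upper()}", default)

print(get_setting("elastic_host", "http://elasticsearch:9200"))
```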

Plugins

Memoria utilizes a plugin architecture that allows for different methods of downloading URLs, transforming the downloaded content, and extracting indexable plain text from the content. Selecting which plugins Memoria will use for web content retrieval and processing is described in §Configuration.

There are currently three types of Memoria Plugins used during web content retrieval and processing (a sketch of how they compose follows this list):

  • Downloaders
    Downloaders are responsible for accessing a URL and retrieving its content from the internet. They can provide this content in many different formats to the next plugin in the stack. The most basic Downloaders (like the built-in default, AiohttpDownloader) only support downloading raw HTML to provide to the remaining plugins.

  • Filters
    Filters transform input from the previous plugin in the stack (either the Downloader or another Filter). They can change the content format or modify it in place.

    By default Memoria uses the built-in HtmlContentFinder plugin to remove extraneous HTML elements and prune the input to a single <main>, <article>, or <... id="content"> element (if one exists).

  • Extractors
    Extractors are the last plugin to run, and are responsible for converting the input from the previous plugin (either the Downloader or the last Filter) into plain text that will be stored in Elasticsearch for indexing and searching.

    By default Memoria uses the built-in HtmlExtractor plugin to convert the input HTML into plain text. It also searches the original downloaded HTML (before any modification by Filter plugins) for <meta ...> values that can be used to enrich the Elasticsearch document, such as "author" or "description".
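
The sketch below illustrates how these three stages compose. All of the names and signatures here are hypothetical, not Memoria's actual plugin API (see the 📑 Plugin Development guide for the real interface):

```python
from typing import Callable, Iterable

# Hypothetical types, for illustration only; Memoria's real plugin API is
# described in the Plugin Development guide.
Downloader = Callable[[str], str]   # URL -> raw content (e.g. HTML)
Filter = Callable[[str], str]       # content -> transformed content
Extractor = Callable[[str], str]    # content -> plain text for indexing

def run_stack(url: str, download: Downloader,
              filters: Iterable[Filter], extract: Extractor) -> str:
    content = download(url)              # the Downloader retrieves the content
    for transform in filters:            # each Filter transforms the output
        content = transform(content)     #   of the previous stage, in order
    return extract(content)              # the Extractor yields indexable text

# Example wiring with toy stand-ins:
text = run_stack(
    "https://example.com/",
    download=lambda url: "<main>Hello</main>",  # stand-in Downloader
    filters=[str.strip],                        # stand-in Filter
    extract=lambda html: html.replace("<main>", "").replace("</main>", ""),
)
print(text)  # "Hello"
```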

Other types of plugins:

  • Scraping Rule Filters
    Scraping rule filter plugins allow the Scraping Rules in the Settings UI to be extended with new functionality. These filters help determine which history URLs will be retrieved and scraped.

[!TIP] See the 📑 Plugin Development guide for information on developing your own Memoria plugins.

