
A language for mental models

Project description


memo is a new probabilistic programming language for expressing computational cognitive models involving sophisticated recursive reasoning, and for performing fast enumerative inference on such models. memo inherits from the tradition of WebPPL-based Bayesian modeling (see probmods, agentmodels, and problang), but aims to make models easier to write and run by taking advantage of modern programming language techniques and hardware capabilities. As a result, models are often significantly simpler to express (we've seen codebases shrink by a factor of 3x or more), and dramatically faster to execute and fit to data (we've seen speedups of 3,000x or more).

memo stands for: mental modeling, memoized matrix operations, model-expressed-model-optimized, and metacognitive memos.

> [!NOTE]
> memo is currently in "public beta". Though we have many active users, there may be some sharp edges and the language may occasionally change in backward-incompatible ways. We are on track to offer a first stable release of memo in February 2025. For updates on memo's development, we encourage you to subscribe to our low-traffic monthly announcements mailing list here.

Installing memo

  1. memo is based on Python. Before installing memo, make sure you have Python 3.12 or higher installed. You can check this by running python --version. (As of writing, Python 3.12 is the latest version of Python, and memo depends on several of its powerful new features.)
  2. Next, install JAX, a Python module that memo uses to produce fast, differentiable, GPU-enabled code. If you don't have a GPU, then running pip install jax should be enough. Otherwise, please consult the JAX website for installation instructions. You can check if JAX is installed by running import jax in Python.
  3. Finally, install memo by running pip install memo-lang. You can check if memo is installed by running from memo import memo in Python.
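
If all three steps succeeded, a quick sanity check along these lines should run without errors (this simply bundles the checks mentioned above into one Python snippet):

```python
# Sanity check: Python version, JAX, and memo should all be available.
import sys
assert sys.version_info >= (3, 12), "memo requires Python 3.12+"

import jax                 # step 2: JAX imports cleanly
from memo import memo      # step 3: memo imports cleanly

print("Python", sys.version.split()[0], "| JAX", jax.__version__)
```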

> [!WARNING]
> Make sure to install memo-lang, not memo. The latter is a different package, unrelated to this project!

Getting started

Once you have installed memo, take a look at the Memonomicon for a tour of the language, and an example of how to build a model and fit it to data by parallel grid search and/or gradient descent.
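
As a rough sketch of that workflow (plain JAX, not memo's actual API; the `log_likelihood` function below is a hypothetical stand-in for whatever your compiled memo model exposes, and the Memonomicon shows the real thing), parallel grid search and gradient descent look something like this:

```python
import jax
import jax.numpy as jnp

# Hypothetical stand-in for a differentiable log-likelihood from a memo model.
def log_likelihood(theta):
    return -(theta - 0.3) ** 2

# Parallel grid search: evaluate every candidate parameter in one vectorized call.
grid = jnp.linspace(0.0, 1.0, 1001)
best_by_grid = grid[jnp.argmax(jax.vmap(log_likelihood)(grid))]

# Gradient descent: JAX differentiates the same function automatically.
grad_fn = jax.grad(log_likelihood)
theta = 0.5
for _ in range(100):
    theta = theta + 0.1 * grad_fn(theta)
```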

This repository also includes several classical examples of recursive reasoning models implemented in memo.

FAQ

When should I use memo rather than Gen or WebPPL?

memo's core competence is fast tabular/enumerative inference on models with recursive reasoning. That covers a wide range of common models: from RSA, to POMDP planning (value iteration = tabular operations), to inverse planning. In general, if you are making nested queries, we recommend using memo.
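
To see why planning workloads like value iteration count as "tabular operations", here is value iteration written directly as array operations, in plain JAX rather than memo syntax, over a toy MDP whose numbers are made up for illustration:

```python
import jax.numpy as jnp

# Toy 2-state, 2-action MDP (all numbers invented for illustration).
# T[a, s, s']: probability of landing in state s' after taking action a in state s.
T = jnp.array([[[0.9, 0.1],
                [0.2, 0.8]],
               [[0.5, 0.5],
                [0.5, 0.5]]])
R = jnp.array([[0.0, 1.0],   # R[s, a]: immediate reward
               [2.0, 0.0]])
gamma = 0.9

# Value iteration: each backup is one batched array operation over all states.
V = jnp.zeros(2)
for _ in range(200):
    Q = R + gamma * jnp.einsum("asn,n->sa", T, V)   # expected return of each (s, a)
    V = Q.max(axis=1)                               # act greedily
```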

There are, however, two particular cases where you may prefer another PPL:

  1. If you are interested specifically in modeling a sophisticated inference scheme, such as MCMC, particle filters, or variational inference, then we recommend trying Gen. (But make sure you really need those tools — the fast enumerative inference provided by memo is often sufficient for many common kinds of models!)
  2. If you are performing inference over an unbounded domain of hypotheses with varied structure, such as programs generated by a grammar, then we recommend trying Gen or WebPPL because memo's tabular enumerative inference can only handle probability distributions with finite support. (But if you are okay with inference over a "truncated" domain, e.g. the top 1,000,000 shortest programs, then memo can do that! Similarly, memo can handle continuous domains by discretizing finely.)
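
For example, here is one way to discretize a continuous quantity onto a finite grid so that enumerative inference applies (plain JAX, not memo syntax; the grid bounds and resolution are arbitrary choices):

```python
import jax.numpy as jnp

# Replace a continuous parameter with a finite grid of support points.
support = jnp.linspace(-3.0, 3.0, 201)

# Evaluate a Gaussian prior on the grid and renormalize so it sums to 1;
# enumerative inference can then treat it like any other finite distribution.
prior = jnp.exp(-0.5 * support ** 2)
prior = prior / prior.sum()
```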

Both of these cases are explicitly out of scope for memo. The upshot is that by specializing in a particular, commonly used class of models and inference strategies, memo can generate extremely fast code that is difficult for general-purpose PPLs to match.

Okay, so how does memo produce such fast code?

memo compiles enumerative inference down to JAX array programs, which run extremely fast: array operations apply to each element independently, so they parallelize naturally, and modern hardware (GPUs in particular) excels at exactly this kind of parallel work.
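
As a concrete illustration of what such an array program looks like, here is a tiny RSA-style nested-reasoning model written by hand in plain JAX (memo generates this sort of code for you): each level of recursive reasoning is just a normalization along one axis of a truth-table matrix, and every operation acts on whole arrays at once.

```python
import jax.numpy as jnp

# Truth table: rows are utterances, columns are world states (0, 1, or 2 objects).
meaning = jnp.array([[1.0, 0.0, 0.0],   # "none"
                     [0.0, 1.0, 1.0],   # "some"
                     [0.0, 0.0, 1.0]])  # "all"

# Literal listener: condition on the utterance being true, normalize over states.
L0 = meaning / meaning.sum(axis=1, keepdims=True)

# Pragmatic speaker: softmax over utterances, favoring informative ones.
alpha = 4.0
S1 = jnp.exp(alpha * jnp.log(L0 + 1e-9))
S1 = S1 / S1.sum(axis=0, keepdims=True)

# Pragmatic listener: Bayes' rule over states given the speaker model.
L1 = S1 / S1.sum(axis=1, keepdims=True)
```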

What exactly is JAX?

JAX is a library developed by Google that takes Python array programs (similar to NumPy) and compiles them to very fast code that runs on CPUs and GPUs, taking advantage of modern hardware capabilities. Much of Google's deep learning work runs on JAX, because neural networks are built out of exactly these kinds of array operations. memo compiles your probabilistic models into JAX array programs, and JAX in turn compiles those array programs into machine code.
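
For instance, an ordinary NumPy-style function becomes compiled, hardware-accelerated code simply by wrapping it in jax.jit:

```python
import jax
import jax.numpy as jnp

@jax.jit                      # compiled on first call, fast on every call after
def normalize(logits):
    p = jnp.exp(logits - logits.max())
    return p / p.sum()

print(normalize(jnp.array([0.0, 1.0, 2.0])))
```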

Note that JAX has some unintuitive behaviors. We recommend reading this guide to get a sense of its "sharp edges."
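
One example of such a sharp edge: JAX arrays are immutable, so NumPy-style in-place assignment fails and must be written with the functional .at[...].set(...) update syntax instead.

```python
import jax.numpy as jnp

x = jnp.zeros(3)
# x[0] = 1.0           # TypeError: JAX arrays are immutable
x = x.at[0].set(1.0)    # functional update: returns a new array
```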

I installed memo, but importing it gives an error.

Did you accidentally pip-install the (unrelated) package memo instead of memo-lang?

Download files

Download the file for your platform.

Source Distribution

memo_lang-0.3.0.tar.gz (22.2 kB)


Built Distribution

memo_lang-0.3.0-py3-none-any.whl (20.8 kB)


File details

Details for the file memo_lang-0.3.0.tar.gz.

File metadata

  • Download URL: memo_lang-0.3.0.tar.gz
  • Upload date:
  • Size: 22.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for memo_lang-0.3.0.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | bf6b86279d90e2b8bfb9ac2921ea7effc4bf2476170e0d7e7d683aae0c7323c7 |
| MD5 | a417d3db87cb88b0272488104bd12cb2 |
| BLAKE2b-256 | 377067d41a93f8dba712254d26b1c4327518a2dddf5031648906c0c2f6025480 |


File details

Details for the file memo_lang-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: memo_lang-0.3.0-py3-none-any.whl
  • Upload date:
  • Size: 20.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for memo_lang-0.3.0-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | d91808bd7c7d6ba6ede95f770736946fdf7a3fec0b8368e736deb3acf4927734 |
| MD5 | 185965c9039fbb487a6f0b83e8bd4b91 |
| BLAKE2b-256 | 7e70599fd3207612be4993707b430f80533961b6ff8ee16a2cd56c177e8e6be8 |

