
A language for mental models

Project description


memo is a new probabilistic programming language for expressing computational cognitive models involving recursive reasoning about reasoning, and for performing fast enumerative inference on such models. memo inherits from the tradition of WebPPL-based Bayesian modeling (see probmods, agentmodels, and problang), but aims to make models easier to write and run by taking advantage of modern programming language techniques and hardware capabilities (including GPUs!). As a result, models are often significantly simpler to express (we've seen codebases shrink by a factor of 3 or more), and dramatically faster to execute and fit to data (we've seen speedups of 3,000x or more).

memo stands for: mental modeling, memoized matrix operations, model-expressed-model-optimized, and metacognitive memos.

[!NOTE] For updates on memo's development, we encourage you to subscribe to our low-traffic monthly announcements mailing list here.

Installing memo

  1. memo is based on Python. Before installing memo, make sure you have Python 3.12 or higher installed. You can check this by running python --version.
  2. Next, install JAX, a Python module that memo uses to produce fast, differentiable, GPU-enabled code. If you don't have a GPU, then running pip install jax should be enough. Otherwise, please consult the JAX website for installation instructions. You can check if JAX is installed by running import jax in Python.
  3. Finally, install memo by running pip install memo-lang. You can check if memo is installed by running from memo import memo in Python.

[!WARNING] Make sure to install memo-lang, not memo. The latter is a different package, unrelated to this project!
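The three steps above can be sketched as a single shell session (assuming a CPU-only machine; see the JAX website for GPU builds):

```shell
python --version                  # step 1: should report Python 3.12 or higher
pip install jax                   # step 2: CPU-only JAX
pip install memo-lang             # step 3: memo itself (NOT the unrelated "memo" package)
python -c "import jax; from memo import memo; print('ok')"   # sanity check
```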

Getting started

Once you have installed memo, take a look at the Memonomicon for a tour of the language, and an example of how to build a model and fit it to data by parallel grid search and/or gradient descent. You can also watch a video tutorial that covers similar material.

This repository also includes over a dozen classic examples of recursive reasoning models implemented in memo, which you can find in the demo directory.

For background on the theory of decision making under uncertainty, e.g. MDPs and POMDPs, we recommend consulting Decision Making Under Uncertainty as a reference. You can read the entire book for free online here.

For background on Bayesian models of theory of mind, we recommend consulting chapter 14 of Bayesian Models of Cognition as a reference. You can read the published version here and a PDF preprint here.

FAQ

When should I use memo rather than Gen or WebPPL?

memo's core competence is fast tabular/enumerative inference on models with recursive reasoning about reasoning. That covers a wide range of common models: from RSA, to POMDP planning (value iteration = tabular operations), to inverse planning. In general, if you are making nested queries, we recommend using memo.

There are, however, two particular cases where you may prefer another PPL:

  1. If you are interested specifically in modeling a sophisticated inference scheme, such as MCMC, particle filters, or variational inference, then we recommend trying Gen. (But make sure you really need those tools — the fast enumerative inference provided by memo is often sufficient for many common kinds of models!)
  2. If you are performing inference over an unbounded domain of hypotheses with varied structure, such as programs generated by a grammar, then we recommend trying Gen or WebPPL because memo's tabular enumerative inference can only handle probability distributions with finite support. (But if you are okay with inference over a "truncated" domain, e.g. the top 1,000,000 shortest programs, then memo can do that! Similarly, memo can handle continuous domains by discretizing finely.)

The aforementioned cases are explicitly out of scope for memo. The upshot is that by specializing memo to a particular commonly-used class of models and inference strategies, we are able to produce extremely fast code that is difficult for general-purpose PPLs to produce.
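The "discretize finely" trick mentioned above can be sketched in plain NumPy (standing in for the array programs memo generates; the coin-flip setup is a hypothetical example, not memo's API): enumerative inference over a continuous parameter becomes exact inference over a finite grid.

```python
import numpy as np

# Hypothetical illustration: infer a coin's bias from 7 heads in 10 flips
# by discretizing the continuous parameter onto a finite grid -- the trick
# that lets tabular enumerative inference handle continuous domains.
grid = np.linspace(0.01, 0.99, 99)       # discretized support for the bias
prior = np.ones_like(grid) / grid.size   # uniform prior over the grid
likelihood = grid**7 * (1 - grid)**3     # binomial kernel for 7/10 heads
posterior = prior * likelihood
posterior /= posterior.sum()             # normalize over the finite support
mean = float(grid @ posterior)           # posterior mean of the bias (~2/3)
```

A finer grid trades memory for accuracy; the computation stays a fixed sequence of array operations either way.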

Okay, so how does memo produce such fast code?

memo compiles enumerative inference to JAX array programs, which can be run extremely fast. The reason for this is that array programs are inherently very easy to execute in parallel (by performing operations on each element of the array independently), and modern hardware is particularly good at parallel processing.
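To see why recursive reasoning reduces to array operations, here is a one-step Rational Speech Acts model sketched in plain NumPy (standing in for the JAX code memo actually emits; the truth table is a made-up toy example). Each level of reasoning about reasoning is just a normalization of the same matrix along a different axis:

```python
import numpy as np

# Toy truth table: rows = utterances, columns = world states.
meanings = np.array([
    [1.0, 1.0, 0.0],   # utterance u1 is true in worlds w0, w1
    [0.0, 1.0, 1.0],   # utterance u2 is true in worlds w1, w2
])
# Literal listener: condition on the utterance being true (normalize rows).
L0 = meanings / meanings.sum(axis=1, keepdims=True)
# Pragmatic speaker: prefer informative utterances (normalize columns).
S1 = L0 / L0.sum(axis=0, keepdims=True)
# Pragmatic listener: Bayesian update on the speaker (normalize rows again).
L1 = S1 / S1.sum(axis=1, keepdims=True)
```

Every step is an elementwise operation or a reduction over one axis of a matrix, which is exactly the kind of workload GPUs execute in parallel.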

What exactly is JAX?

JAX is a library developed by Google that takes Python array programs (similar to NumPy) and compiles them to very fast code that can run on CPUs and GPUs, taking advantage of modern hardware features. JAX underpins much of Google's deep-learning work, because neural networks are built from array operations. memo compiles your probabilistic models into JAX array programs, and JAX in turn compiles those array programs into machine code.

Note that JAX has some unintuitive behaviors. We recommend reading this guide to get a sense of its "sharp edges."
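One such sharp edge, as a minimal sketch: unlike NumPy arrays, JAX arrays are immutable, so in-place assignment fails and you must use functional updates instead.

```python
import jax.numpy as jnp

x = jnp.zeros(3)
# x[0] = 1.0 would raise a TypeError: JAX arrays are immutable.
y = x.at[0].set(1.0)   # functional update: returns a NEW array
```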

I installed memo but importing memo gives an error.

Did you accidentally pip-install the (unrelated) package memo instead of memo-lang?

I installed memo on my Mac, but running models gives a weird JAX error about "AVX".

The most common cause is that you have a modern Mac (with an ARM processor) but an old version of Python (compiled for x86). We recommend the following installation strategy on ARM-based Macs:

  1. Do not use conda.
  2. Install Homebrew. Make sure you have the ARM version of brew: brew --prefix should be /opt/homebrew, and brew config should say Rosetta 2: false. If this is not the case, you have the x86 version of brew, which you should uninstall.
  3. Install Python via brew install python3. Ensure that python3 --version works as expected, and that which python3 points to something in /opt/homebrew/bin/.
  4. In your project directory, create a virtual environment via python3 -m venv venv.
  5. Activate the virtual environment via . venv/bin/activate. Your prompt should now begin with (venv).
  6. Install memo via pip install memo-lang.

Can I run memo on Apple's "metal" platform?

Yes! See this issue for details: https://github.com/kach/memo/issues/66

Some of my output array's dimensions are unexpectedly of size 1.

memo attempts to minimize redundant computation. If the output of your model doesn't depend on an input axis, then instead of repeating the computation along that axis, memo will set that axis to size 1. The idea is that broadcasting will keep the array compatible with downstream computations.

As an example, consider the following models:

import numpy as np
from memo import memo

X = np.arange(10)

@memo
def f[a: X, b: X]():
    return a
f().shape  # (10, 1) because output is independent of b

@memo
def f[a: X, b: X]():
    return b
f().shape  # (1, 10) because output is independent of a

@memo
def f[a: X, b: X]():
    return a + b
f().shape  # (10, 10) because output depends on a and b

@memo
def f[a: X, b: X]():
    return 999
f().shape  # (1, 1) because output depends on neither a nor b
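The size-1 axes above are safe to combine with full-size results precisely because of broadcasting, which the NumPy stand-in below illustrates (the arrays here are made up, not memo outputs):

```python
import numpy as np

# A (10, 1) result and a (1, 10) result combine into the full (10, 10)
# table with no explicit tiling: broadcasting expands size-1 axes on demand.
a_only = np.arange(10).reshape(10, 1)   # like a memo output independent of b
b_only = np.arange(10).reshape(1, 10)   # like a memo output independent of a
full = a_only + b_only                  # broadcasts to shape (10, 10)
```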

How can I visualize what's going on with my model in "comic-book" format?

Use @memo(save_comic="filename") instead of just @memo. memo will produce a Graphviz filename.dot file that you can render online. If you have Graphviz installed, memo will also automatically render a filename.png file for you.
