Bayesian model management
Project description
bambi model management (bammm)
Estimating complex models can take time. bammm allows one to save estimated models and to later load them as required, without the need to re-estimate them.
Currently it is built to work only with (generalized) linear models in Python that have been estimated using bambi, but in essence it can be used to save samples from any PyMC model.
Functionality
- Creation and management of a database of local models (a simple JSON file)
- Specification of the model (dependent variable, fixed and random effects) and of the estimation parameters (number of chains, samples per chain, cores)
- Automatic building of the regression equation (the input to bambi) using mm.generate_equation()
- Model estimation using mm.estimate_lmm()
- Saving (mm.save_model_info()) and updating (mm.update_model_entry()) of estimated model information
- Loading of a previously estimated model (mm.estimate_lmm())
Usage
Setup
Install using pip:
pip install bammm-ozika
Load
import bammm.bammm as mm
Full example script can be found here.
Generate your local JSON database
Before first use on a project a local database needs to be generated. Specify a path for the database:
import os

os.chdir(root_dir)
db_path = os.path.join("demo", "my_project", "database_name.json")
Initialize database:
mm.models_init(db_path)
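The internals of mm.models_init belong to bammm, but assuming the database is a plain JSON dictionary keyed by model name, its effect can be sketched roughly as follows (models_init_sketch is a hypothetical name, not bammm's actual code):

```python
import json
import os

def models_init_sketch(db_path):
    """Create an empty JSON model database at db_path (illustrative sketch)."""
    parent = os.path.dirname(db_path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    # only create the file if it does not already exist, so an
    # existing database is never overwritten
    if not os.path.exists(db_path):
        with open(db_path, "w") as f:
            json.dump({}, f)
```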
Fitting a new linear model
Load database
import json

models = json.load(open(db_path, "r"))
Specify the linear model. The idea is to have a specific model name and a "group/family" for each model. bammm will create a folder (model_family) and save the data of individual models there using pickle.
import bambi as bmb

# load data that come with bambi
data = bmb.load_data("sleepstudy")
model_family = "sleepstudy"  # this will be used as a folder name to host the models
model_identifier = "maximum_model"
# initialize the model specification dictionary (a plain nested dict)
mod = {"lmm": {}, "est": {}}

# dependent variable
mod["lmm"]["dep_var"] = "Reaction"  # Reaction time
# fixed effects
mod["lmm"]["fxeff"] = ["Days"] # longitudinal data set
# random effects
mod["lmm"]["rneff"] = ["Days|Subject"]
# build equation
mod["lmm"]["eq"] = mm.generate_equation(mod["lmm"]["dep_var"], mod["lmm"]["fxeff"], mod["lmm"]["rneff"])
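The generated equation follows bambi's lme4-style formula syntax. A hypothetical re-implementation matching the inputs above could look like this (generate_equation_sketch is my name; bammm's actual output format may differ):

```python
def generate_equation_sketch(dep_var, fxeff, rneff):
    """Build an lme4-style formula string for bambi (illustrative sketch)."""
    # fixed effects appear as-is; random effects are wrapped in parentheses
    terms = list(fxeff) + [f"({r})" for r in rneff]
    return f"{dep_var} ~ " + " + ".join(terms)

# For the example above:
# generate_equation_sketch("Reaction", ["Days"], ["Days|Subject"])
# yields "Reaction ~ Days + (Days|Subject)"
```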
Specify estimation parameters.
# fitting information
mod["est"]["nchains"] = 2
mod["est"]["nsamples"] = 4000
mod["est"]["ncores"] = 2 # number of cores to be used in fitting
Specify paths and create relevant strings.
mod = mm.prepare_fit(mod, model_family, model_identifier, models_path)
# save model (it's a good idea to load/save the DB often, especially if one runs multiple models at the same time)
models[mod["name"]] = mod
mm.save_model_info(models, db_path)
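Since the database is a plain JSON file, mm.save_model_info presumably amounts to dumping the in-memory dictionary back to disk; a hedged sketch (save_model_info_sketch is a hypothetical name):

```python
import json

def save_model_info_sketch(models, db_path):
    """Write the in-memory model dictionary back to the JSON database (illustrative sketch)."""
    with open(db_path, "w") as f:
        json.dump(models, f, indent=2)
```

Saving often, as the comment above suggests, keeps the file consistent when several models are being fitted in parallel.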
Estimate the model. This also automatically saves the fitted model to the location specified in models_path.
mod, results, m = mm.estimate_lmm(mod, data, override=0)
mm.update_model_entry(models, mod, db_path)
Load previously fitted model
It is rather simple:
# load database
models = json.load(open(db_path, "r"))
model_name = "sleepstudy_maximum_model_2_4000" # model estimated above
mod = models[model_name]
# load model
mod, results, m = mm.estimate_lmm(mod, [], override=0)
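Judging from the example, the database key appears to follow the pattern family_identifier_nchains_nsamples; this is inferred from the example string, not from bammm's source, but it can be sketched as:

```python
def build_model_name(model_family, model_identifier, nchains, nsamples):
    """Reconstruct the database key from its parts (pattern inferred from the example)."""
    return f"{model_family}_{model_identifier}_{nchains}_{nsamples}"

# build_model_name("sleepstudy", "maximum_model", 2, 4000)
# yields "sleepstudy_maximum_model_2_4000"
```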
This might not be the best way to achieve model saving/loading, but it seems to work well (at least for the poor practitioner of statistics that is me).
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file bammm-ozika-0.0.8.tar.gz.
File metadata
- Download URL: bammm-ozika-0.0.8.tar.gz
- Upload date:
- Size: 4.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.7.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | fe4f3ca740c52ff1006a0dc85bc57aef17f1e34a987d68696e29f6d085010cd0
MD5 | e3810edd4ac788c0a4b68705cd8fa0d7
BLAKE2b-256 | f6786930abbfb50312d3f6dbbf6dc13ddb24cbd1bff9096d8fa4456f132ee526
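To verify a downloaded archive against the SHA256 digest listed above, one can hash it locally with standard-library Python (this helper is not part of bammm):

```python
import hashlib

def sha256_of(path):
    """Return the hex SHA256 digest of a file, read in chunks to keep memory use low."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare the returned digest to the table entry; a mismatch means the download is corrupted or tampered with.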
File details
Details for the file bammm_ozika-0.0.8-py3-none-any.whl.
File metadata
- Download URL: bammm_ozika-0.0.8-py3-none-any.whl
- Upload date:
- Size: 5.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.7.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | e57f225f0511a133966e749976d23a18b8b78c1dd88f5ed4de8246a36cc27ea8
MD5 | 7ffd20cb392f5e87a1e7ec25165dbea2
BLAKE2b-256 | dbded49acefb8de1fb5bd90576d07a5c2937b04a7efa97059ecd58a17d4a3110