
py-irt

Bayesian IRT models in Python

Overview

This repository includes code for fitting Item Response Theory (IRT) models using variational inference.

At present, the one-parameter logistic (1PL) model, also known as the Rasch model, the two-parameter logistic (2PL) model, and the four-parameter logistic (4PL) model are implemented. The user can specify whether vague or hierarchical priors are used. The three-parameter logistic (3PL) model is in the pipeline and will be added when available.
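For readers new to IRT, the four-parameter logistic item response function (of which the 1PL/Rasch and 2PL models are special cases) can be sketched in a few lines of plain Python. This is an illustration of the model family, not py-irt's internal implementation:

```python
import math

def irt_4pl(theta, a=1.0, b=0.0, c=0.0, d=1.0):
    """Probability that a subject with ability `theta` answers an item correctly.

    a: discrimination, b: difficulty, c: guessing floor, d: upper asymptote.
    Setting a=1, c=0, d=1 recovers the 1PL (Rasch) model.
    """
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# An average-ability subject on an average-difficulty item succeeds 50% of the time.
print(round(irt_4pl(0.0), 2))  # 0.5
```

py-irt fits the latent parameters (theta, a, b, c, d) of such models with variational inference rather than evaluating them directly like this.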

License

py-irt is licensed under the MIT license.

Installation

py-irt is now available on PyPI!

Pre-reqs

  1. Install PyTorch.
  2. Install Pyro.
  3. Install py-irt:

pip install py-irt

Or, install from source (requires Poetry):

git clone https://github.com/nd-ball/py-irt.git
cd py-irt
poetry install

Usage

Once installed from PyPI, you can fit an IRT model to the scored predictions on a dataset from the command line. For example, to fit the 4PL model to the scored predictions of different transformer models on the SQuAD dataset:

py-irt train 4pl ~/path/to/dataset/eg/squad.jsonlines /path/to/output/eg/test-4pl/
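Each line of the training input is a JSON record holding one subject's (e.g., one model's) scored responses keyed by item id, with 1 for correct and 0 for incorrect. A minimal script to write such a file (subject and item ids here are made up; verify the field names against the py-irt docs for your version):

```python
import json

# One record per subject; "responses" maps item id -> 1 (correct) or 0 (incorrect).
subjects = [
    {"subject_id": "bert-base", "responses": {"q1": 1, "q2": 0, "q3": 1}},
    {"subject_id": "roberta-large", "responses": {"q1": 1, "q2": 1, "q3": 1}},
]

with open("squad.jsonlines", "w") as f:
    for record in subjects:
        f.write(json.dumps(record) + "\n")
```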

Please see the EACL 2024 IRT4NLP tutorial, which showcases using py-irt from within Python rather than from the CLI.

FAQ

  1. What kind of output should I expect when running the command to train an IRT model?

You should see training progress logged to the console when you run the command given above.

  2. I tried installing py-irt using pip from PyPI. But when I try to run the command py-irt train 4pl ~/path/to/dataset/eg/squad.jsonlines /path/to/output/eg/test-4pl/, I get an error that says bash: py-irt: command not found. How do I fix this?

The CLI interface was introduced in PyPI version 0.2.1. If you are getting this error, try updating py-irt:

pip install --upgrade py-irt

Alternatively, you can install the latest version from GitHub:

git clone https://github.com/nd-ball/py-irt.git
cd py-irt
mv py_irt/cli.py .
python cli.py train 4pl ~/path/to/dataset/eg/squad.jsonlines /path/to/output/eg/test-4pl/

  3. How do I evaluate a trained IRT model?

If you have already trained an IRT model you can use the following command:

py-irt evaluate 4pl ~/path/to/data/best_parameters.json ~/path/to/data/test_pairs.jsonlines /path/to/output/eg/test-4pl/

Where test_pairs.jsonlines is a jsonlines file with the following format:

{"subject_id": "ken", "item_id": "q1"}
{"subject_id": "ken", "item_id": "q2"}
{"subject_id": "burt", "item_id": "q1"}
{"subject_id": "burt", "item_id": "q3"}

If you would like to both train and evaluate a model you can use the following command:

py-irt train-and-evaluate 4pl ~/path/to/data/squad.jsonlines /path/to/output/eg/test-4pl/

By default this will train a model on 90% of the provided data and evaluate on the remaining 10%. To change this behavior, add --evaluation all to the command above; the model will then train and evaluate on all of the data.
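The default split can be pictured as a simple shuffled hold-out over response records. The sketch below illustrates that 90/10 behavior; it is not py-irt's actual splitting code, which operates on subject-item response pairs:

```python
import random

def train_eval_split(records, eval_frac=0.1, seed=0):
    """Shuffle records and hold out `eval_frac` of them for evaluation."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_eval = max(1, int(len(shuffled) * eval_frac))
    return shuffled[n_eval:], shuffled[:n_eval]

train, eval_set = train_eval_split(list(range(100)))
print(len(train), len(eval_set))  # 90 10
```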

Citations

If you use this code, please consider citing the following papers:

@inproceedings{lalor2019emnlp,
  author    = {Lalor, John P and Wu, Hao and Yu, Hong},
  title     = {Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds},
  year      = {2019},
  booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing},
}
@inproceedings{rodriguez2021evaluation,
  title={Evaluation Examples Are Not Equally Informative: How Should That Change NLP Leaderboards?},
  author={Rodriguez, Pedro and Barrow, Joe and Hoyle, Alexander Miserlis and Lalor, John P and Jia, Robin and Boyd-Graber, Jordan},
  booktitle={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
  pages={4486--4503},
  year={2021}
}

Implementation is based on the following paper:

@article{natesan2016bayesian,
  title={Bayesian prior choice in IRT estimation using MCMC and variational Bayes},
  author={Natesan, Prathiba and Nandakumar, Ratna and Minka, Tom and Rubright, Jonathan D},
  journal={Frontiers in Psychology},
  volume={7},
  pages={1422},
  year={2016},
  publisher={Frontiers}
}

Contributing

This is research code. Pull requests and issues are welcome!

Questions?

Let me know if you have any requests, bugs, etc.

Email: john.lalor@nd.edu

Download files

Source distribution: py_irt-0.7.1.tar.gz (26.6 kB)

Built distribution: py_irt-0.7.1-py3-none-any.whl (46.9 kB)

File details

Details for the file py_irt-0.7.1.tar.gz.

File metadata

  • Download URL: py_irt-0.7.1.tar.gz
  • Upload date:
  • Size: 26.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.3.2 CPython/3.12.3 Linux/6.6.87.2-microsoft-standard-WSL2

File hashes

Hashes for py_irt-0.7.1.tar.gz
Algorithm Hash digest
SHA256 915cf896064f54293bd76c2b4477ebe587fef7a4e0316718d9f687e5adf9651d
MD5 26264278353eb5ea80a17380806b232e
BLAKE2b-256 2158fa85d0764863ac4ade8c9ef4ce28430fac0e57617c14cfaf1244f88ab7cf

See more details on using hashes here.

File details

Details for the file py_irt-0.7.1-py3-none-any.whl.

File metadata

  • Download URL: py_irt-0.7.1-py3-none-any.whl
  • Upload date:
  • Size: 46.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.3.2 CPython/3.12.3 Linux/6.6.87.2-microsoft-standard-WSL2

File hashes

Hashes for py_irt-0.7.1-py3-none-any.whl
Algorithm Hash digest
SHA256 d73c356d0dbacfcd6f022500bbc076f59c4bd924bba34ef676171d726cc86b00
MD5 55b808663dadce68900bc3f2f6df302e
BLAKE2b-256 dd00a7dd74bf3539c04f66ab744a6a0023cb4de94bbea6e73a0e80960f0b7975

See more details on using hashes here.
