
DeepIRTools

Deep Learning-Based Estimation and Inference for Item Response Theory Models


DeepIRTools is a small PyTorch-based Python package that uses scalable deep learning methods to fit a number of different confirmatory and exploratory latent factor models, with a particular focus on item response theory (IRT) models. Graphics processing unit (GPU) support is available for most computations.

Description

Latent factor models reduce the dimensionality of data by converting a large number of discrete or continuous observed variables (called items) into a smaller number of continuous unobserved variables (called latent factors), potentially making the data easier to understand. Latent factor models for discrete items are called item response theory (IRT) models.
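As a toy illustration of this idea (not DeepIRTools code — all dimensions and parameter values below are made up), a normal linear factor model generates each observation as a loadings matrix times a latent factor vector plus item-specific noise, so six observed items are driven by only two underlying factors:

```python
import random

# Toy sketch: simulate data from a normal (linear) factor model,
# x = loadings @ f + noise, with invented dimensions and values.
random.seed(0)

n_items, n_factors, n_obs = 6, 2, 100

# Simple structure: items 0-2 load on factor 0, items 3-5 on factor 1.
loadings = [[1.0 if j == i // 3 else 0.0 for j in range(n_factors)]
            for i in range(n_items)]

def sample_observation():
    f = [random.gauss(0, 1) for _ in range(n_factors)]  # latent factors
    return [sum(loadings[i][j] * f[j] for j in range(n_factors))
            + random.gauss(0, 0.5)  # unique (error) term
            for i in range(n_items)]

data = [sample_observation() for _ in range(n_obs)]
print(len(data), len(data[0]))  # → 100 6
```

Estimation runs this generative story in reverse: given only `data`, recover the loadings and score each observation on the two factors.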

Traditional maximum likelihood (ML) estimation methods for IRT models are computationally intensive when the sample size, the number of items, and the number of latent factors are all large. This issue can be avoided by approximating the ML estimator using an importance-weighted amortized variational estimator (I-WAVE) from the field of deep learning (for details, see Urban and Bauer, 2021). As an estimation byproduct, I-WAVE allows researchers to compute approximate factor scores and log-likelihoods for any observation — even new observations that were not used for model fitting.
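The "importance-weighted" part of I-WAVE refers to estimating the marginal log-likelihood via the log of an average of importance weights. The sketch below shows only the numerically stable log-mean-exp step that such estimators rely on; it is an illustration of the general technique, not the package's internals:

```python
import math

def log_mean_exp(log_weights):
    """Compute log(mean(exp(log_weights))) without overflow by
    subtracting the maximum log-weight before exponentiating."""
    m = max(log_weights)
    return m + math.log(
        sum(math.exp(lw - m) for lw in log_weights) / len(log_weights)
    )

# With raw importance weights 1, 2, and 4, the estimate is log((1+2+4)/3).
log_w = [math.log(1.0), math.log(2.0), math.log(4.0)]
print(log_mean_exp(log_w))  # ≈ 0.8473, i.e., log(7/3)
```

Averaging more importance-weighted samples (the `iw_samples` argument to `fit()` in the quick example below) tightens the resulting lower bound on the log-likelihood at extra computational cost.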

DeepIRTools' main functionality is the stand-alone IWAVE class contained in the iwave module. This class includes fit(), scores(), and log_likelihood() methods for fitting a latent factor model and for computing approximate factor scores and log-likelihoods for the fitted model.

The following (multidimensional) latent factor models are currently available...

  • ...for binary and ordinal items:
    • Graded response model
    • Generalized partial credit model
  • ...for continuous items:
    • Normal (linear) factor model
    • Lognormal factor model
  • ...for count data:
    • Poisson factor model
    • Negative binomial factor model
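To make the first of these concrete: in a unidimensional graded response model, cumulative category probabilities follow a logistic curve, and each category's probability is the difference of adjacent cumulative ones. A hypothetical numerical sketch (one common parameterization; the parameter values are invented, and conventions for the intercepts' signs vary):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def grm_probs(theta, loading, intercepts):
    """Category probabilities for one graded response model item.

    The item has len(intercepts) + 1 ordered categories, and
    P(Y >= k) = sigmoid(loading * theta + intercepts[k - 1]),
    with intercepts in decreasing order.
    """
    cum = ([1.0]
           + [sigmoid(loading * theta + b) for b in intercepts]
           + [0.0])
    return [cum[k] - cum[k + 1] for k in range(len(intercepts) + 1)]

# Three-category item (as in the quick example below), made-up parameters.
probs = grm_probs(theta=0.5, loading=1.4, intercepts=[1.5, -1.2])
print(probs, sum(probs))  # three probabilities summing to 1
```

The generalized partial credit model differs only in how the category probabilities are built from the linear predictor, not in the shape of the inputs.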

DeepIRTools supports mixing item types, handling data that are missing completely at random, and predicting the mean of the latent factors with covariates (i.e., latent regression modeling); all models are estimable in both confirmatory and exploratory contexts. In the confirmatory context, constraints on the factor loadings, intercepts, and factor covariance matrix are imposed by providing appropriate arguments to fit(). In the exploratory context, the screeplot() function in the figures module may help identify the number of latent factors underlying the data.
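A typical loadings constraint is the Q matrix passed in the quick example below: commonly, such matrices are binary indicators of which items may load on which factors. A plain-Python sketch of the same block-diagonal simple-structure pattern that the example builds with torch.block_diag (the helper name here is invented for illustration):

```python
def simple_structure_q(items_per_factor, n_factors):
    """Binary Q matrix in which each consecutive block of items
    loads on exactly one factor (simple structure)."""
    n_items = items_per_factor * n_factors
    return [[1.0 if j == i // items_per_factor else 0.0
             for j in range(n_factors)]
            for i in range(n_items)]

# 12 items, 4 factors: rows 0-2 load only on factor 0, rows 3-5 on
# factor 1, and so on, mirroring torch.block_diag(*[torch.ones([3, 1])] * 4).
Q = simple_structure_q(items_per_factor=3, n_factors=4)
for row in Q:
    print(row)
```

Entries fixed to zero stay zero during fitting, which is what makes the model confirmatory rather than exploratory.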

Requirements

  • Python 3.7 or higher
  • torch
  • pyro-ppl
  • numpy

Installation

To install the latest version:

pip install deepirtools

Documentation

Official documentation is available here.

Examples

Tutorial

examples/big_5_tutorial.ipynb gives a tutorial on using DeepIRTools to fit several kinds of latent factor models using large-scale data.

Quick Example

In [1]: import deepirtools
   ...: from deepirtools import IWAVE
   ...: import torch

In [2]: deepirtools.manual_seed(123)

In [3]: data = deepirtools.load_grm()["data"]

In [4]: n_items = data.shape[1]

In [5]: model = IWAVE(
   ...:       model_type = "grm",
   ...:       latent_size = 4,
   ...:       n_cats = [3] * n_items,
   ...:       Q = torch.block_diag(*[torch.ones([3, 1])] * 4),
   ...:       correlated_factors = [0, 1, 2, 3],
   ...: )

Initializing model parameters
Initialization ended in  0.0  seconds

In [6]: model.fit(data, iw_samples = 5)

Fitting started
Epoch =     846 Iter. =  27101 Cur. loss =   11.15   Intervals no change = 100
Fitting ended in  95.14  seconds

In [7]: model.loadings # Loadings matrix.
Out[7]: 
tensor([[1.4004, 0.0000, 0.0000, 0.0000],
        [1.3816, 0.0000, 0.0000, 0.0000],
        [0.5557, 0.0000, 0.0000, 0.0000],
        [0.0000, 0.5833, 0.0000, 0.0000],
        [0.0000, 1.0996, 0.0000, 0.0000],
        [0.0000, 1.7175, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.7294, 0.0000],
        [0.0000, 0.0000, 0.5775, 0.0000],
        [0.0000, 0.0000, 1.1082, 0.0000],
        [0.0000, 0.0000, 0.0000, 1.6827],
        [0.0000, 0.0000, 0.0000, 0.7021],
        [0.0000, 0.0000, 0.0000, 0.6706]])

In [8]: model.intercepts # Category intercepts.
Out[8]: 
tensor([[-1.2907,  1.4794],
        [-0.6921,  1.2275],
        [-0.4097,  0.3086],
        [-2.0435,  1.3194],
        [-2.8560,  1.0286],
        [-0.2557,  1.9871],
        [-1.6538,  0.6874],
        [-0.4569,  0.8666],
        [-1.2310,  1.7704],
        [-1.1810,  0.2015],
        [-0.6825,  2.5192],
        [-2.8031,  2.7023]])

In [9]: model.cov # Factor covariance matrix.
Out[9]: 
tensor([[1.0000, 0.1679, 0.1489, 0.2227],
        [0.1679, 1.0000, 0.1406, 0.2248],
        [0.1489, 0.1406, 1.0000, 0.1452],
        [0.2227, 0.2248, 0.1452, 1.0000]])
        
In [10]: model.log_likelihood(data) # Approximate log-likelihood.

Computing approx. LL
Approx. LL computed in 3.81 seconds
Out[10]: -11352.973602294922

In [11]: model.scores(data) # Approximate factor scores.
Out[11]: 
tensor([[-0.6504, -0.1423,  0.7591, -1.7465],
        [ 0.7054, -1.0571, -0.0198, -2.4142],
        [ 0.4145, -0.7144,  1.2089,  0.6287],
        ...,
        [-0.3914,  1.4080,  0.1451,  0.2159],
        [ 1.7497,  0.0664, -1.8161, -0.8235],
        [ 0.6082, -0.2060, -0.1357,  0.7942]])

Citation

To cite DeepIRTools in publications, use:

  • Urban, C. J., & He, S. (2022). DeepIRTools: Deep learning-based estimation and inference for item response theory models [Computer software]. https://github.com/cjurban/deepirtools

To cite the method, use:

  • Urban, C. J., & Bauer, D. J. (2021). A deep learning algorithm for high-dimensional exploratory item factor analysis. Psychometrika, 86(1), 1–29.

and/or:

  • Urban, C. J. (2021). Machine learning-based estimation and goodness-of-fit for large-scale confirmatory item factor analysis (Publication No. 28772217) [Master's thesis, University of North Carolina at Chapel Hill]. ProQuest Dissertations Publishing. https://www.proquest.com/docview/2618877227/21C6C467D6194C1DPQ/

BibTeX entries for LaTeX users are:

@Manual{DeepIRTools,
    title = {{D}eep{IRT}ools: {D}eep learning-based estimation and inference for item response theory models},
    author = {Urban, Christopher J. and He, Shara},
    year = {2022},
    note = {Python package},
    url = {https://github.com/cjurban/deepirtools},
}
@article{UrbanBauer2021,
    author = {Urban, Christopher J. and Bauer, Daniel J.},
    year = {2021},
    title = {{A} deep learning algorithm for high-dimensional exploratory item factor analysis},
    journal = {Psychometrika},
    volume = {86},
    number = {1},
    pages = {1--29}
}
@phdthesis{Urban2021,
    author  = {Urban, Christopher J.},
    title   = {{M}achine learning-based estimation and goodness-of-fit for large-scale confirmatory item factor analysis},
    publisher = {ProQuest Dissertations Publishing},
    school  = {University of North Carolina at Chapel Hill},
    year    = {2021},
    type    = {Master's thesis},
}
