A package of useful functions to analyze transformer-based language models.
minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models
This repo is a wrapper around the transformers library from Hugging Face 🤗
Installation
Install from PyPI using:
pip install minicons
Supported Functionality
- Extract word representations from contextualized word embeddings
- Score sequences using language model scoring techniques, including masked language models following Salazar et al. (2020).
Examples
- Extract word representations from contextualized word embeddings:
from minicons import cwe

model = cwe.CWE('bert-base-uncased')

context_words = [("I went to the bank to withdraw money.", "bank"),
                 ("i was at the bank of the river ganga!", "bank")]

print(model.extract_representation(context_words, layer = 12))
'''
tensor([[ 0.5399, -0.2461, -0.0968,  ..., -0.4670, -0.5312, -0.0549],
        [-0.8258, -0.4308,  0.2744,  ..., -0.5987, -0.6984,  0.2087]],
       grad_fn=<MeanBackward1>)
'''
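As a quick sanity check, the two contextualized `bank` vectors can be compared directly. A minimal sketch using PyTorch's cosine similarity; the `reps` variable is introduced here purely for illustration:

import torch.nn.functional as F

# Reusing `model` and `context_words` from the snippet above.
reps = model.extract_representation(context_words, layer = 12)

# Cosine similarity between the two contextualized "bank" vectors; a lower
# value reflects the two distinct senses (financial bank vs. river bank).
print(F.cosine_similarity(reps[0], reps[1], dim = 0).item())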
- Compute sentence acceptability measures (surprisals) using word prediction models:
from minicons import scorer

mlm_model = scorer.MaskedLMScorer('bert-base-uncased', 'cpu')
ilm_model = scorer.IncrementalLMScorer('distilgpt2', 'cpu')
s2s_model = scorer.Seq2SeqScorer('t5-base', 'cpu')

stimuli = ["The keys to the cabinet are on the table.",
           "The keys to the cabinet is on the table."]
# use sequence_score with different reduction options:
# Sequence Surprisal - lambda x: -x.sum(0).item()
# Sequence Log-probability - lambda x: x.sum(0).item()
# Sequence Surprisal, normalized by number of tokens - lambda x: -x.mean(0).item()
# Sequence Log-probability, normalized by number of tokens - lambda x: x.mean(0).item()
# and so on...
print(ilm_model.sequence_score(stimuli, reduction = lambda x: -x.sum(0).item()))
'''
[39.879737854003906, 42.75846481323242]
'''
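# Token-level measures are also available via token_score (see Recent Updates
# below). A minimal sketch; the `surprisal` and `base_two` keyword arguments
# follow the minicons surprisal tutorial and should be treated as assumptions:
for sentence in ilm_model.token_score(stimuli, surprisal = True, base_two = True):
    # one list of (token, score) tuples per input sentence
    print(sentence)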
# MLM scoring, inspired by Salazar et al., 2020
print(mlm_model.sequence_score(stimuli, reduction = lambda x: -x.sum(0).item()))
'''
[13.962685585021973, 23.415111541748047]
'''
# Seq2seq scoring
## Blank source sequence, target sequence specified in `stimuli`
print(s2s_model.sequence_score(stimuli, source_format = 'blank'))
## Source sequence is the same as the target sequence in `stimuli`
print(s2s_model.sequence_score(stimuli, source_format = 'copy'))
'''
[-7.910910129547119, -7.835635185241699]
[-10.555519104003906, -9.532546997070312]
'''
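Because lower total surprisal indicates a higher model preference, these scores directly support minimal-pair acceptability comparisons. A small sketch built only on the `sequence_score` calls shown above:

# The sentence with the lower total surprisal is the one the model
# finds more acceptable in this minimal pair.
surprisals = ilm_model.sequence_score(stimuli, reduction = lambda x: -x.sum(0).item())
print(stimuli[surprisals.index(min(surprisals))])
# expected: "The keys to the cabinet are on the table."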
Tutorials
- Introduction to using LM-scoring methods using minicons
- Computing sentence and token surprisals using minicons
- Extracting word/phrase representations using minicons
Recent Updates
- November 6, 2021: MLM scoring has been fixed! You can now use `model.token_score()` and `model.sequence_score()` with `MaskedLMScorer`s as well!
- June 4, 2022: Added support for Seq2seq models. Thanks to Aaron Mueller 🥳
Citation
If you use `minicons`, please cite the following paper:
@article{misra2022minicons,
    title={minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models},
    author={Kanishka Misra},
    journal={arXiv preprint arXiv:2203.13112},
    year={2022}
}