Toolkit for Neuron Analysis in Deep NLP Models
Project description
NeuroX Toolkit
NeuroX provides all the necessary tooling to perform interpretation and analysis of (deep) neural networks, centered around probing. Specifically, the toolkit provides the following (a short extraction sketch follows the list):
- Support for extracting activations from popular models, including all models in the transformers library, with extended support for other models like OpenNMT-py planned in the near future.
- Support for training linear probes on top of these activations: on the entire activation space of a model, on specific layers, or even on specific sets of neurons.
- Support for neuron extraction related to specific concepts, using the Linear Correlation Analysis method (What is one Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models). The toolkit can extract either a local ranking of neurons important to a particular target class, or a global ranking of neurons important to all the target classes.
- Support for ablation analysis by either removing or zeroing out specific neurons to determine their function and importance.
- Support for subword and character level aggregation across a variety of tokenizers, including BPE and all tokenizers in the transformers library.
- Support for activation visualization over regular text, to generate qualitative samples of neuron activity over particular sentences.
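As a quick illustration, here is a minimal extraction sketch in Python. The module path and function signature follow the NeuroX API reference, but treat them as assumptions and verify them against the current documentation; the input and output file names are hypothetical.

import neurox.data.extraction.transformers_extractor as transformers_extractor

# Extract layer-wise activations for every sentence in a plain-text file
# (one sentence per line) and save them in NeuroX's JSON format.
transformers_extractor.extract_representations(
    "bert-base-uncased",      # any model name from the transformers hub
    "sentences.txt",          # hypothetical input file, one sentence per line
    "bert_activations.json",  # hypothetical output file for the activations
    aggregation="average",    # subword aggregation strategy
)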
A demo showcasing much of the functionality provided by this toolkit is available.
Getting Started
This toolkit requires, and is tested on, Python 3.6 and above. It may work with older Python versions with some fiddling, but is currently neither tested nor supported. The easiest way to get started is to use the published pip package:
pip install neurox
Manual Installation
If you wish to install this package manually (e.g. to modify or contribute to the code base), you can clone this repository into a directory of your choice:
git clone https://github.com/fdalvi/NeuroX.git
Add the directory to your Python path. This can be done dynamically at runtime using the sys.path list:
import sys
sys.path.append("path/to/cloned/NeuroX/")
A Conda environment is provided with all the necessary dependencies for the toolkit. The toolkit primarily relies on PyTorch and NumPy for most of its operations. To create a new environment with all the dependencies, run:
conda env create -f conda-environment.yml -n neurox-toolkit
conda activate neurox-toolkit
If you wish to manage your environment in other ways, a standard requirements.txt is also provided for use with pip directly.
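For example, assuming requirements.txt sits at the root of the cloned repository:

pip install -r requirements.txt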
Sample Code
A Jupyter notebook with a complete example of extracting activations from BERT, training a probe on a toy task, extracting neurons, and visualizing them is available in the examples directory, and serves as a quick introduction to the main functionality provided by this toolkit.
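For reference, the sketch below condenses that pipeline into a few calls. Function names follow the NeuroX API reference; treat the exact signatures, the file names, and the "NN" task-specific tag as assumptions, and defer to the notebook for the authoritative version.

import neurox.data.loader as data_loader
import neurox.interpretation.utils as utils
import neurox.interpretation.linear_probe as linear_probe

# Load previously extracted activations (see the extraction sketch above).
activations, num_layers = data_loader.load_activations("bert_activations.json")

# Pair each token with its activation vector and its (hypothetical) label.
tokens = data_loader.load_data("sentences.txt", "labels.txt", activations, 512)
X, y, mapping = utils.create_tensors(tokens, activations, "NN")
label2idx, idx2label, src2idx, idx2src = mapping

# Train a linear probe on the full activation space with elastic-net
# regularization, evaluate it, and rank neurons by importance.
probe = linear_probe.train_logistic_regression_probe(
    X, y, lambda_l1=0.001, lambda_l2=0.001
)
linear_probe.evaluate_probe(probe, X, y, idx_to_class=idx2label)
ordering, cutoffs = linear_probe.get_neuron_ordering(probe, label2idx)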
Documentation
The API Reference documents all of the functions exposed by this toolkit. Primarily, the toolkit's functionality is separated into several high-level components:
- Extraction
- Data Preprocessing
- Linear Probing
- Neuron extraction and interpretation
- Neuron cluster analysis
- Visualization
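As rough orientation, these components map onto NeuroX modules approximately as follows; the module paths are taken from the API reference and should be treated as assumptions that may change between releases.

# Approximate component-to-module mapping (verify against the API reference):
import neurox.data.extraction.transformers_extractor  # Extraction
import neurox.data.loader                             # Data preprocessing
import neurox.interpretation.linear_probe             # Linear probing
import neurox.interpretation.ablation                 # Neuron interpretation and ablation
import neurox.analysis.visualization                  # Visualization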
Citation
Please cite our paper published at AAAI'19 if you use this toolkit.
@article{dalvi2019neurox,
    title={NeuroX: A Toolkit for Analyzing Individual Neurons in Neural Networks},
    author={Dalvi, Fahim and Nortonsmith, Avery and Bau, D Anthony and Belinkov, Yonatan and Sajjad, Hassan and Durrani, Nadir and Glass, James},
    journal={Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},
    year={2019}
}
Planned features
- Pip package (published; see Getting Started above)
- Support for OpenNMT-py models
- Support for control tasks and computing metrics like selectivity
- Support for attention and other module analysis
Publications
- Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani (2021). Fine-grained Interpretation and Causation Analysis in Deep NLP Models. In Proceedings of the 18th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), Virtual, June.
- Nadir Durrani, Hassan Sajjad, Fahim Dalvi (2021). How transfer learning impacts linguistic knowledge in deep NLP models? In Findings of the Association for Computational Linguistics (ACL-IJCNLP), Virtual, August.
- Yonatan Belinkov*, Nadir Durrani*, Fahim Dalvi, Hassan Sajjad, Jim Glass (2020). On the Linguistic Representational Power of Neural Machine Translation Models. Computational Linguistics, 46(1), pages 1-57. (*Equal Contribution, Alphabetical Order)
- Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Yonatan Belinkov (2020). Analyzing Individual Neurons in Pre-trained Language Models. In Proceedings of the 17th Conference on Empirical Methods in Natural Language Processing (EMNLP), Punta Cana, Dominican Republic, November.
- Fahim Dalvi, Hassan Sajjad, Nadir Durrani, Yonatan Belinkov (2020). Analyzing Redundancy in Pretrained Transformer Models. In Proceedings of the 17th Conference on Empirical Methods in Natural Language Processing (EMNLP), Punta Cana, Dominican Republic, November.
- John M Wu*, Yonatan Belinkov*, Hassan Sajjad, Nadir Durrani, Fahim Dalvi and James Glass (2020). Similarity Analysis of Contextual Word Representation Models. In Proceedings of the 58th Annual Conference of the Association for Computational Linguistics (ACL), Seattle, USA, July. (*Equal Contribution, Alphabetical Order)
- Anthony Bau*, Yonatan Belinkov*, Hassan Sajjad, Fahim Dalvi, Nadir Durrani, and James Glass (2019). Identifying and Controlling Important Neurons in Neural Machine Translation. In Proceedings of the 7th International Conference on Learning Representations (ICLR), New Orleans, USA, May. (*Equal Contribution, Alphabetical Order)
- Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Yonatan Belinkov, and Preslav Nakov (2019). One Size Does Not Fit All: Comparing NMT Representations of Different Granularities. In Proceedings of the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), Minneapolis, USA, June.
- Fahim Dalvi*, Nadir Durrani*, Hassan Sajjad*, Yonatan Belinkov, D. Anthony Bau, and James Glass (2019). What is one Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI), Honolulu, USA, January. (*Equal Contribution, Alphabetical Order)
- Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James Glass (2017). What do Neural Machine Translation Models Learn about Morphology? In Proceedings of the 55th Annual Conference of the Association for Computational Linguistics (ACL), Vancouver, Canada, July.
- Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov and Stephan Vogel (2017). Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder. In Proceedings of the 8th International Conference on Natural Language Processing (IJCNLP), Taipei, Taiwan, November.
- Yonatan Belinkov, Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi and James Glass (2017). Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks. In Proceedings of the 8th International Conference on Natural Language Processing (IJCNLP), Taipei, Taiwan, November.