
Extension of text_explainability for sensitivity testing (robustness, fairness)

Project description


Sensitivity testing (fairness & robustness) for text machine learning models


Extension of text_explainability

Uses the generic architecture of text_explainability to also include tests of safety (how safe the model is in production, i.e. which types of inputs it can handle), robustness (how generalizable the model is in production, e.g. stability when adding typos, or the effect of adding random unrelated data) and fairness (whether equal individuals are treated equally by the model, e.g. subgroup fairness on sex and nationality).

© Marcel Robeer, 2021

Quick tour

Safety: test if your model is able to handle different data types.

from text_sensitivity import RandomAscii, RandomEmojis, combine_generators

# Generate 10 strings with random ASCII characters
RandomAscii().generate_list(n=10)
# Generate 5 strings with random ASCII characters and emojis
combine_generators(RandomAscii(), RandomEmojis()).generate_list(n=5)
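To illustrate what a character-level generator like RandomAscii produces, here is a minimal, self-contained sketch using only the standard library; the function name and parameters are my own, not the package's API:

```python
import random
import string

def random_ascii_strings(n, min_len=1, max_len=32, seed=0):
    """Generate n strings of random printable ASCII characters."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = []
    for _ in range(n):
        length = rng.randint(min_len, max_len)
        out.append("".join(rng.choice(string.printable) for _ in range(length)))
    return out

samples = random_ascii_strings(10)
print(len(samples))  # 10
```

Feeding such strings to a model checks that its preprocessing does not crash on unexpected characters.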

Robustness: test whether your model performs equally for different entities ...

from text_sensitivity import RandomAddress, RandomEmail

# Random address of your current locale (default = 'nl')
RandomAddress(sep=', ').generate_list(n=5)

# Random e-mail addresses in Spanish ('es') and Portuguese ('pt'), including which country each e-mail is from
RandomEmail(languages=['es', 'pt']).generate_list(n=10, attributes=True)
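The idea behind attribute-aware generation can be sketched in plain Python. The data, function name, and the (string, attributes) pair shape below are my own assumptions for illustration, not the package's actual return format:

```python
import random

# Toy per-language data; the real package draws from proper locale data.
NAMES = {"es": ["lucia", "mateo"], "pt": ["joao", "ines"]}
DOMAINS = {"es": ["correo.es", "ejemplo.es"], "pt": ["correio.pt", "exemplo.pt"]}

def random_emails(languages, n, seed=0):
    """Return (email, attributes) pairs so each output stays traceable to its locale."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        lang = rng.choice(languages)
        email = f"{rng.choice(NAMES[lang])}@{rng.choice(DOMAINS[lang])}"
        rows.append((email, {"language": lang}))
    return rows

for email, attrs in random_emails(["es", "pt"], n=5):
    print(email, attrs)
```

Keeping the attributes alongside each generated string is what later allows performance to be broken down per subgroup.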

... and if it is robust under simple perturbations.

from text_sensitivity import compare_accuracy
from text_sensitivity.perturbation import to_upper, add_typos

# Is model accuracy equal when we change all sentences to uppercase?
compare_accuracy(env, model, to_upper)

# Is model accuracy equal when we add typos in words?
compare_accuracy(env, model, add_typos)
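The comparison above can be reimplemented in a few lines to show what is being measured. This is a conceptual sketch with a toy model and data, not the package's compare_accuracy (whose signature takes an environment object):

```python
def accuracy(model, data):
    """Fraction of (text, label) pairs the model predicts correctly."""
    return sum(model(text) == label for text, label in data) / len(data)

def compare_accuracy_sketch(model, data, perturbation):
    """Accuracy on the original data vs. on perturbed copies of it."""
    perturbed = [(perturbation(text), label) for text, label in data]
    return accuracy(model, data), accuracy(model, perturbed)

# Toy model: predicts 'pos' when the lowercased text contains 'good'.
model = lambda text: "pos" if "good" in text.lower() else "neg"
data = [("A good film", "pos"), ("Terrible plot", "neg")]

to_upper = str.upper
print(compare_accuracy_sketch(model, data, to_upper))  # (1.0, 1.0): model lowercases, so uppercasing is harmless
```

A robust model yields (near-)identical numbers before and after the perturbation; a large gap flags sensitivity to that transformation.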

Fairness: see if performance is equal among subgroups.

from text_sensitivity import RandomName

# Generate random Dutch ('nl') and Russian ('ru') names, both 'male' and 'female' (+ return attributes)
RandomName(languages=['nl', 'ru'], sex=['male', 'female']).generate_list(n=10, attributes=True)
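Once generated strings carry attributes, subgroup fairness reduces to comparing per-group performance. A minimal sketch with an invented model and data (the function below is illustrative, not the package's API):

```python
from collections import defaultdict

def subgroup_accuracy(model, data):
    """Per-group accuracy for (text, label, group) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for text, label, group in data:
        totals[group] += 1
        hits[group] += int(model(text) == label)
    return {g: hits[g] / totals[g] for g in totals}

# Toy model: predicts 'pos' only when the text contains 'great'.
model = lambda text: "pos" if "great" in text else "neg"
data = [
    ("great service for Anna", "pos", "female"),
    ("great service for Ivan", "pos", "male"),
    ("poor service for Olga", "neg", "female"),
    ("service for Boris", "pos", "male"),  # missed: no 'great' in the text
]
print(subgroup_accuracy(model, data))  # {'female': 1.0, 'male': 0.5}
```

Unequal per-group scores, as for 'male' here, are the signal a fairness test looks for.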


See the installation instructions for an extended installation guide.

Method Instructions
pip Install from PyPI via pip3 install text_sensitivity.
Local Clone this repository and install via pip3 install -e . or locally run python3 setup.py install.


Full documentation of the latest version is available online.

Example usage

See the example for how the package can be used, or run its lines to explore it interactively.


text_sensitivity is officially released through PyPI.

See the changelog for a full overview of the changes in each version.


@misc{text_sensitivity,
  title = {Python package text\_sensitivity},
  author = {Marcel Robeer},
  howpublished = {\url{}},
  year = {2021}
}



Tasks yet to be done:

  • Word-level perturbations
  • Add fairness-specific metrics:
    • Counterfactual fairness
  • Add expected behavior
    • Robustness: expected to equal the prior prediction, though in some cases it may be expected to deviate
    • Fairness: may deviate from the original prediction
  • Tests
    • Add tests for perturbations
    • Add tests for sensitivity testing schemes
  • Add visualization ability


