Pollutes documents with terms biased towards specific genres
Project description
Document Polluter
Overview
Document Polluter replaces gendered words in documents to create test data for identifying bias in machine learning models.
Installation
document-polluter is available on PyPI: http://pypi.python.org/pypi/document-polluter
Install via pip
$ pip install document-polluter
Install via easy_install
$ easy_install document-polluter
Install from repo
git repo <https://github.com/gregology/document-polluter>
$ git clone --recursive git://github.com/gregology/document-polluter.git
$ cd document-polluter
$ python setup.py install
Basic usage
>>> from document_polluter import DocumentPolluter
>>> documents = ['she shouted', 'my son', 'the parent']
>>> dp = DocumentPolluter(documents=documents, genre='gender')
>>> print(dp.polluted_documents['female'])
['she shouted', 'my daughter', 'the mother']
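The substitution shown above can be approximated with a simple word map. This is a minimal sketch of the idea, not the library's implementation; the word list and the `pollute` helper below are hypothetical, whereas the real package ships its own genre word lists.

```python
import re

# Hypothetical male-to-female word map; the actual library bundles its own lists.
MALE_TO_FEMALE = {
    "he": "she",
    "son": "daughter",
    "father": "mother",
}

def pollute(document, mapping):
    """Replace each whole word found in `mapping` with its counterpart."""
    def swap(match):
        word = match.group(0)
        return mapping.get(word.lower(), word)
    return re.sub(r"\b\w+\b", swap, document)

documents = ["she shouted", "my son", "the father"]
polluted = [pollute(d, MALE_TO_FEMALE) for d in documents]
print(polluted)  # ['she shouted', 'my daughter', 'the mother']
```

Feeding both the original and polluted documents to a model and comparing its outputs is one way to surface gender-correlated behaviour.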
Running tests
$ python document_polluter/tests.py
Hashes for document_polluter-0.0.2-py3.8.egg

| Algorithm | Hash digest |
|---|---|
| SHA256 | f89d576cd828b6d169951f0db936ec44b971fc0150b2f3bf1ca9743121126c16 |
| MD5 | bb03a59faff1bc3b88ddabe96207e711 |
| BLAKE2b-256 | 4787378fb26f33348f47bada8791ffc1e1ae7c8c8ed8ec519b9e131f311f0808 |