NLPkit - Transformers for text classification
A library of scikit-learn compatible text transformers, ready to be integrated into an NLP pipeline for various classification tasks.
Project structure
.
├── nlpkit
│   ├── __init__.py
│   └── nlp_feature_extraction
│       ├── __init__.py
│       ├── liwc
│       ├── word_embeddings_features.py
│       ├── syntax_features.py
│       ├── ner_features.py
│       ├── pos_features.py
│       ├── liwc_features.py
│       ├── text_statistics_features.py
│       └── tests
│           ├── __init__.py
│           ├── test_data
│           ├── test_liwc_feature_extraction.py
│           ├── test_ner_feature_extraction.py
│           ├── test_pos_feature_extraction.py
│           ├── test_syntax_feature_extraction.py
│           └── test_word_embeddings_feature_extraction.py
├── examples
├── README.md
├── LICENCE
├── requirements.txt
└── setup.py
Getting Started
These instructions will get you a copy of the project up and running on your local machine.
Prerequisites
- Python 3.6
- Stanford CoreNLP Server (for some transformers)
Stanford CoreNLP Server with Docker
Stanford CoreNLP is required for constituency parsing, POS and NER tagging.
The easiest way to get a CoreNLP server running is to use Docker. You can find a Dockerfile and setup instructions at Stanford CoreNLP Server - Docker.
List of transformers
- POSTagPreprocessor: Pre-processes text documents by tagging each word in the form word_TAG, e.g. what_WP. Can be used to generate POS-tagged n-grams
- NERPreprocessor: Pre-processes text documents by replacing named entities with generic tags, e.g. PERSON, LOCATION
- WordEmbedsDocVectorizer: Converts text documents to word2vec-based document vectors. It maps each word of a document to its word2vec vector and averages the vectors element-wise to produce a single document representation
- POSExtractor: Extracts Parts of Speech (POS) counts for a collection of text documents
- CFGExtractor: Extracts the Context Free Grammar (CFG) production rules found in a collection of text documents
- NamedEntitiesCounter: Extracts Named Entity counts per entity type (e.g. PERSON) for a collection of text documents
- LIWCExtractor: Extracts proportions of words that fall in the various LIWC categories for a collection of text documents
- TextStatsExtractor: Calculates various text statistics and readability scores for a collection of text documents
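The averaging idea behind WordEmbedsDocVectorizer — map each word to its embedding, then average element-wise — can be sketched as follows. This is a toy illustration with a hand-made embedding table, not nlpkit's implementation; a real model would come from e.g. gensim:

```python
import numpy as np

# Hand-made embedding table standing in for a trained word2vec model
# (hypothetical data for illustration only).
embeddings = {
    "good": np.array([1.0, 0.0]),
    "movie": np.array([0.0, 1.0]),
}

def doc_vector(tokens, embeddings, dim=2):
    """Average the vectors of the in-vocabulary tokens of one document."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return np.zeros(dim)  # all tokens were out of vocabulary
    return np.mean(vecs, axis=0)

# "good movie" maps to the element-wise mean of its two word vectors
print(doc_vector(["good", "movie", "oov"], embeddings))  # [0.5 0.5]
```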
Usage
All the custom transformers extend scikit-learn's BaseEstimator and TransformerMixin and implement the fit and transform methods.
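That interface can be illustrated with a minimal transformer of the same shape. This is a hypothetical example, not part of nlpkit:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class DocLengthExtractor(BaseEstimator, TransformerMixin):
    """Toy transformer: extracts one feature, the token count per document."""

    def fit(self, X, y=None):
        return self  # stateless, nothing to learn

    def transform(self, X):
        return np.array([[len(doc.split())] for doc in X])

corpus = ["a short text", "a slightly longer example text"]
features = DocLengthExtractor().fit_transform(corpus)
```

Because it follows the fit/transform protocol, it drops into any scikit-learn pipeline, just like the nlpkit transformers below.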
# POSExtractor
from nltk.parse.corenlp import CoreNLPParser
# POSExtractor is imported from nlpkit's nlp_feature_extraction package

sf_parser = CoreNLPParser(url='http://localhost:9000/', tagtype='pos')
pos_extractor = POSExtractor(sf_parser)
X = pos_extractor.fit_transform(corpus)
They can also be used in pipelines, e.g.:
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# word2vec_model is a pre-loaded word embedding model
pipeline = Pipeline([
    ('pre', TextPreprocessor(stemming=False)),
    ('w2v', WordEmbedsDocVectorizer(word2vec_model, tfidf_weights=True)),
    ('clf', SVC(kernel='linear', C=1, probability=True))
])
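Once assembled, such a pipeline is fitted and queried like any scikit-learn estimator. A runnable sketch, with a stock TfidfVectorizer standing in for the nlpkit transformers and a hypothetical toy corpus:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# Stand-in pipeline: TfidfVectorizer takes the place the nlpkit
# transformers occupy in the snippet above.
pipeline = Pipeline([
    ('vec', TfidfVectorizer()),
    ('clf', SVC(kernel='linear', C=1)),
])

# Tiny labelled toy corpus (hypothetical data)
corpus = [
    "this claim is true",
    "this claim is false",
    "a verified true story",
    "a fabricated false story",
]
labels = [1, 0, 1, 0]

pipeline.fit(corpus, labels)
preds = pipeline.predict(["another true story"])
```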
For more, you can run the examples included in the examples folder.
Tests
The pytest framework is used for unit testing. All of the custom text transformers in this project come with an extensive set of unit tests.
To run the tests use:
pytest
Project repository
https://github.com/evanll/nlpkit-ml
Author
Written by Evan Lalopoulos (evan.lalopoulos.2017@my.bristol.ac.uk) as part of his thesis on fake news detection using NLP.
Evan Lalopoulos - evanll
Hashes for nlpkit_ml-0.0.1-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 4e3a205ae78d7e5670fa28a6ec86734fb45ba17c67f361b9653257fa41b75e61 |
| MD5 | 8289d9f02cfdc56c33f7f90981ba2c7e |
| BLAKE2b-256 | bfca25e7d00c385fb4768e327b3fe6930fdf8fcfd843e459e40aae8441acab35 |