A Python package with ready-to-use models for various NLP tasks and text preprocessing utilities. The implementation allows fine-tuning.
MIM NLP
With this package you can easily use pre-trained models and fine-tune them, as well as create and train your own neural networks.
Below, we list NLP tasks and models that are available:
- Classification
  - Neural Network
  - SVM
- Regression
  - Neural Network
- Seq2Seq
  - Summarization (Neural Network)
It comes with utilities for text pre-processing such as:
- Text cleaning
- Lemmatization
- Deduplication
Installation
We recommend installing with pip.
pip install mim-nlp
The package comes with the following extras (optional dependencies for given modules):
- svm - simple SVM model for classification
- classifier - classification models: SVM, neural networks
- regressor - regression models
- preprocessing - text cleaning, lemmatization and deduplication
- seq2seq - Seq2Seq and Summarizer models
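To install the package with only selected extras, the standard pip extras syntax should work (the extra names are taken from the list above; this relies on ordinary pip behaviour rather than anything package-specific), e.g.:
pip install "mim-nlp[classifier,preprocessing]"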
Usage
Examples can be found in the notebooks directory.
Model classes
- classifier.nn.NNClassifier - Neural Network Classifier
- classifier.svm.SVMClassifier - Support Vector Machine Classifier
- classifier.svm.SVMClassifierWithFeatureSelection - SVMClassifier with an additional feature selection step
- regressor.AutoRegressor - regressor based on transformers' Auto Classes
- regressor.NNRegressor - Neural Network Regressor
- seq2seq.AutoSummarizer - summarizer based on transformers' Auto Classes
Interface
All the model classes have a common interface:
- fit
- predict
- save
- load
and specific additional methods.
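Below is a minimal sketch of how this shared interface can be used. The import path (mim_nlp.classifier.svm) is inferred from the class list above; the constructor arguments and the assumption that load is a classmethod are illustrative guesses, not documented signatures.

# Minimal sketch of the shared fit/predict/save/load interface.
# Import path and constructor call are assumptions based on the class list above.
from mim_nlp.classifier.svm import SVMClassifier

texts = ["Great product, would buy again.", "Terrible service, would not recommend."]
labels = [1, 0]

model = SVMClassifier()                    # hypothetical default constructor
model.fit(texts, labels)                   # train on texts and labels
predictions = model.predict(["Amazing!"])  # predict labels for new texts

model.save("svm_model")                    # persist the trained model
model = SVMClassifier.load("svm_model")    # load is assumed here to be a classmethod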
Text pre-processing
- preprocessing.TextCleaner - define a pipeline for text cleaning; supports concurrent processing
- preprocessing.lemmatize - lemmatize text in Polish with Morfeusz
- preprocessing.Deduplicator - find near-duplicate texts (depending on threshold) with the Jaccard index for n-grams
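To give an intuition for the similarity measure behind the Deduplicator, here is a self-contained sketch of the Jaccard index computed on character n-grams. It illustrates the concept only and does not use the package's API; the package may tokenize n-grams differently.

# Jaccard index over character n-grams: |intersection| / |union| of n-gram sets.
def ngrams(text: str, n: int = 3) -> set[str]:
    """Return the set of character n-grams of the given text."""
    return {text[i : i + n] for i in range(len(text) - n + 1)}

def jaccard_index(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of the n-gram sets of two texts (1.0 = identical sets)."""
    x, y = ngrams(a, n), ngrams(b, n)
    union = x | y
    return len(x & y) / len(union) if union else 1.0

# Two texts count as near-duplicates when their index exceeds a chosen threshold.
print(jaccard_index("the quick brown fox", "the quick brown cat"))  # -> 0.7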
Development
Remember to use a separate environment for each project. Run the commands below inside the project's environment.
Dependencies
We use poetry for dependency management. If you have never used it, consult the poetry documentation for installation guidelines and basic usage instructions.
poetry install --with dev
To fix the Failed to unlock the collection! error, or if the package installation gets stuck, execute the command below:
export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring
Git hooks
We use pre-commit for git hook management. If you have never used it, consult the pre-commit documentation for installation guidelines and basic usage instructions.
pre-commit install
There are two hooks available:
- isort - runs isort for both .py files and notebooks. It fails if any changes are made, so you have to run git add and git commit once again.
- Strip notebooks - produces stripped versions of notebooks in the stripped directory.
Tests
pytest
Linting
We use isort and flake8 along with nbqa to ensure code quality.
The appropriate options are set in configuration files.
You can run them with:
isort .
nbqa isort notebooks
and
flake8 .
nbqa flake8 notebooks --nbqa-shell
Code formatting
You can run black to format code (including notebooks):
black .
New version release
In order to add the next version of the package to PyPI, follow these steps:
- First, increment the package version in pyproject.toml.
- Then build the new version: run poetry build in the root directory.
- Finally, upload the two newly created files to PyPI: poetry publish.
- If you get the Invalid or non-existent authentication information. error, add your PyPI token to poetry: poetry config pypi-token.pypi <my-token>.