Scikit-learn-like Named Entity Recognition modules
Project description
➡️ sequence-learn
With sequence-learn, you can build models for named entity recognition as quickly as if you were building a scikit-learn classifier.
It takes embedded token lists as input, which you can create within a few lines of code using the embedders library. The labels are provided on token level, i.e., for each token you supply a label in a simple list.
Installation
You can set up this library either by running $ pip install sequencelearn, or by cloning this repository and running $ pip install -r requirements.txt inside it.
A sample installation including embedders (with spaCy for tokenization) would be:
$ conda create --name sequence-learn python=3.9
$ conda activate sequence-learn
$ pip install sequencelearn
$ pip install embedders
$ python -m spacy download en_core_web_sm
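To verify the setup, a quick sanity check can confirm that the packages import and the spaCy model loads (a hypothetical snippet, not part of the official docs):
# Hypothetical sanity check: make sure the installed packages import
# and the downloaded spaCy model is available.
import sequencelearn
import embedders
import spacy

nlp = spacy.load("en_core_web_sm")
print(sequencelearn.__name__, embedders.__name__, nlp.meta["lang"])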
Usage
Once you have installed the package(s), you can easily create the input for a text corpus and feed it, together with the required labels, into model training.
from embedders.extraction.contextual import TransformerTokenEmbedder
from sequencelearn.sequence_tagger import CRFTagger
corpus = [
"I went to Cologne in 2009",
"My favorite number is 41",
# ...
]
labels = [
["OUTSIDE", "OUTSIDE", "OUTSIDE", "CITY", "OUTSIDE", "YEAR"],
["OUTSIDE", "OUTSIDE", "OUTSIDE", "OUTSIDE", "DIGIT"],
# ...
]
# use embedders to easily convert your raw data
embedder = TransformerTokenEmbedder("distilbert-base-uncased", "en_core_web_sm")
embeddings = embedder.fit_transform(corpus)
# contains a list of ragged shape [num_texts, num_tokens (text-specific), embedding_dimension]
tagger = CRFTagger()
tagger.fit(embeddings, labels)
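To see the ragged structure in practice, you can inspect the embeddings and check that they line up with the labels (a minimal sketch, assuming embeddings behaves like the nested list described in the comment above):
# Sketch: inspect the ragged embeddings and verify token/label alignment.
for text, vectors, tags in zip(corpus, embeddings, labels):
    assert len(vectors) == len(tags)  # one embedding vector per token label
    print(text, "->", len(vectors), "tokens, embedding dimension", len(vectors[0]))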
Now that you've trained a tagger model, you can easily apply it to new text data.
sentence = ["My birthyear is 2002"]
print(tagger.predict(embedder.transform(sentence)))
# prints [['OUTSIDE', 'OUTSIDE', 'OUTSIDE', 'YEAR']]
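If you want to pair the predicted tags with their tokens, you can zip them together (a small sketch; whitespace splitting is used here only because it matches spaCy's tokenization for this simple sentence):
# Sketch: align predicted tags with the tokens of the input sentence.
predicted = tagger.predict(embedder.transform(sentence))[0]
for token, tag in zip(sentence[0].split(), predicted):
    print(f"{token}\t{tag}")
# My        OUTSIDE
# birthyear OUTSIDE
# is        OUTSIDE
# 2002      YEAR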
Roadmap
- Add documentation to existing models
- Add sequence-based models (e.g. CRF-based)
- Add sample projects
- Enable models to be converted to bytes / stored to disk
- Add test cases
If you want to have something added, feel free to open an issue.
Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
And please don't forget to leave a ⭐ if you like the work!
License
Distributed under the Apache 2.0 License. See LICENSE.txt for more information.
Contact
This library is developed and maintained by kern.ai. If you want to provide us with feedback or have some questions, don't hesitate to contact us. We're super happy to help ✌️
Acknowledgements
Huge thanks to Erik Ziegler for helping with the CRF implementation!
File details
Details for the file sequencelearn-0.0.9-py2.py3-none-any.whl.
File metadata
- Download URL: sequencelearn-0.0.9-py2.py3-none-any.whl
- Upload date:
- Size: 18.2 kB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.10.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | f97e22172d454729f2f95da8a29062fe897580e03e9ad918898bc96108fca32b
MD5 | 3a9396ab7855c8aa7281b93f0ae7f885
BLAKE2b-256 | a932656f8835746b2bf621fc1a0b15837989ac80f4c2e923412209891743021c