Scikit-Learn like Named Entity Recognition modules

Project description

sequence-learn · Python 3.9 · PyPI 0.0.9

➡️ sequence-learn

With sequence-learn, you can build models for named entity recognition as quickly as if you were building a sklearn classifier.

It takes embedded token lists as input, which you can create in a few lines of code using the embedders library. The labels are provided on token level, i.e., you supply one label per token in a simple list, as illustrated below.
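
For illustration, a single text and its labels line up one-to-one by token position (a minimal sketch; the tokens shown here assume spaCy-style tokenization and the variable names are only illustrative):

tokens = ["I", "went", "to", "Cologne", "in", "2009"]
labels = ["OUTSIDE", "OUTSIDE", "OUTSIDE", "CITY", "OUTSIDE", "YEAR"]
# one label per token, aligned by position
assert len(tokens) == len(labels)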

Installation

You can set up this library either by running $ pip install sequencelearn, or by cloning this repository and running $ pip install -r requirements.txt inside it.

A sample installation including embedders (with spaCy for tokenization) would be:

$ conda create --name sequence-learn python=3.9
$ conda activate sequence-learn
$ pip install sequencelearn
$ pip install embedders
$ python -m spacy download en_core_web_sm
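
As a quick sanity check (a minimal sketch; the module names below are just the import names of the packages installed above), you can verify that everything is importable:

$ python -c "import sequencelearn, embedders, spacy; spacy.load('en_core_web_sm'); print('environment ready')"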

Usage

Once you have installed the package(s), you can easily create embeddings for a text corpus and feed them, together with the required labels, into model training.

from embedders.extraction.contextual import TransformerTokenEmbedder
from sequencelearn.sequence_tagger import CRFTagger

corpus = [
    "I went to Cologne in 2009",
    "My favorite number is 41",
    # ...
]

labels = [
    ["OUTSIDE", "OUTSIDE", "OUTSIDE", "CITY", "OUTSIDE", "YEAR"],
    ["OUTSIDE", "OUTSIDE", "OUTSIDE", "OUTSIDE", "DIGIT"],
    # ...
]

# use embedders to easily convert your raw data
embedder = TransformerTokenEmbedder("distilbert-base-uncased", "en_core_web_sm")

embeddings = embedder.fit_transform(corpus)
# contains a list of ragged shape [num_texts, num_tokens (text-specific), embedding_dimension]

tagger = CRFTagger()
tagger.fit(embeddings, labels)

Now that you've trained a tagger model, you can easily apply it to new text data.

sentence = ["My birthyear is 2002"]
print(tagger.predict(embedder.transform(sentence)))
# prints [['OUTSIDE', 'OUTSIDE', 'OUTSIDE', 'YEAR']]
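
If you have a held-out, labeled split, you can also score the token-level predictions with standard classification metrics (a minimal sketch, assuming scikit-learn is installed and that the example sentence and labels below stand in for your own held-out data):

from itertools import chain
from sklearn.metrics import classification_report

held_out_corpus = ["I was born in Berlin in 1994"]
held_out_labels = [["OUTSIDE", "OUTSIDE", "OUTSIDE", "OUTSIDE", "CITY", "OUTSIDE", "YEAR"]]

predictions = tagger.predict(embedder.transform(held_out_corpus))

# flatten the per-text label lists so standard classification metrics apply
y_true = list(chain.from_iterable(held_out_labels))
y_pred = list(chain.from_iterable(predictions))
print(classification_report(y_true, y_pred, zero_division=0))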

Roadmap

  • Add documentation to existing models
  • Add sequence-based models (e.g. CRF-based)
  • Add sample projects
  • Enable models to be converted to bytes / stored to disk
  • Add test cases

If you want to have something added, feel free to open an issue.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

And please don't forget to leave a ⭐ if you like the work!

License

Distributed under the Apache 2.0 License. See LICENSE.txt for more information.

Contact

This library is developed and maintained by kern.ai. If you want to provide us with feedback or have some questions, don't hesitate to contact us. We're super happy to help ✌️

Acknowledgements

Huge thanks to Erik Ziegler for helping with the CRF implementation!

