Named_Entity_Recognition_BERT_Multilingual_Library_LUX
Overview
Named_Entity_Recognition_BERT_Multilingual_Library_LUX is a powerful and flexible library for Named Entity Recognition (NER) tasks using BERT models. This library supports multilingual NER and is suitable for key information extraction tasks across various domains such as biomedical, environmental, and technological.
The library simplifies NER tasks by providing an easy-to-use pipeline for loading data, training models, and making predictions. It is designed for developers, researchers, and data scientists looking for a robust NER solution. A worked example of the library in use is available in the accompanying Kaggle notebook.
Features
- Multilingual Support: Leverage the power of BERT for NER tasks in multiple languages.
- Flexible Input Format: Works with CoNLL format data.
- Easy Integration: Provides a simple `NERPipeline` class for seamless integration into your projects.
- Comprehensive Metrics: Evaluate your models with precision, recall, F1-score, and accuracy.
- Pretrained Models: Supports any BERT-based pretrained models.
Installation
Install the library using pip:

```shell
pip install named-entity-recognition-bert-multilingual-library-lux
```
Usage
Example Usage
Here is a complete example of how to use the library for training and predicting:
```shell
# Download the example dataset (the BioCreative II gene mention corpus)
git clone https://github.com/spyysalo/bc2gm-corpus.git
```

```python
from Named_Entity_Recognition_BERT_Multilingual_Library_LUX import NERPipeline

# Initialize the pipeline with a pretrained BERT checkpoint
pipeline = NERPipeline(pretrained_model="bert-base-cased")

# Prepare data
train_dataset, val_dataset, test_dataset = pipeline.prepare_data(
    "./bc2gm-corpus/conll/train.tsv",
    "./bc2gm-corpus/conll/devel.tsv",
    "./bc2gm-corpus/conll/test.tsv",
)

# Initialize the model with one output per label
pipeline.initialize_model(num_labels=len(pipeline.label_list))

# Train the model
pipeline.train(train_dataset, val_dataset)

# Evaluate on the test set
test_metrics = pipeline.test(test_dataset)
print(test_metrics)

# Predict on a new sentence
predictions = pipeline.predict("BRCA1 is a gene associated with breast cancer.")
print("\nPredictions on New Sentence:")
for token, label in predictions:
    print(f"{token} | {label}")
```
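The `predict` call returns (token, label) pairs; to recover full entity mentions, the BIO tags can be grouped into spans. A minimal stand-alone sketch of that grouping (pure Python, independent of the library; a standard BIO tag scheme is assumed):

```python
def bio_to_spans(pairs):
    """Group (token, BIO-label) pairs into (entity_type, text) spans."""
    spans, current_type, current_tokens = [], None, []
    for token, label in pairs:
        if label.startswith("B-"):
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_type == label[2:]:
            current_tokens.append(token)
        else:  # "O", or a tag that breaks the current span
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_tokens:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

pairs = [("BRCA1", "B-GENE"), ("is", "O"), ("a", "O"), ("gene", "O"),
         ("associated", "O"), ("with", "O"),
         ("breast", "B-DISEASE"), ("cancer", "I-DISEASE"), (".", "O")]
print(bio_to_spans(pairs))
# → [('GENE', 'BRCA1'), ('DISEASE', 'breast cancer')]
```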
CoNLL Data Format
The input data should be in CoNLL format:
```
Token1 Label1
Token2 Label2
Token3 Label3
Token4 Label4
```
Example:
```
BRCA1 B-GENE
is O
a O
gene O
associated O
with O
breast B-DISEASE
cancer I-DISEASE
. O
```
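For reference, data in this format can be parsed into sentences of (token, label) pairs with a few lines of plain Python. This is a hypothetical stand-alone sketch, not the library's `load_data` implementation, which may differ in details:

```python
def parse_conll(text):
    """Parse whitespace-separated token/label lines into sentences.
    Blank lines separate sentences, as is conventional in CoNLL files."""
    sentences, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:  # a blank line ends the current sentence
            if current:
                sentences.append(current)
                current = []
            continue
        token, label = line.split()[:2]
        current.append((token, label))
    if current:
        sentences.append(current)
    return sentences

sample = """BRCA1 B-GENE
is O
a O
gene O
"""
print(parse_conll(sample))
# → [[('BRCA1', 'B-GENE'), ('is', 'O'), ('a', 'O'), ('gene', 'O')]]
```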
Key Components
1. NERPipeline
The main class providing methods for:
- Data Preparation: Converts CoNLL format to a dataset suitable for BERT.
- Model Initialization: Loads a pretrained BERT model for NER.
- Training: Fine-tunes the model on your data.
- Prediction: Predicts labels for new sentences.
- Evaluation: Computes evaluation metrics (precision, recall, F1, etc.).
2. Helper Functions
The library also includes utility functions for advanced users:
- `load_data`: Load and parse CoNLL format data.
- `convert_to_hf_format`: Convert data to Hugging Face dataset format.
- `compute_metrics`: Evaluate predictions using `seqeval`.
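For token classification, the Hugging Face dataset format is typically a record per sentence with a list of tokens and a parallel list of integer label ids. As an illustration of what a converter along the lines of `convert_to_hf_format` might produce (an assumption about this library's internals; the actual field names and types may differ):

```python
def to_hf_records(sentences, label_list):
    """Map sentences of (token, label) pairs to tokens/ner_tags records,
    with labels encoded as indices into label_list."""
    label2id = {label: i for i, label in enumerate(label_list)}
    return [
        {
            "tokens": [tok for tok, _ in sent],
            "ner_tags": [label2id[lab] for _, lab in sent],
        }
        for sent in sentences
    ]

label_list = ["O", "B-GENE", "I-GENE"]
sentences = [[("BRCA1", "B-GENE"), ("is", "O")]]
print(to_hf_records(sentences, label_list))
# → [{'tokens': ['BRCA1', 'is'], 'ner_tags': [1, 0]}]
```

A list of such records can be turned into a `datasets.Dataset` via `Dataset.from_list`.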
Evaluation Metrics
The library uses the `seqeval` library to compute the following metrics:
- Precision
- Recall
- F1-Score
- Accuracy
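These are entity-level scores: `seqeval` counts a predicted entity as correct only when both its type and its exact token span match the gold annotation. A compact pure-Python illustration of the idea (a simplified sketch, not the library's or `seqeval`'s implementation; `seqeval` additionally handles IOB scheme variants and per-type breakdowns):

```python
def entity_spans(labels):
    """Extract (type, start, end) spans from one BIO tag sequence."""
    spans = []
    start = etype = None
    for i, lab in enumerate(labels + ["O"]):  # sentinel flushes the final span
        inside = lab.startswith("I-") and lab[2:] == etype
        if start is not None and not inside:
            spans.append((etype, start, i))
            start = etype = None
        if lab.startswith("B-"):
            start, etype = i, lab[2:]
    return set(spans)

def entity_f1(y_true, y_pred):
    """Micro-averaged entity-level F1 over lists of tag sequences."""
    true_spans = [entity_spans(t) for t in y_true]
    pred_spans = [entity_spans(p) for p in y_pred]
    tp = sum(len(t & p) for t, p in zip(true_spans, pred_spans))
    n_pred = sum(len(p) for p in pred_spans)
    n_true = sum(len(t) for t in true_spans)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_true if n_true else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [["B-GENE", "O", "B-DISEASE", "I-DISEASE"]]
y_pred = [["B-GENE", "O", "B-DISEASE", "O"]]
print(entity_f1(y_true, y_pred))
# → 0.5  (the truncated DISEASE span does not count as a match)
```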
Dependencies
- torch
- transformers
- datasets
- evaluate
- numpy
- seqeval
Contributing
We welcome contributions! Please feel free to open issues or submit pull requests.