
Download NLP4BIA benchmarks and load datasets in their format


NLP4BIA Library


This repository provides a Python library for loading, processing, and utilizing biomedical datasets curated by the NLP4BIA research group at the Barcelona Supercomputing Center (BSC). The datasets are specifically designed for natural language processing (NLP) tasks in the biomedical domain.


Installation

pip install nlp4bia

Introduction

NLP4BIA is a Python package for working with curated biomedical NLP datasets in Spanish. Developed by the NLP4BIA research group at the Barcelona Supercomputing Center (BSC), it provides:

  • Dataset Loaders for public benchmarks like Distemist, Meddoplace, Medprocner, Symptemist.
  • Preprocessing Utilities such as deduplication, PDF parsing, and more.
  • Linking Tools to perform dense retrieval against medical gazetteers (e.g., SNOMED CT) using SentenceTransformers.

Whether you’re training new NLP models on Spanish clinical text, sanitizing raw medical documents, or performing terminology linking, NLP4BIA aims to streamline your workflow.

Available Dataset Loaders

The library currently supports the following dataset loaders, which are part of public benchmarks (a short loading sketch follows the list):

1. Distemist

  • Description: A dataset for disease mention recognition and normalization in Spanish medical texts.
  • Zenodo Repository: Distemist Zenodo

2. Meddoplace

  • Description: A dataset for place name recognition in Spanish medical texts.
  • Zenodo Repository: Meddoplace Zenodo

3. Medprocner

  • Description: A dataset for procedure name recognition in Spanish medical texts.
  • Zenodo Repository: Medprocner Zenodo

4. Symptemist

  • Description: A dataset for symptom mention recognition in Spanish medical texts.
  • Zenodo Repository: Symptemist Zenodo
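
All loaders follow the same interface. The snippet below is a minimal loading sketch: the DistemistLoader and MedprocnerLoader calls mirror the examples shown later in this README, while the commented-out Meddoplace and Symptemist imports are assumptions that follow the same module naming pattern and should be verified against the package.

from nlp4bia.datasets.benchmark.distemist import DistemistLoader
from nlp4bia.datasets.benchmark.medprocner import MedprocnerLoader
# Assumed analogous module paths (verify before use):
# from nlp4bia.datasets.benchmark.meddoplace import MeddoplaceLoader
# from nlp4bia.datasets.benchmark.symptemist import SymptemistLoader

# Each loader exposes its data as a pandas DataFrame via the .df attribute
df_dis = DistemistLoader(lang="es", download_if_missing=True).df
df_proc = MedprocnerLoader().df
print(df_dis.shape, df_proc.shape)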

Dataset Columns

The dataset DataFrames contain the following columns (a short inspection sketch follows the list):

  • filenameid (e.g., "12345_678"): Unique ID combining the filename and character offsets.
  • mention_class (e.g., "ENFERMEDAD"): Class of the mention (disease, symptom, procedure, etc.).
  • span (e.g., "diabetes tipo 2"): Text span corresponding to the mention.
  • code (e.g., "44054006"): Normalized SNOMED CT code for the mention.
  • sem_rel ("EXACT", "NARROW", or "COMPOSITE"): EXACT means the mention matches the assigned term exactly; NARROW means the exact concept is not in the ontology, so the mention is mapped to a broader (parent) term; COMPOSITE means the mention needs more than one code to be defined (e.g., 1243535+13452543).
  • is_abbreviation (True/False): Whether the mention is an abbreviation.
  • is_composite (True/False): Whether the mention is a composite term.
  • needs_context (True/False): Whether extra context is required to interpret the span.
  • extension_esp (e.g., "info adicional"): Extra fields specific to Spanish texts.
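
As a quick illustration, these columns can be inspected with plain pandas once a benchmark is loaded. This is only a sketch; it assumes the loader's DataFrame exposes the columns exactly as listed above.

from nlp4bia.datasets.benchmark.distemist import DistemistLoader

df = DistemistLoader(lang="es", download_if_missing=True).df
print(df["mention_class"].value_counts())   # distribution of mention classes
df_exact = df[df["sem_rel"] == "EXACT"]     # keep only exactly normalized mentions
print(df_exact[["span", "code"]].head())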

Gazetteer Columns

The gazetteer DataFrames contain the following columns (a filtering sketch follows the list):

  • code (e.g., "44054006"): SNOMED CT code for the term.
  • language (e.g., "es", "en"): Language of the term.
  • term (e.g., "diabetes"): The term itself (string).
  • semantic_tag (e.g., "disorder"): Semantic tag associated with the term.
  • mainterm (True/False): Whether this is a primary ("preferred") term or a synonym.
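
Gazetteers are loaded the same way. The sketch below filters a gazetteer to Spanish preferred terms; it assumes the MedprocnerGazetteer class used later in this README and the columns listed above.

from nlp4bia.datasets.benchmark.medprocner import MedprocnerGazetteer

gaz = MedprocnerGazetteer().df
# Keep Spanish preferred terms only (mainterm may be stored as bool or 0/1)
preferred_es = gaz[(gaz["language"] == "es") & (gaz["mainterm"] == True)]
print(preferred_es[["code", "term", "semantic_tag"]].drop_duplicates("code").head())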

Quick Start Guide

Example Usage

Dataset Loaders

Here's how to use one of the dataset loaders, such as DistemistLoader:

from nlp4bia.datasets.benchmark.distemist import DistemistLoader

# Initialize loader
distemist_loader = DistemistLoader(lang="es", download_if_missing=True)

# Load and preprocess data
dis_df = distemist_loader.df
print(dis_df.head())

Dataset folders are automatically downloaded and extracted to the ~/.nlp4bia directory.

Preprocessor

Deduplication

from nlp4bia.preprocessor.deduplicator import HashDeduplicator

# Define the list of files to deduplicate
ls_files = ["path/to/file1.txt", "path/to/file2.txt"]

# Instantiate the deduplicator. It deduplicates the files using 8 cores.
hd = HashDeduplicator(ls_files, num_processes=8)

# Deduplicate the files and save the results to a CSV file
hd.get_deduplicated_files("path/to/deduplicated_contents.csv")

Document Parser

PDFs

from nlp4bia.preprocessor.pdfparser import PDFParserMuPDF

# Define the path to the PDF file
pdf_path = "path/to/file.pdf"

# Instantiate the PDF parser
pdf_parser = PDFParserMuPDF(pdf_path)

# Extract the text from the PDF file
pdf_text = pdf_parser.extract_text()
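
To convert a whole folder of PDFs to plain text, the same parser can be applied file by file. The sketch below only reuses the PDFParserMuPDF.extract_text call shown above; the directory paths are hypothetical.

from pathlib import Path
from nlp4bia.preprocessor.pdfparser import PDFParserMuPDF

pdf_dir = Path("path/to/pdfs")   # hypothetical input directory
out_dir = Path("path/to/txt")    # hypothetical output directory
out_dir.mkdir(parents=True, exist_ok=True)

for pdf_path in pdf_dir.glob("*.pdf"):
    text = PDFParserMuPDF(str(pdf_path)).extract_text()
    (out_dir / (pdf_path.stem + ".txt")).write_text(text, encoding="utf-8")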

Linking

Perform dense retrieval using the DenseRetriever class:

from sentence_transformers import SentenceTransformer
from nlp4bia.datasets.benchmark.medprocner import MedprocnerLoader, MedprocnerGazetteer
from nlp4bia.linking.retrievers import DenseRetriever

# Load the dataset and gazetteer
df_proc = MedprocnerLoader().df
gaz_proc = MedprocnerGazetteer().df
gaz_proc = gaz_proc.sort_values(by=["code", "mainterm"], 
                                ascending=[True, False]) # Make sure mainterms are first

# Load the model
model_name = "path/to/model"
st_model = SentenceTransformer(model_name)

# Create the vector database (only the first 100 gazetteer terms here, as a quick demo)
vector_db = st_model.encode(gaz_proc["term"].tolist()[:100], 
                            show_progress_bar=True, 
                            convert_to_tensor=True, 
                            normalize_embeddings=True)

# Initialize the retriever
biencoder = DenseRetriever(vector_db=vector_db, model=st_model)
biencoder.retrieve_top_k(["reparación de un desprendimiento de la retina"], 
                          gaz_proc.iloc[:100], 
                          k=10, 
                          input_format="text")

Perform full bi-encoder/cross-encoder linking using the BECELinker class (documentation available in nlp4bia/docs/BECELinker.md):

import pandas as pd
from sentence_transformers import SentenceTransformer, CrossEncoder
from nlp4bia.linking.BECELinker import BECELinker

# 1) Load your gazetteer as a DataFrame with "term" and "code" columns:
# gaz_proc = pd.read_csv("medproc_gazetteer.csv")  # must have columns ["term","code"]

# You can also use one of the preprocessed gazetteers from nlp4bia:
from nlp4bia.datasets.benchmark.medprocner import MedprocnerLoader, MedprocnerGazetteer
gaz_proc = MedprocnerGazetteer().df

# 2) Prepare or load your bi-encoder & cross-encoder:
biencoder_path = "/path/to/bi_encoder_checkpoint"  # or a preloaded SentenceTransformer object (e.g. ICB-UMA/ClinLinker-KB-GP)
crossencoder_path = "/path/to/cross_encoder_checkpoint"  # or a preloaded CrossEncoder object

# 3) Instantiate BECELinker:
linker = BECELinker(
    df_gazetteer=gaz_proc,
    biencoder_model_or_path=biencoder_path,           # can be a model instance or a path string
    crossencoder_model_or_path=crossencoder_path,     # can be a model instance or path string
    normalize_embeddings=True,
    show_progress_bar=True
)

# 4) Prepare a list of mention strings you want to link:
mentions = ["exploración neurológica", "incisión cutánea", "vacuna"]

# 5) Link the list of mentions
# (alternatively, take spans from a loaded benchmark, e.g.:
#  mentions = MedprocnerLoader().df["span"].tolist()[:10])
results = linker.link(
    mentions=mentions,
    n_candidates=200,
    top_k=5,
    return_documents=True
)

# 6) Inspect the top candidates for the first mention:
res0 = results[0]
print("Mention:", res0["mention"])
for rank, (term, code, score) in enumerate(zip(res0["terms"], res0["codes"], res0["similarity"]), start=1):
    print(f" {rank:02d}. {term} (ID: {code}) → score: {score:.4f}")

Contributing

Contributions to expand the dataset loaders or improve existing functionality are welcome! Please open an issue or submit a pull request.


License

This project is licensed under the MIT License. See the LICENSE file for details.


References

If you use this library or its datasets in your research, please cite the corresponding Zenodo repositories or related publications.
