Embeddings from BioBERT

Project description

BioBERT Embeddings

Token and sentence level embeddings from BioBERT model (Biomedical Domain).

BERT, published by Google, is conceptually simple and empirically powerful: it obtained state-of-the-art results on eleven natural language processing tasks.

The objective of this project is to obtain word or sentence embeddings from BioBERT, a pre-trained model released by DMIS-lab. BioBERT is a BERT language model further trained on PubMed articles to adapt it to the biomedical domain.

Instead of building and fine-tuning an end-to-end NLP model, you can directly use word embeddings from biomedical BERT to build NLP models for various downstream tasks, e.g. biomedical text classification, text clustering, extractive summarization, or entity extraction.
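As a sketch of that downstream-task idea, sentence embeddings can serve directly as features for a simple classifier. The snippet below assumes an `embed` callable that returns a fixed-size vector (e.g. a thin wrapper around `BiobertEmbedding().sentence_vector`; the wrapper and the function names here are hypothetical, not part of this package) and uses a nearest-centroid rule:

```python
import numpy as np

def fit_centroids(texts, labels, embed):
    # Embed every training text and average the vectors per class label.
    X = np.vstack([np.asarray(embed(t), dtype=float) for t in texts])
    y = np.asarray(labels)
    return {lab: X[y == lab].mean(axis=0) for lab in set(labels)}

def predict(text, centroids, embed):
    # Assign the class whose centroid is closest in Euclidean distance.
    v = np.asarray(embed(text), dtype=float)
    return min(centroids, key=lambda lab: np.linalg.norm(v - centroids[lab]))
```

With real BioBERT embeddings, `embed` would simply be the package's `sentence_vector` method; any fixed-dimensional vector representation works the same way.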

Features

  • Provides an abstraction so you do not have to handle inference against the pre-trained BioBERT model yourself.
  • Requires only two lines of code to get sentence- or token-level encodings for a text.
  • Handles out-of-vocabulary (OOV) words inherently.
  • Downloads and installs the pre-trained BioBERT model on first initialization (see usage in the next section).

Install

pip install biobert-embedding==0.1.2

Example

The word embeddings generated are a list of 768-dimensional vectors, one per token.
The sentence embedding generated is a single 768-dimensional vector, the average of the token embeddings.
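The averaging relation between token and sentence embeddings can be sketched in isolation (a minimal numpy sketch; `mean_pool` is a hypothetical name, not part of the package API):

```python
import numpy as np

def mean_pool(word_embeddings):
    # Average a list of equal-length token vectors into one sentence vector.
    return np.mean(np.asarray(word_embeddings, dtype=float), axis=0)
```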

from biobert_embedding.embedding import BiobertEmbedding

text = "Breast cancers with HER2 amplification have a higher risk of CNS metastasis and poorer prognosis."

# Class initialization (pass model_path to use your own fine-tuned BERT model; defaults to model_path=None, which downloads BioBERT)
biobert = BiobertEmbedding()

word_embeddings = biobert.word_vector(text)
sentence_embedding = biobert.sentence_vector(text)

print("Text Tokens: ", biobert.tokens)
# Text Tokens:  ['breast', 'cancers', 'with', 'her2', 'amplification', 'have', 'a', 'higher', 'risk', 'of', 'cns', 'metastasis', 'and', 'poorer', 'prognosis', '.']

print('Shape of Word Embeddings: %d x %d' % (len(word_embeddings), len(word_embeddings[0])))
# Shape of Word Embeddings: 16 x 768

print('Shape of Sentence Embedding: %d' % len(sentence_embedding))
# Shape of Sentence Embedding: 768
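A common use of the sentence vector is measuring semantic similarity between two texts via cosine similarity. A minimal sketch, assuming the embeddings can be converted to float sequences (`cosine_similarity` is a helper defined here, not part of the package; the commented usage is hypothetical):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two equal-length embedding vectors.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical usage with two BioBERT sentence embeddings:
# sim = cosine_similarity(biobert.sentence_vector(text_1),
#                         biobert.sentence_vector(text_2))
```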

Download files

Source Distribution

biobert-embedding-0.1.2.tar.gz (4.8 kB)

File details

Details for the file biobert-embedding-0.1.2.tar.gz.

File metadata

  • Download URL: biobert-embedding-0.1.2.tar.gz
  • Upload date:
  • Size: 4.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/45.2.0.post20200210 requests-toolbelt/0.9.1 tqdm/4.32.2 CPython/3.6.5

File hashes

Hashes for biobert-embedding-0.1.2.tar.gz:

  • SHA256: c38e44eea552945277afdad59403a15410f97a5ea32fdd300b9e90ff2a095e1f
  • MD5: 5ba27d1d39460eabc3dd59cb3186b50b
  • BLAKE2b-256: d2f0f5bd3fd4a0bcef4d85e5e82347ae73d376d68dc8086afde75838ba0473a2

