
BERT token-level embeddings with MXNet

Project description

Bert Embeddings


BERT, published by Google, is a new way to obtain pre-trained language model word representations. Many NLP tasks benefit from BERT to achieve state-of-the-art (SOTA) results.

The goal of this project is to obtain sentence and token embeddings from BERT's pre-trained model. This way, instead of building and fine-tuning an end-to-end NLP model, you can build your model directly on top of the sentence or token embeddings.

This project is implemented with @MXNet. Special thanks to the @gluon-nlp team.

Install

pip install bert-embedding
pip install https://github.com/dmlc/gluon-nlp/tarball/master
# If you want to run on a GPU machine, install `mxnet-cu92` (the CUDA 9.2 build).
pip install mxnet-cu92

This project uses APIs from gluonnlp==0.5.1, which has not been released yet. Once 0.5.1 is released, installing gluonnlp from source will no longer be necessary.
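
To double-check which gluonnlp build you ended up with, you can print its version (a minimal check; the exact dev version string from master will vary):

import gluonnlp
print(gluonnlp.__version__)  # expect 0.5.1, or a 0.5.1 dev build when installed from master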

Usage

from bert_embedding import BertEmbedding

bert_abstract = """We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
 Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
 As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. 
BERT is conceptually simple and empirically powerful. 
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%."""
sentences = bert_abstract.split('\n')
bert = BertEmbedding()
result = bert.embedding(sentences)

If you want to use a GPU, import mxnet and set the context:

import mxnet as mx
from bert_embedding import BertEmbedding

...

ctx = mx.gpu(0)
bert = BertEmbedding(ctx=ctx)
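
If you are not sure whether a GPU is present, you can fall back to the CPU (a small sketch; mx.context.num_gpus() requires MXNet >= 1.3):

# use the first GPU when one is visible, otherwise stay on the CPU
ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()
bert = BertEmbedding(ctx=ctx)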

The result is a list with one tuple per sentence; each tuple contains (sentence embedding, tokens, token embeddings).

For example:

first_sentence = result[0]

first_sentence[0]
# array([-0.835946  , -0.4605566 , -0.95620036, ..., -0.95608854,
#       -0.6258104 ,  0.7697007 ], dtype=float32)
first_sentence[0].shape
# (768,)

first_sentence[1]
# ['we', 'introduce', 'a', 'new', 'language', 'representation', 'model', 'called', 'bert', ',', 'which', 'stands', 'for', 'bidirectional', 'encoder', 'representations', 'from', 'transformers']
len(first_sentence[1])
# 18


len(first_sentence[2])
# 18
first_sentence_token_embeddings = first_sentence[2]
first_sentence_token_embeddings[0]
# array([ 0.4805648 ,  0.18369392, -0.28554988, ..., -0.01961522,
#        1.0207764 , -0.67167974], dtype=float32)
first_sentence_token_embeddings[0].shape
# (768,)
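
Given the (sentence embedding, tokens, token embeddings) layout above, you can pair every token with its vector in one pass (a quick sanity-check loop):

for sentence_embedding, tokens, token_embeddings in result:
    for token, vector in zip(tokens, token_embeddings):
        print(token, vector.shape)  # each token maps to a (768,) vector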

Available pre-trained BERT models

                 book_corpus_wiki_en_uncased  book_corpus_wiki_en_cased  wiki_multilingual
bert_12_768_12   ✓                            ✓                          ✓
bert_24_1024_16  ✓                            x                          x

Example of using the large pre-trained BERT model from Google

from bert_embedding.bert import BertEmbedding

bert = BertEmbedding(model='bert_24_1024_16', dataset_name='book_corpus_wiki_en_cased')
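
Following the naming convention bert_24_1024_16 (24 layers, 1024 hidden units, 16 attention heads), the embeddings this model returns should be 1024-dimensional rather than 768. A quick check, reusing the `sentences` list from the usage example above:

result = bert.embedding(sentences)
result[0][0].shape
# expected: (1024,)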

Source: gluonnlp

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

bert_embedding-0.1.3.dev1550188558.tar.gz (6.5 kB)


Built Distribution

bert_embedding-0.1.3.dev1550188558-py3-none-any.whl (11.0 kB)

File details

Details for the file bert_embedding-0.1.3.dev1550188558.tar.gz.

File metadata

  • Download URL: bert_embedding-0.1.3.dev1550188558.tar.gz
  • Upload date:
  • Size: 6.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.18.4 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.31.1 CPython/3.5.6

File hashes

Hashes for bert_embedding-0.1.3.dev1550188558.tar.gz
Algorithm    Hash digest
SHA256       bf9755c1558791dcda00622032371f8221395232d31c63844985c7b6c4c5b88d
MD5          10e5794c74229a1693b026cf50d3dd0d
BLAKE2b-256  78cf02c018f9fe0e5129d12c471b21b84e3d84b5881b7af6c687a6671cd2b240
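
To verify a downloaded file against the SHA256 digest above, a minimal sketch using only the Python standard library:

import hashlib

# hash the downloaded sdist and compare it with the published digest
with open('bert_embedding-0.1.3.dev1550188558.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print(digest == 'bf9755c1558791dcda00622032371f8221395232d31c63844985c7b6c4c5b88d')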


File details

Details for the file bert_embedding-0.1.3.dev1550188558-py3-none-any.whl.

File metadata

  • Download URL: bert_embedding-0.1.3.dev1550188558-py3-none-any.whl
  • Upload date:
  • Size: 11.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.18.4 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.31.1 CPython/3.5.6

File hashes

Hashes for bert_embedding-0.1.3.dev1550188558-py3-none-any.whl
Algorithm    Hash digest
SHA256       d96727c7bd7cd119efeae84b86f945343db17e1f5e451319350aef9d6d3c416c
MD5          928c2df39d47ec217e2f5402f350cd36
BLAKE2b-256  8d0d92e55ba1c792490aaca92f89d1369ddfda677c7de7c83216d4ac6547c761

