
BERT token-level embeddings with MXNet

Project description

Bert Embeddings

BERT, published by Google, is a new way to obtain pre-trained language model word representations. Many NLP tasks benefit from BERT to achieve state-of-the-art (SOTA) results.

The goal of this project is to obtain token embeddings from BERT's pre-trained model. This way, instead of building and fine-tuning an end-to-end NLP model, you can build your model on top of the token embeddings.

This project is implemented with @MXNet. Special thanks to the @gluon-nlp team.

Install

pip install bert-embedding
# To run on a GPU machine, install `mxnet-cu92`:
pip install mxnet-cu92

Usage

from bert_embedding import BertEmbedding

bert_abstract = """We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
 Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
 As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. 
BERT is conceptually simple and empirically powerful. 
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%."""
sentences = bert_abstract.split('\n')
bert_embedding = BertEmbedding()
result = bert_embedding(sentences)

If you want to use a GPU, import mxnet and set the context:

import mxnet as mx
from bert_embedding import BertEmbedding

...

ctx = mx.gpu(0)
bert = BertEmbedding(ctx=ctx)
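
If no GPU is present you can fall back to the CPU. A minimal convenience sketch, not from the original README, assuming MXNet 1.3+ where mx.context.num_gpus() is available:

import mxnet as mx
from bert_embedding import BertEmbedding

# Pick the first GPU if one is available, otherwise fall back to the CPU.
# mx.context.num_gpus() is assumed to exist (present in MXNet >= 1.3).
ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()
bert = BertEmbedding(ctx=ctx)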

The result is a list of tuples, one per sentence, each containing (tokens, token embeddings).

For example:

first_sentence = result[0]

first_sentence[0]
# ['we', 'introduce', 'a', 'new', 'language', 'representation', 'model', 'called', 'bert', ',', 'which', 'stands', 'for', 'bidirectional', 'encoder', 'representations', 'from', 'transformers']
len(first_sentence[0])
# 18
len(first_sentence[1])
# 18
first_sentence_embeddings = first_sentence[1]
first_sentence_embeddings[1]  # embedding of the second token, 'introduce'
# array([ 0.4805648 ,  0.18369392, -0.28554988, ..., -0.01961522,
#        1.0207764 , -0.67167974], dtype=float32)
first_sentence_embeddings[1].shape
# (768,)
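
Since each result entry pairs a token list with a list of per-token vectors, you can assemble a sentence matrix directly. A small sketch, not part of the library, using numpy:

import numpy as np

# Stack the 18 per-token embeddings of the first sentence into a single
# (num_tokens, 768) matrix.
tokens, embeddings = result[0]
sentence_matrix = np.stack(embeddings)
sentence_matrix.shape
# (18, 768)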

OOV

There are three ways to handle out-of-vocabulary (OOV) words: avg (the default), sum, and last. The strategy can be passed as the second argument when encoding:

...
bert_embedding = BertEmbedding()
bert_embedding(sentences, 'sum')
...
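
To make the strategies concrete, here is a minimal sketch, not the library's internals, of how the wordpiece vectors of a single OOV word could be merged under each option; the split and the random vectors are purely illustrative:

import numpy as np

# Hypothetical: an OOV word split into three wordpieces,
# e.g. 'embeddings' -> ['em', '##bed', '##dings'], each with a 768-d vector.
sub_token_vectors = np.random.rand(3, 768).astype(np.float32)

avg = sub_token_vectors.mean(axis=0)   # 'avg' (default): average of the pieces
total = sub_token_vectors.sum(axis=0)  # 'sum': element-wise sum of the pieces
last = sub_token_vectors[-1]           # 'last': the final piece's vector

print(avg.shape, total.shape, last.shape)
# (768,) (768,) (768,)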

Available pre-trained BERT models

                 book_corpus_wiki_en_uncased  book_corpus_wiki_en_cased  wiki_multilingual  wiki_multilingual_cased  wiki_cn
bert_12_768_12   ✓                            ✓                          ✓                  ✓                        ✓
bert_24_1024_16  x                            ✓                          x                  x                        x

Example of using the large pre-trained BERT model from Google

from bert_embedding import BertEmbedding

bert_embedding = BertEmbedding(model='bert_24_1024_16', dataset_name='book_corpus_wiki_en_cased')
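
The model name encodes the architecture (24 layers, 1024 hidden units, 16 attention heads), so token embeddings from the large model should be 1024-dimensional rather than 768-dimensional. A quick check, reusing the sentences list from the usage example above; the expected shape is an assumption based on the model's 1024 hidden units:

result = bert_embedding(sentences)
tokens, embeddings = result[0]
embeddings[0].shape
# (1024,)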

Source: gluonnlp

