
# LightNER

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![PyPI version](https://badge.fury.io/py/LightNER.svg)](https://badge.fury.io/py/LightNER)
<!-- [![Documentation Status](https://readthedocs.org/projects/tensorboard-wrapper/badge/?version=latest)](http://tensorboard-wrapper.readthedocs.io/en/latest/?badge=latest) -->
<!-- [![Downloads](https://pepy.tech/badge/torch-scope)](https://pepy.tech/project/LightNER) -->

A toolkit for conducting inference with models pre-trained by LD-Net / AutoNER / VanillaNER / ...

We are in an early-release beta. Expect some adventures and rough edges.

## Quick Links

- [Installation](#installation)
- [Usage](#usage)

## Installation

To install via PyPI:
```
pip install lightner
```

To build from source:
```
pip install git+https://github.com/LiyuanLucasLiu/LightNER
```
or
```
git clone https://github.com/LiyuanLucasLiu/LightNER.git
cd LightNER
python setup.py install
```

## Usage

### Pre-trained Models

| Model | NER | NP |
| ------------- |------------- | ------------- |
| LD-Net | [pner0.th](http://dmserv4.cs.illinois.edu/pner0.th) | [pnp0.th](http://dmserv4.cs.illinois.edu/pnp0.th) |
...
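
The checkpoints can be downloaded directly; for example, a quick sketch using ```wget``` (assuming a Unix-like environment) to fetch the LD-Net NER checkpoint from the table above:
```
wget http://dmserv4.cs.illinois.edu/pner0.th
```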

### Decode API

The decode API can be called in the following way:
```
from lightner import decoder_wrapper
model = decoder_wrapper()
model.decode(["Ronaldo", "won", "'t", "score", "more", "than", "30", "goals", "for", "Juve", "."])
```

The ```decode()``` method can also decode at the document level (taking a list of lists of ```str``` as input) or at the corpus level (taking a list of lists of lists of ```str``` as input).
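
For example, a minimal sketch of document-level decoding with the ```model``` created above (the second sentence is purely illustrative):
```
# a document is a list of sentences; each sentence is a list of str tokens
document = [
    ["Ronaldo", "won", "'t", "score", "more", "than", "30", "goals", "for", "Juve", "."],
    ["He", "scored", "30", "goals", "for", "Juve", "last", "season", "."],
]
model.decode(document)
```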

The ```decoder_wrapper``` function can be customized by choosing a different pre-trained model or by passing an additional ```configs``` file:
```
model = decoder_wrapper(URL_OR_PATH_TO_CHECKPOINT, configs)
```
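
For instance, a sketch that loads the LD-Net NER checkpoint from the table above and leaves ```configs``` at its default:
```
from lightner import decoder_wrapper

# point the wrapper at a specific checkpoint (a URL or a local path, per the API above)
model = decoder_wrapper("http://dmserv4.cs.illinois.edu/pner0.th")
model.decode(["Ronaldo", "won", "'t", "score", "more", "than", "30", "goals", "for", "Juve", "."])
```
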
The available config options can be listed with:
```
lightner decode -h
```

### Console

After installing the package and downloading the pre-trained models, run inference with:
```
lightner decode -m MODEL_FILE -i INPUT_FILE -o OUTPUT_FILE
```
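
For example, assuming the LD-Net NER checkpoint has been saved as ```pner0.th``` and the input file is ```input.txt``` (both file names are placeholders):
```
lightner decode -m pner0.th -i input.txt -o output.txt
```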

You can find more options by:
```
lightner decode -h
```

The currently accepted input format is as below (one token per line; ```-DOCSTART-``` is optional):
```
-DOCSTART-

Ronaldo
won
't
score
more
than
30
goals
for
Juve
.
```

The output would be:
```
<PER> Ronaldo </PER> won 't score more than 30 goals for <ORG> Juve </ORG> .
```
