HanLP: Han Language Processing

中文 | 日本語 | Docs | Forum

The multilingual NLP library for researchers and companies, built on PyTorch and TensorFlow 2.x, for advancing state-of-the-art deep learning techniques in both academia and industry. HanLP was designed from day one to be efficient, user-friendly and extensible.

Thanks to open-access corpora like Universal Dependencies and OntoNotes, HanLP 2.1 now offers 10 joint tasks on 130 languages: tokenization, lemmatization, part-of-speech tagging, token feature extraction, named entity recognition, dependency parsing, constituency parsing, semantic role labeling, semantic dependency parsing, and abstract meaning representation (AMR) parsing.

For end users, HanLP offers lightweight RESTful APIs and native Python APIs.

RESTful APIs

Tiny packages of only a few KBs, suited to agile development and mobile applications. Anonymous users are welcome, but an auth key is recommended; a free one can be applied for here under the CC BY-NC-SA 4.0 license.

Python

pip install hanlp_restful

Create a client with our API endpoint and your auth key.

from hanlp_restful import HanLPClient
HanLP = HanLPClient('https://hanlp.hankcs.com/api', auth=None, language='mul') # mul: multilingual, zh: Chinese

Java

Insert the following dependency into your pom.xml.

<dependency>
  <groupId>com.hankcs.hanlp.restful</groupId>
  <artifactId>hanlp-restful</artifactId>
  <version>0.0.15</version>
</dependency>

Create a client with our API endpoint and your auth key.

HanLPClient HanLP = new HanLPClient("https://hanlp.hankcs.com/api", null, "mul"); // mul: multilingual, zh: Chinese

Quick Start

No matter which language you use, the same interface can be used to parse a document.

HanLP.parse(
    "In 2021, HanLPv2.1 delivers state-of-the-art multilingual NLP techniques to production environments. 2021年、HanLPv2.1は次世代の最先端多言語NLP技術を本番環境に導入します。2021年 HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。")

See docs for visualization, annotation guidelines and more details.

Native APIs

pip install hanlp

HanLP requires Python 3.6 or later. A GPU or TPU is recommended but not required.

Quick Start

import hanlp

HanLP = hanlp.load(hanlp.pretrained.mtl.UD_ONTONOTES_TOK_POS_LEM_FEA_NER_SRL_DEP_SDP_CON_XLMR_BASE)
print(HanLP(['In 2021, HanLPv2.1 delivers state-of-the-art multilingual NLP techniques to production environments.',
             '2021年、HanLPv2.1は次世代の最先端多言語NLP技術を本番環境に導入します。',
             '2021年 HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。']))
  • In particular, the Python HanLPClient can also be used as a callable function following the same semantics, as shown in the sketch after this list. See docs for visualization, annotation guidelines and more details.
  • To process Chinese or Japanese, HanLP provides mono-lingual models for each language that significantly outperform the multi-lingual model. See docs for the list of models.
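
A minimal sketch of the callable-client usage noted above, assuming the same endpoint and anonymous auth as in the RESTful tutorial:

from hanlp_restful import HanLPClient

HanLP = HanLPClient('https://hanlp.hankcs.com/api', auth=None, language='mul')
doc = HanLP('2021年 HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。')  # equivalent to HanLP.parse(...)
print(doc)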

Train Your Own Models

Writing DL models is not hard; the really hard part is writing a model that can reproduce the scores reported in papers. The snippet below shows how to surpass the state-of-the-art tokenizer in 6 minutes.

# Imports follow HanLP's package layout; adjust the paths if they differ in your version.
from hanlp.common.dataset import SortingSamplerBuilder
from hanlp.components.tokenizers.transformer import TransformerTaggingTokenizer
from hanlp.datasets.tokenization.sighan2005.pku import SIGHAN2005_PKU_TRAIN_ALL, SIGHAN2005_PKU_TEST

tokenizer = TransformerTaggingTokenizer()
save_dir = 'data/model/cws/sighan2005_pku_bert_base_96.7'
tokenizer.fit(
    SIGHAN2005_PKU_TRAIN_ALL,
    SIGHAN2005_PKU_TEST,  # Conventionally, no devset is used. See Tian et al. (2020).
    save_dir,
    'bert-base-chinese',  # transformer backbone
    max_seq_len=300,
    char_level=True,
    hard_constraint=True,
    sampler_builder=SortingSamplerBuilder(batch_size=32),
    epochs=3,
    adam_epsilon=1e-6,
    warmup_steps=0.1,
    weight_decay=0.01,
    word_dropout=0.1,
    seed=1660853059,
)
tokenizer.evaluate(SIGHAN2005_PKU_TEST, save_dir)

The result is guaranteed to be 96.73, as the random seed is fixed. Unlike some overclaiming papers and projects, HanLP promises that every single digit in our scores is reproducible. Any reproducibility issue will be treated as a top-priority fatal bug and resolved.
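
Once fit returns, the trained component stays in memory and can be reloaded from save_dir in a fresh process. A hedged sketch, assuming the fitted tokenizer is callable for prediction like other HanLP components:

tokenizer.load(save_dir)        # only needed when starting from a new process
print(tokenizer('商品和服务'))  # e.g. ['商品', '和', '服务']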

Performance

The performance of multi-task learning models is shown in the following table.

| lang | corpora | model | tok (fine) | tok (coarse) | pos (ctb) | pos (pku) | pos (863) | pos (ud) | ner (pku) | ner (msra) | ner (ontonotes) | dep | con | srl | sdp (SemEval16) | sdp (DM) | sdp (PAS) | sdp (PSD) | lem | fea | amr |
|------|---------|-------|------------|--------------|-----------|-----------|-----------|----------|-----------|------------|-----------------|-----|-----|-----|-----------------|----------|-----------|-----------|-----|-----|-----|
| mul | UD2.7 OntoNotes5 | small | 98.62 | - | - | - | - | 93.23 | - | - | 74.42 | 79.10 | 76.85 | 70.63 | - | 91.19 | 93.67 | 85.34 | 87.71 | 84.51 | - |
| mul | UD2.7 OntoNotes5 | base | 98.97 | - | - | - | - | 90.32 | - | - | 80.32 | 78.74 | 71.23 | 73.63 | - | 92.60 | 96.04 | 81.19 | 85.08 | 82.13 | - |
| zh | open | small | 97.25 | - | 96.66 | - | - | - | - | - | 95.00 | 84.57 | 87.62 | 73.40 | 84.57 | - | - | - | - | - | - |
| zh | open | base | 97.50 | - | 97.07 | - | - | - | - | - | 96.04 | 87.11 | 89.84 | 77.78 | 87.11 | - | - | - | - | - | - |
| zh | close | small | 96.70 | 95.93 | 96.87 | 97.56 | 95.05 | - | 96.22 | 95.74 | 76.79 | 84.44 | 88.13 | 75.81 | 74.28 | - | - | - | - | - | - |
| zh | close | base | 97.52 | 96.44 | 96.99 | 97.59 | 95.29 | - | 96.48 | 95.72 | 77.77 | 85.29 | 88.57 | 76.52 | 73.76 | - | - | - | - | - | - |
| zh | close | ernie | 96.95 | 97.29 | 96.76 | 97.64 | 95.22 | - | 97.31 | 96.47 | 77.95 | 85.67 | 89.17 | 78.51 | 74.10 | - | - | - | - | - | - |
  • Multi-task learning models often underperform their single-task learning counterparts according to our latest research. Similarly, mono-lingual models often outperform multi-lingual models. Therefore, we strongly recommend a single-task mono-lingual model if you are targeting high accuracy rather than speed; see the sketch after this list.
  • A state-of-the-art AMR model has been released.
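
For instance, a single-task mono-lingual tokenizer loads the same way as the MTL model. A hedged sketch; the pretrained identifier below is taken from the model list in the docs and may change across releases:

import hanlp

tok = hanlp.load(hanlp.pretrained.tok.COARSE_ELECTRA_SMALL_ZH)  # Chinese coarse-grained tokenizer
print(tok('2021年 HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。'))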

Citing

If you use HanLP in your research, please cite our EMNLP paper:

@inproceedings{he-choi-2021-stem,
    title = "The Stem Cell Hypothesis: Dilemma behind Multi-Task Learning with Transformer Encoders",
    author = "He, Han and Choi, Jinho D.",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.451",
    pages = "5555--5577",
    abstract = "Multi-task learning with transformer encoders (MTL) has emerged as a powerful technique to improve performance on closely-related tasks for both accuracy and efficiency while a question still remains whether or not it would perform as well on tasks that are distinct in nature. We first present MTL results on five NLP tasks, POS, NER, DEP, CON, and SRL, and depict its deficiency over single-task learning. We then conduct an extensive pruning analysis to show that a certain set of attention heads get claimed by most tasks during MTL, who interfere with one another to fine-tune those heads for their own objectives. Based on this finding, we propose the Stem Cell Hypothesis to reveal the existence of attention heads naturally talented for many tasks that cannot be jointly trained to create adequate embeddings for all of those tasks. Finally, we design novel parameter-free probes to justify our hypothesis and demonstrate how attention heads are transformed across the five tasks during MTL through label analysis.",
}

License

Code

HanLP is licensed under Apache License 2.0. You can use HanLP in your commercial products for free. We would appreciate it if you add a link to HanLP on your website.

Models

Unless otherwise specified, all models in HanLP are licensed under CC BY-NC-SA 4.0.

References

https://hanlp.hankcs.com/docs/references.html
