

Project description

Abstractive

This project is a convenience package for NLP work, building on several existing open-source projects such as summy and common text-processing libraries. One of its main functions is sentence tokenization for Japanese.
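Assuming the package is published on PyPI under the distribution name shown in the files below (util_ds), it can presumably be installed with pip:

pip install util_ds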

The open-source resources we use are:

Examples

A sentence tokenization example:

>>> from util_ds.nlp.sentence_token import sentenceToken
>>> sentence = "ドイツ連邦共和国(ドイツれんぽうきょうわこく、独: Bundesrepublik Deutschland)、通称ドイツ(独: Deutschland)は、中央ヨーロッパ西部に位置する連邦共和制国家。首都および最大の都市(英語版)はベルリン[1]。南がスイスとオーストリア、北にデンマーク、西をフランスとオランダとベルギーとルクセンブルク、東はポーランドとチェコとそれぞれ国境を接する。"
>>> sentences = sentenceToken("japanese", sentence)
>>> ["ドイツ連邦共和国(ドイツれんぽうきょうわこく、独: Bundesrepublik Deutschland)、通称ドイツ(独: Deutschland)は、中央ヨーロッパ西部に位置する連邦共和制国家。首都および最大の都市(英語版)はベルリン[1]。", "南がスイスとオーストリア、北にデンマーク、西をフランスとオランダとベルギーとルクセンブルク、東はポーランドとチェコとそれぞれ国境を接する。"]

Note: the function above can essentially be replaced by the following implementation.

>>> import re
>>> # Note: this simple version does not handle more complicated cases.
>>> def sentenceToken(language, text):
...     # Break after a sentence-ending mark (。!??) unless the next
...     # character is a closing quotation mark or bracket.
...     pattern = r'([。!?\?])([^」』)])'
...     sentences = re.sub(pattern, r"\1\n\2", text).split("\n")
...     sentences = list(map(lambda x: x.strip(), sentences))
...     sentences = list(filter(lambda x: x != "", sentences))
...     return sentences
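
As a quick check, here is this fallback splitter applied to a short made-up two-sentence string (the input is illustrative only, not taken from the package's tests):

>>> sentenceToken("japanese", "今日は晴れです。明日は雨でしょう。")
['今日は晴れです。', '明日は雨でしょう。']

Unlike the library call shown earlier, this simple version breaks after every sentence-ending mark that is not followed by a closing quote or bracket, so its segmentation can be finer-grained.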

Download files


Source Distribution

util_ds-0.5.3.tar.gz (38.2 kB)


Built Distribution

util_ds-0.5.3-py3-none-any.whl (60.8 kB)

