# bareunpy: the Bareun Python library using gRPC
## What is this?

`bareunpy` is the Python 3 library for Bareun.
Bareun is a Korean NLP engine that provides tokenizing and POS tagging for Korean.
## How to install

```shell
pip3 install bareunpy
```
## How to get Bareun

- Go to https://bareun.ai/.
- When you register for the first time, you will get an API key that lets you use the service for free.
- With the API key, you can install the Bareun server,
- or you can use this `bareunpy` library to call any running Bareun server.
- Or use the Docker image (see https://hub.docker.com/r/bareunai/bareun); a sketch of running it follows below.

```shell
docker pull bareunai/bareun:latest
```
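For convenience, here is a minimal sketch of starting the pulled image. The exposed port (5656 here) and the `API_KEY` environment variable name are assumptions for illustration, not confirmed by this document; check the Docker Hub page for the actual options.

```shell
# Illustrative only: the exposed port (5656) and the API_KEY environment
# variable are assumptions; consult https://hub.docker.com/r/bareunai/bareun.
docker run -d --name bareun \
  -p 5656:5656 \
  -e API_KEY="koba-ABCDEFG-1234567-LMNOPQR-7654321" \
  bareunai/bareun:latest
```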
## How to use: Tagger

```python
import sys
import google.protobuf.text_format as tf
from bareunpy import Tagger
# You can get an API-KEY from https://bareun.ai/
# Please note that you need to sign up and verify your email.
# 아래에 "https://bareun.ai/"에서 이메일 인증 후 발급받은 API KEY("koba-...")를 입력해주세요. "로그인-내정보 확인"
API_KEY = "koba-ABCDEFG-1234567-LMNOPQR-7654321" # <- 본인의 API KEY로 교체(Replace this with your own API KEY)
# If you have Bareun running on localhost:
tagger = Tagger(API_KEY, 'localhost')
# or, if you have your own Bareun running on 10.8.3.211:15656:
tagger = Tagger(API_KEY, '10.8.3.211', 15656)
# print results.
res = tagger.tags(["안녕하세요.", "반가워요!"])
# get protobuf message.
m = res.msg()
tf.PrintMessage(m, out=sys.stdout, as_utf8=True)
print(tf.MessageToString(m, as_utf8=True))
print(f'length of sentences is {len(m.sentences)}')
## output : 2
print(f'length of tokens in sentences[0] is {len(m.sentences[0].tokens)}')
print(f'length of morphemes of first token in sentences[0] is {len(m.sentences[0].tokens[0].morphemes)}')
print(f'lemma of first token in sentences[0] is {m.sentences[0].tokens[0].lemma}')
print(f'first morph of first token in sentences[0] is {m.sentences[0].tokens[0].morphemes[0]}')
print(f'tag of first morph of first token in sentences[0] is {m.sentences[0].tokens[0].morphemes[0].tag}')
## Advanced usage.
for sent in m.sentences:
    for token in sent.tokens:
        for morph in token.morphemes:
            # Print each morpheme's text, tag, probability, and out-of-vocabulary flag.
            print(f'{morph.text.content}/{morph.tag}:{morph.probability}:{morph.out_of_vocab}')
# get json object
jo = res.as_json()
print(jo)
# get tuple of pos tagging.
pa = res.pos()
print(pa)
# another methods
ma = res.morphs()
print(ma)
na = res.nouns()
print(na)
va = res.verbs()
print(va)
# custom dictionary
cust_dic = tagger.custom_dict("my")
cust_dic.copy_np_set({'내고유명사', '우리집고유명사'})
cust_dic.copy_cp_set({'코로나19'})
cust_dic.copy_cp_caret_set({'코로나^백신', '독감^백신'})
cust_dic.update()
# load the previously saved custom dictionary.
cust_dict2 = tagger.custom_dict("my")
cust_dict2.load()
tagger.set_domain('my')
tagger.pos('코로나19는 언제 끝날까요?')
```
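If you want the morpheme walk above in a reusable form, the following sketch collects `(morpheme, tag)` pairs per sentence. It uses only the calls demonstrated above (`Tagger`, `tags()`, `msg()`, and the `sentences -> tokens -> morphemes` fields); the helper itself is illustrative, not part of the `bareunpy` API.

```python
from bareunpy import Tagger

API_KEY = "koba-ABCDEFG-1234567-LMNOPQR-7654321"  # <- your own API key
tagger = Tagger(API_KEY, 'localhost')  # same connection assumptions as above

def tag_pairs(sentences):
    # Illustrative helper (not part of bareunpy): walk the protobuf
    # message and collect (morpheme text, tag) pairs per sentence.
    res = tagger.tags(sentences)
    m = res.msg()
    return [
        [(morph.text.content, morph.tag)
         for token in sent.tokens
         for morph in token.morphemes]
        for sent in m.sentences
    ]

for pairs in tag_pairs(["안녕하세요.", "반가워요!"]):
    print(pairs)
```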
## How to use: Tokenizer

```python
import sys
import google.protobuf.text_format as tf
from bareunpy import Tokenizer
# You can get an API-KEY from https://bareun.ai/
# Please note that you need to sign up and verify your email.
# 아래에 "https://bareun.ai/"에서 이메일 인증 후 발급받은 API KEY("koba-...")를 입력해주세요. "로그인-내정보 확인"
API_KEY = "koba-ABCDEFG-1234567-LMNOPQR-7654321" # <- 본인의 API KEY로 교체(Replace this with your own API KEY)
# If you have Bareun running on localhost:
tokenizer = Tokenizer(API_KEY, 'localhost')
# or, if you have your own Bareun running on 10.8.3.211:15656:
tokenizer = Tokenizer(API_KEY, '10.8.3.211', 15656)
# print results.
tokenized = tokenizer.tokenize_list(["안녕하세요.", "반가워요!"])
# get protobuf message.
m = tokenized.msg()
tf.PrintMessage(m, out=sys.stdout, as_utf8=True)
print(tf.MessageToString(m, as_utf8=True))
print(f'length of sentences is {len(m.sentences)}')
## output : 2
print(f'length of tokens in sentences[0] is {len(m.sentences[0].tokens)}')
print(f'length of segments of first token in sentences[0] is {len(m.sentences[0].tokens[0].segments)}')
print(f'tagged of first token in sentences[0] is {m.sentences[0].tokens[0].tagged}')
print(f'first segment of first token in sentences[0] is {m.sentences[0].tokens[0].segments[0]}')
print(f'hint of first morph of first token in sentences[0] is {m.sentences[0].tokens[0].segments[0].hint}')
## Advanced usage.
for sent in m.sentences:
    for token in sent.tokens:
        for seg in token.segments:
            # Print each segment's text and its hint.
            print(f'{seg.text.content}/{seg.hint}')
# get json object
jo = tokenized.as_json()
print(jo)
# get tuple of segments
ss = tokenized.segments()
print(ss)
ns = tokenized.nouns()
print(ns)
vs = tokenized.verbs()
print(vs)
# postpositions: 조사
ps = tokenized.postpositions()
print(ps)
# adverbs: 부사
advs = tokenized.adverbs()
print(advs)
ss = tokenized.symbols()
print(ss)
```
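To show the tokenizer output doing some work, the sketch below counts noun frequencies with `collections.Counter`. It relies only on `tokenize_list()` and `nouns()` as shown above, and assumes `nouns()` returns a flat list of noun strings; the connection settings repeat the assumptions from the example.

```python
from collections import Counter
from bareunpy import Tokenizer

API_KEY = "koba-ABCDEFG-1234567-LMNOPQR-7654321"  # <- your own API key
tokenizer = Tokenizer(API_KEY, 'localhost')

# Tokenize a few sentences and count how often each noun appears.
tokenized = tokenizer.tokenize_list([
    "안녕하세요.",
    "반가워요!",
])
noun_counts = Counter(tokenized.nouns())  # assumes a flat list of strings
for noun, count in noun_counts.most_common():
    print(f'{noun}\t{count}')
```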