Preprocessing and evaluation tools for Japanese cohesion analysis
Project description
Cohesion Tools
Requirements
- Python: 3.9+
- Dependencies: See pyproject.toml.
Installation
pip install cohesion-tools # or cohesion-tools[eval]
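Note that some shells (e.g. zsh) treat square brackets specially, so the extra may need quoting; the `eval` extra name is simply taken from the command above.

pip install "cohesion-tools[eval]"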
Usage
Evaluating Predicted Documents
from pathlib import Path
from typing import List
from rhoknp import Document
from rhoknp.cohesion import ExophoraReferentType
from cohesion_tools.evaluators import CohesionEvaluator, CohesionScore
documents: List[Document] = [Document.from_knp(path.read_text()) for path in Path("your/dataset").glob("*.knp")]
predicted_documents = your_model(documents)
scorer = CohesionEvaluator(
    # Exophora referents: 著者 = author, 読者 = reader, 不特定:人 = unspecified person, 不特定:物 = unspecified thing
    exophora_referent_types=[ExophoraReferentType(t) for t in ("著者", "読者", "不特定:人", "不特定:物")],
    # Target cases: ガ (nominative), ヲ (accusative), ニ (dative)
    pas_cases=["ガ", "ヲ", "ニ"],
)
score: CohesionScore = scorer.run(predicted_documents=predicted_documents, gold_documents=documents)
score.to_dict() # Convert the evaluation result to a dictionary
score.export_csv("score.csv") # Export the evaluation result to `score.csv`
score.export_txt("score.txt") # Export the evaluation result to `score.txt`
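As a quick sanity check of the evaluation setup (not part of the documented examples above), you can score the gold documents against themselves; a correctly configured pipeline should then report perfect scores. The sketch below reuses only the API shown above, and "your/dataset" remains a placeholder path.

from pathlib import Path
from typing import List

from rhoknp import Document
from rhoknp.cohesion import ExophoraReferentType

from cohesion_tools.evaluators import CohesionEvaluator, CohesionScore

gold_documents: List[Document] = [
    Document.from_knp(path.read_text()) for path in Path("your/dataset").glob("*.knp")
]
scorer = CohesionEvaluator(
    exophora_referent_types=[ExophoraReferentType(t) for t in ("著者", "読者", "不特定:人", "不特定:物")],
    pas_cases=["ガ", "ヲ", "ニ"],
)
# Comparing the gold annotations against themselves should yield perfect scores,
# which confirms that the documents are loaded and the evaluator is configured correctly.
score: CohesionScore = scorer.run(predicted_documents=gold_documents, gold_documents=gold_documents)
print(score.to_dict())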
Extracting Labels From Base Phrases
from pathlib import Path
from typing import Dict, List
from rhoknp import Document
from rhoknp.cohesion import ExophoraReferentType, Argument
from cohesion_tools.extractors import PasExtractor
pas_extractor = PasExtractor(
    # Target cases: ガ (nominative), ヲ (accusative), ニ (dative)
    cases=["ガ", "ヲ", "ニ"],
    # Exophora referents: 著者 = author, 読者 = reader, 不特定:人 = unspecified person, 不特定:物 = unspecified thing
    exophora_referent_types=[ExophoraReferentType(t) for t in ("著者", "読者", "不特定:人", "不特定:物")],
)
examples = []
documents: List[Document] = [Document.from_knp(path.read_text()) for path in Path("your/dataset").glob("*.knp")]
for document in documents:
    for base_phrase in document.base_phrases:
        if pas_extractor.is_target(base_phrase):
            # Map each target case to the list of gold arguments for this base phrase.
            rels: Dict[str, List[Argument]] = pas_extractor.extract_rels(base_phrase)
            examples.append(rels)
your_trainer.train(your_model, examples)
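Before training, it can be useful to check how often each case actually has a gold argument in the extracted examples. The following is a minimal sketch that assumes only the structure shown above (`examples` is a list of dictionaries mapping each case to a list of Argument objects); the counting logic is illustrative and is not part of cohesion-tools.

from collections import Counter

# `examples` is the list of per-base-phrase relation dicts built in the loop above.
case_counts: Counter = Counter()
for rels in examples:
    for case, arguments in rels.items():
        if arguments:  # at least one gold argument annotated for this case
            case_counts[case] += 1
print(case_counts)  # e.g. how many target base phrases have a ガ, ヲ, or ニ argument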
Reference
@inproceedings{ueda-etal-2020-bert,
    title = {{BERT}-based Cohesion Analysis of {J}apanese Texts},
    author = {Ueda, Nobuhiro and
      Kawahara, Daisuke and
      Kurohashi, Sadao},
    booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
    month = dec,
    year = {2020},
    address = {Barcelona, Spain (Online)},
    publisher = {International Committee on Computational Linguistics},
    url = {https://aclanthology.org/2020.coling-main.114},
    doi = {10.18653/v1/2020.coling-main.114},
    pages = {1323--1333},
    abstract = {The meaning of natural language text is supported by cohesion among various kinds of entities, including coreference relations, predicate-argument structures, and bridging anaphora relations. However, predicate-argument structures for nominal predicates and bridging anaphora relations have not been studied well, and their analyses have been still very difficult. Recent advances in neural networks, in particular, self training-based language models including BERT (Devlin et al., 2019), have significantly improved many natural language processing tasks, making it possible to dive into the study on analysis of cohesion in the whole text. In this study, we tackle an integrated analysis of cohesion in Japanese texts. Our results significantly outperformed existing studies in each task, especially about 10 to 20 point improvement both for zero anaphora and coreference resolution. Furthermore, we also showed that coreference resolution is different in nature from the other tasks and should be treated specially.}
}
@inproceedings{ueda-etal-2023-kwja,
    title = {{KWJA}: A Unified {J}apanese Analyzer Based on Foundation Models},
    author = {Ueda, Nobuhiro and
      Omura, Kazumasa and
      Kodama, Takashi and
      Kiyomaru, Hirokazu and
      Murawaki, Yugo and
      Kawahara, Daisuke and
      Kurohashi, Sadao},
    booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
    month = jul,
    year = {2023},
    address = {Toronto, Canada},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2023.acl-demo.52},
    pages = {538--548},
    abstract = {We present KWJA, a high-performance unified Japanese text analyzer based on foundation models. KWJA supports a wide range of tasks, including typo correction, word segmentation, word normalization, morphological analysis, named entity recognition, linguistic feature tagging, dependency parsing, PAS analysis, bridging reference resolution, coreference resolution, and discourse relation analysis, making it the most versatile among existing Japanese text analyzers. KWJA solves these tasks in a multi-task manner but still achieves competitive or better performance compared to existing analyzers specialized for each task. KWJA is publicly available under the MIT license at https://github.com/ku-nlp/kwja.}
}
License
This software is released under the MIT License; see LICENSE for details.
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
- cohesion_tools-0.7.4.tar.gz (76.3 kB)
Built Distribution
- cohesion_tools-0.7.4-py3-none-any.whl (19.0 kB)
File details
Details for the file cohesion_tools-0.7.4.tar.gz.
File metadata
- Download URL: cohesion_tools-0.7.4.tar.gz
- Upload date:
- Size: 76.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.1.1 CPython/3.12.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | f6676a7a78a025ab46153d8ee7af375cec8121e74e13b43c83fa3621b5fe35f2
MD5 | aace5d49b54f240461c8cc4919894da7
BLAKE2b-256 | 1c155eb179eb710e3d5ebd83d5a6f31c1c251858950707897494e5465d224c98
File details
Details for the file cohesion_tools-0.7.4-py3-none-any.whl.
File metadata
- Download URL: cohesion_tools-0.7.4-py3-none-any.whl
- Upload date:
- Size: 19.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.1.1 CPython/3.12.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 87af99e0ff6f979a4d384732be7fb1236a1765ad7f02d9c2a7eb8de37bb72f46
MD5 | aa9a2a206c2e9ca75355389a34e09241
BLAKE2b-256 | c519e2fbccccc8fd403d644d930b5f34d43a20c22c2fac289a18f07cdf1aa271