
Text2Story main package

The Text2Story main package contains the main classes and methods for the T2S pipeline: from text to formal representation to visualization or other representation.

  • Relation to Brat2Viz: The Text2Story package is a generalization of Brat2Viz and should in fact contain all the functionalities and variants of the T2S project output.

Table of Contents

  1. Getting Started.
  2. The Framework Structure.
  3. The Annotators.
  4. Installation.
  5. The Web App.

1. Getting Started

The main goal of text2story is to extract narratives from raw text. The narrative components comprise the events, the participants in those events, and the time expressions.

  • Event: An eventuality that happens or occurs, or a state or circumstance that is temporally relevant.
  • Time: Temporal expressions that represent units of time.
  • Participants: Named entities that play an important role in the event or state.

These elements are related to each other through relations such as Semantic Role Links and Objectal Links.

  • Objectal Links: State how two discourse entities are referentially related to one another. For instance, the "identity" objectal link connects entities that refer to the same referent, and the "part of" objectal link connects a referent that is part of another.
  • Semantic Role Links: Identify the way an entity is involved in, or participates in, an eventuality. For instance, the "agent" semantic role link connects an event to the participant that intentionally caused it.
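
The core classes listed in the framework structure (ParticipantEntity, EventEntity, SemanticRoleLink, and so on) suggest how these elements fit together. The sketch below is a simplified, hypothetical illustration of that structure; the field names and offsets are assumptions for illustration, not the package's actual API.

```python
from dataclasses import dataclass

# Hypothetical, simplified versions of the package's entity/link structures;
# the field names here are assumptions for illustration only.

@dataclass
class ParticipantEntity:
    text: str    # surface form, e.g. "Max Healthcare"
    start: int   # character offsets in the source text
    end: int

@dataclass
class EventEntity:
    text: str    # surface form of the event trigger, e.g. "put out"
    start: int
    end: int

@dataclass
class SemanticRoleLink:
    role: str    # e.g. "agent"
    participant: ParticipantEntity
    event: EventEntity

# "Max Healthcare" intentionally caused the "put out" event -> agent role.
actor = ParticipantEntity("Max Healthcare", 19, 33)
event = EventEntity("put out", 81, 88)
link = SemanticRoleLink("agent", actor, event)
print(link.role)  # agent
```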

A simple example that extracts the narrative elements, and the two types of relations described above, looks like the following:

```python
import text2story as t2s  # Import the package

t2s.load("en")  # Load the pipelines for the English language

text = 'On Friday morning, Max Healthcare, which runs 10 private hospitals around Delhi, put out an "SOS" message, saying it had less than an hour\'s supply remaining at two of its sites. The shortage was later resolved.'

doc = t2s.Narrative('en', text, '2020-05-30')

doc.extract_participants()  # Extraction done with all tools
doc.extract_participants('spacy', 'nltk')  # Extraction done with the SPACY and NLTK tools
doc.extract_participants('allennlp')  # Extraction done with just the ALLENNLP tool

doc.extract_times()  # Extraction done with all tools

doc.extract_objectal_links()  # Extraction of objectal links with all tools (must be done after extracting participants, since coreference resolution requires them)

doc.extract_events()  # Extraction of events with all tools
doc.extract_semantic_role_link()  # Extraction of semantic role links with all tools (should be done after extracting events, since most semantic relations hold between a participant and an event)

ann_str = doc.ISO_annotation()  # Returns the ISO annotation in .ann (text) format
with open('annotations.ann', 'w') as fd:
    fd.write(ann_str)
```
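
The .ann file produced above follows BRAT standoff conventions, where each text-bound annotation line carries an ID, a label with a character span, and the covered text, separated by tabs. A minimal sketch of reading such lines back (the labels and offsets below are illustrative, not actual package output):

```python
# Minimal parser for BRAT-style text-bound annotations ("T" lines).
# The labels and offsets in the sample are illustrative assumptions;
# the actual labels depend on the package's ISO annotation scheme.
def parse_ann(ann_str):
    entities = []
    for line in ann_str.splitlines():
        if not line.startswith("T"):  # skip relation/attribute lines
            continue
        ann_id, type_span, surface = line.split("\t")
        label, start, end = type_span.split(" ")
        entities.append((ann_id, label, int(start), int(end), surface))
    return entities

sample = "T1\tEvent 82 85\tput\nT2\tParticipant 19 33\tMax Healthcare"
for entity in parse_ann(sample):
    print(entity)
```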

2. Framework Structure

.
│   README.md
│   env.yml
│   requirements.txt
│
└───Text2Story
│     └───core
│     │      annotator.py (META-annotator)
│     │      entity_structures.py (ParticipantEntity, TimexEntity and EventEntity classes)
│     │      exceptions.py (exceptions raised by the package)
│     │      link_structures.py (TemporalLink, AspectualLink, SubordinationLink, SemanticRoleLink and ObjectalLink classes)
│     │      narrative.py (Narrative class)
│     │      utils.py (utility functions)
│     │
│     └───readers (tools to support the reading of some specific kinds of annotated corpora)
│     │      read.py (abstract class: defines the structure of a reader)
│     │      TokenCorpus (internal representation of a token, its annotations and relations)
│     │      read_brat.py (reads annotated files of the type supported by the BRAT annotation tool)
│     │      read_ecb.py (processes the ECB+ corpus format)
│     │      read_framenet.py (processes the FrameNet corpus format)
│     │      read_propbank.py (processes the PropBank corpus format)
│     └───annotators (tools supported by the package to do the extractions)
│     │      NLTK
│     │      PY_HEIDELTIME
│     │      BERTNERPT
│     │      TEI2GO (requires manual installation of each used model)
│     │      SPACY
│     │      ALLENNLP
│     └───experiments
│     │      evaluation.py (performs batch evaluation of narrative corpora)
│     │      metrics.py (implements some specific metrics, like relaxed recall and relaxed precision)
│     │      stats.py (counts some narrative elements and produces stats of the narrative corpora)
│     └───visualization
│            brat2viz (a module that converts a BRAT annotation file into visual representations, like a Message Sequence Chart (MSC) and a Knowledge Graph (KG))
│            viz (a module that contains bubble_tikz.py, a class dedicated to building Bubble diagrams)
│
└───Webapp
       backend.py
       main.py
       session_state.py
       input_phase.py
       output_phase.py

3. The Annotators

All annotators have the same interface: they implement a function named 'extract_' followed by the name of the particular extraction. E.g., to extract participants, they implement a function named 'extract_participants' with two arguments: the language of the text and the text itself.

| Extraction    | Interface                                           | Supporting tools |
|---------------|-----------------------------------------------------|------------------|
| Participant   | extract_participants(lang, text)                    | SPACY, NLTK, ALLENNLP, BERTNERPT |
| Timexs        | extract_timexs(lang, text, publication_time)        | PY_HEIDELTIME, TEI2GO (requires manual installation of each used model) |
| ObjectalLink  | extract_objectal_links(lang, text, publication_time) | ALLENNLP |
| Event         | extract_events(lang, text, publication_time)        | ALLENNLP |
| SemanticLink  | extract_semantic_role_link(lang, text, publication_time) | ALLENNLP |

To change a model used by one of the supported tools, go to text2story/annotators/ANNOTATOR_TO_BE_CHANGED and change the model in the __init__.py file.

To add a new tool, add a folder to text2story/annotators with the name of the annotator in all capitals (just a convention, useful to avoid name collisions). In that folder, create a file called '__init__.py' and implement a load() function and the desired extraction functions. The load() function should load the pipeline into some variable you define, so that the pipeline does not need to be reloaded on every extraction. (Implement it even if your annotator doesn't load anything; just leave it with an empty body.)

In the text2story/annotators/__init__.py file, add calls to the load() function and to the extract functions. (See the already implemented tools for guidance.)
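
As a sketch, a hypothetical text2story/annotators/MYTOOL/__init__.py might look like the following. MYTOOL, its 'pipeline' variable, and the toy extraction logic are all assumptions for illustration; a real annotator would wrap an actual NLP pipeline.

```python
# Hypothetical __init__.py for a tool named MYTOOL (illustrative only).

pipeline = None  # module-level variable that holds the loaded pipeline


def load():
    """Load the pipeline once, so extractions don't reload it each time."""
    global pipeline
    pipeline = object()  # placeholder: a real annotator would load a model here


def extract_participants(lang, text):
    """Return (start, end, label) spans of participants in 'text'.

    Toy heuristic: tag capitalized tokens as participants. A real
    annotator would run its NLP pipeline instead.
    """
    spans = []
    offset = 0
    for token in text.split():
        start = text.index(token, offset)
        if token[0].isupper():
            spans.append((start, start + len(token), "Participant"))
        offset = start + len(token)
    return spans


load()
print(extract_participants("en", "Max visited Delhi on Friday"))
```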

Specifically, for annotators like TEI2GO (detailed in its documentation here), users need to manually install the required model. For example, if you plan to use the English model, execute the following command before loading it into 'text2story':

pip install https://huggingface.co/hugosousa/en_tei2go/resolve/main/en_tei2go-any-py3-none-any.whl

And you are done.

PS: Don't forget to normalize the labels to our semantic framework!
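
For example, a spaCy NER model emits labels such as PERSON, ORG, and DATE, which have to be mapped onto the framework's own categories before being stored. A hedged sketch of such a normalization step (the target label names are assumptions; check the framework's actual label set):

```python
# Map raw tool labels to (hypothetical) framework labels.
# The right-hand-side names are illustrative assumptions, not the
# package's actual semantic framework vocabulary.
LABEL_MAP = {
    "PERSON": "Participant",
    "ORG": "Participant",
    "GPE": "Participant",
    "DATE": "Time",
    "TIME": "Time",
}

def normalize_label(raw_label):
    """Return the framework label for a tool-specific label, or None."""
    return LABEL_MAP.get(raw_label)

print(normalize_label("ORG"))   # Participant
print(normalize_label("DATE"))  # Time
```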

4. Installation

4.1 Linux / Ubuntu

The installation requires the Graphviz software, a LaTeX suite, and Poppler (to convert PDF to PNG). On Linux, install them by opening a terminal and typing the following command:

sudo apt-get install graphviz libgraphviz-dev texlive-latex-base  texlive-latex-extra poppler-utils

After that, create a virtual environment using venv or another tool of your preference. For instance, with the following command:

$ python3 -m venv venv

Then, activate the virtual environment:

$ source venv/bin/activate

After that, you are ready to install text2story:

$ pip install text2story

4.2 Windows

First, make sure you have the Microsoft C++ Build Tools. Then install the Graphviz software by downloading a suitable version from this link. Next, install a LaTeX suite as this tutorial explains. Then, install Poppler for Windows, which you can download here.

Finally, you can install text2story using pip. If pip does not recognize the Graphviz installation, you can use the following command (tested with pip == 21.1.1):

pip install text2story --global-option=build_ext --global-option="-IC:\Program Files\Graphviz\include" --global-option="-LC:\Program Files\Graphviz\lib"

For newer versions of pip (tested with pip == 23.1.2), you can type the following command:

pip install --use-pep517  --config-setting="--global-option=build_ext"  --config-setting="--global-option=-IC:\Program Files\Graphviz\include" --config-setting="--global-option=-LC:\Program Files\Graphviz\lib"

5. The Web App

To run the web app, start the backend and then the Streamlit front end:

```sh
python backend.py
streamlit run main.py
```

and a page will open in your browser!
