
Mass-media text processing application for your Relation Extraction task, powered by AREkit.

Project description

ARElight 0.24.0

Open In Colab

:point_right: DEMO :point_left:

ARElight is an application that provides a granular view of sentiments between named entities mentioned in texts.

Installation

pip install git+https://github.com/nicolay-r/arelight@v0.24.0

Usage: Inference

Open In Colab

Infer sentiment attitudes from a text file in English:

python3 -m arelight.run.infer  \
    --sampling-framework "arekit" \
    --ner-framework "deeppavlov" \
    --ner-model-name "ner_ontonotes_bert" \
    --ner-types "ORG|PERSON|LOC|GPE" \
    --terms-per-context 50 \
    --sentence-parser "nltk:english" \
    --tokens-per-context 128 \
    --bert-framework "opennre" \
    --batch-size 10 \
    --pretrained-bert "bert-base-cased" \
    --bert-torch-checkpoint "ra4-rsr1_bert-base-cased_cls.pth.tar" \
    --backend "d3js_graphs" \
    --docs-limit 500 \
    -o "output" \
    --from-files "<PATH-TO-TEXT-FILE>"

NOTE: Applying ARElight to non-English texts

Parameters

The complete documentation is available via the -h flag:

python3 -m arelight.run.infer -h

Parameters:

  • sampling-framework -- only the arekit framework is considered by default.
    • from-files -- list of filepaths to the related documents.
      • for .csv files, each line of the specified column is treated as a separate document.
        • csv-sep -- separator between columns.
        • csv-column -- name of the column in CSV file.
    • collection-name -- name of the result files based on sampled documents.
    • terms-per-context -- total number of terms in a single sample.
    • sentence-parser -- parser used to split documents into sentences; see the list of [supported parsers].
    • synonyms-filepath -- text file with synonymous entries, grouped by lines ([example]; see also the sketch after the parameter list).
    • stemmer -- used for word lemmatization (optional); we support [PyMystem].
    • NER parameters:
      • ner-framework -- type of the NER framework.
      • ner-model-name -- model name within utilized NER framework.
      • ner-types -- list of types to be considered for annotation, separated by |.
    • docs-limit -- the total limit of documents for sampling.
    • Translation-specific parameters:
      • translate-framework -- text translation backend (optional); we support [googletrans].
      • translate-entity -- (optional) source and target language supported by backend, separated by :.
      • translate-text -- (optional) source and target language supported by backend, separated by :.
  • bert-framework -- samples classification framework; we support [OpenNRE].
    • text-b-type -- (optional) NLI or None [supported].
    • pretrained-bert -- pretrained state name.
    • batch-size -- number of samples per inference iteration.
    • tokens-per-context -- size of the input in tokens.
    • bert-torch-checkpoint -- fine-tuned state.
    • device-type -- cpu or gpu.
    • labels-fmt -- list of mappings from label to integer value; p:1,n:2,u:0 by default, where:
      • p -- positive label, which is mapped to 1.
      • n -- negative label, which is mapped to 2.
      • u -- undefined label (optional), which is mapped to 0.
  • backend -- type of the backend (d3js_graphs by default).
    • host -- port on which the localhost server is expected to be launched.
    • label-names -- default mapping is p:pos,n:neg,u:neu.
  • -o -- output folder for result collections and demo.

The framework parameters mentioned above, as well as their related setups, may be omitted.
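
For illustration, the synonyms file groups synonymous entries by lines, with one group of interchangeable names per line. A hypothetical sketch (the entity names and the comma separator are assumptions of this example; consult the linked [example] for the exact format):

United Kingdom, UK, Britain
Boris Johnson, Johnson
Rishi Sunak, Sunak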

To launch the D3JS graph builder and (optionally) start the DEMO server for collections in the output dir:

cd output && python -m http.server 8000

Finally, you may open the demo page at http://0.0.0.0:8000/


Layout of the files in output

output/
├── description/
│   └── ...         // graph descriptions in JSON.
├── force/
│   └── ...         // force graphs in JSON.
├── radial/
│   └── ...         // radial graphs in JSON.
└── index.html      // main HTML demo page.
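
If you want to post-process these collections programmatically, the JSON files can be read directly. A minimal Python sketch, assuming a collection named "my_collection" (set via --collection-name) and the common d3js force-layout convention of "nodes" and "links" arrays; inspect your own files, since the exact schema may differ:

import json

# Hypothetical example: read one force-layout graph produced by ARElight.
with open("output/force/my_collection.json") as f:
    graph = json.load(f)

# Assumption: d3js force layouts typically store "nodes" and "links" lists.
print("nodes:", len(graph.get("nodes", [])))
print("links:", len(graph.get("links", [])))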

Usage: Graph Operations

For graph analysis, you can perform several graph operations using this script:

  1. Arguments mode:
python3 -m arelight.run.operations \
	--operation "<OPERATION-NAME>" \
	--graph_a_file output/force/boris.json \
  	--graph_b_file output/force/rishi.json \
  	--weights y \
  	-o output \
  	--description "[OPERATION] between Boris Johnson and Rishi Sunak on X/Twitter"
  2. Interactive mode:
python3 -m arelight.run.operations

arelight.run.operations allows you to operate on ARElight's outputs as graphs: you can merge graphs or find their similarities and differences.

Parameters

  • --graph_a_file and --graph_b_file are used to specify the paths to the .json files for graphs A and B, which are used in the operations. These files should be located in the <your_output/force> folder.
  • --name -- name of the new graph.
  • --description -- description of the new graph.
  • --host -- determines the server port to host after the calculations.
  • -o -- option allows you to specify the path to the folder where you want to store the output. You can either create a new output folder or use an existing one that has been created by ARElight.

Parameter operation

Preparation

Suppose you used the ARElight script to infer relations from X/Twitter messages of the UK politicians Boris Johnson and Rishi Sunak:

python3 -m arelight.run.infer ...other arguments... \
	-o output --collection-name "boris" --from-files "twitter_boris.txt"
	
python3 -m arelight.run.infer  ...other arguments... \
	-o output --collection-name "rishi" --from-files "twitter_rishi.txt"

According to the results section, you will have an output directory with two force layout graph files:

output/
└── force/
    ├── rishi.json
    └── boris.json

List of Operations

You can use the following operations to combine several outputs and better understand the similarities and differences between them (a small Python sketch of the edge-weight arithmetic follows the list of operations):

UNION $(G_1 \cup G_2)$ - combine multiple graphs together.

  • The result graph contains all the vertices and edges that are in $G_1$ and $G_2$. The edge weight is given by $W_e = W_{e1} + W_{e2}$, and the vertex weight is its weighted degree centrality: $W_v = \sum_{e \in E_v} W_e(e)$.
    python3 -m arelight.run.operations --operation UNION \
        --graph_a_file output/force/boris.json \
        --graph_b_file output/force/rishi.json \
        --weights y -o output --name boris_UNION_rishi \
        --description "UNION of Boris Johnson and Rishi Sunak Twits"
    

INTERSECTION $(G_1 \cap G_2)$ - what is similar between 2 graphs?

  • The result graph contains only the vertices and edges common to $G_1$ and $G_2$. The edge weight is given by $W_e = \min(W_{e1},W_{e2})$, and the vertex weight is its weighted degree centrality: $W_v = \sum_{e \in E_v} W_e(e)$.
    python3 -m arelight.run.operations --operation INTERSECTION \
        --graph_a_file output/force/boris.json \
        --graph_b_file output/force/rishi.json \
        --weights y -o output --name boris_INTERSECTION_rishi \
        --description "INTERSECTION between Twits of Boris Johnson and Rishi Sunak"
    

DIFFERENCE $(G_1 - G_2)$ - what is unique to one graph that the other graph doesn't have?

  • NOTE: this operation is not commutative: $(G_1 - G_2) \neq (G_2 - G_1)$.
  • The result graph contains all the vertices from $G_1$ but only includes edges from $E_1$ that either don't appear in $E_2$ or have larger weights in $G_1$ than in $G_2$. The edge weight is given by $W_e = W_{e1}$ if $e \in E_1 \setminus E_2$, and $W_e = W_{e1} - W_{e2}$ if $e \in E_1 \cap E_2$ and $W_{e1}(e) > W_{e2}(e)$.
    python3 -m arelight.run.operations --operation DIFFERENCE \
        --graph_a_file output/force/boris.json \
        --graph_b_file output/force/rishi.json \
        --weights y -o output --name boris_DIFFERENCE_rishi \
        --description "Difference between Twits of Boris Johnson and Rishi Sunak"
    
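
To make the edge-weight arithmetic above concrete, below is a minimal Python sketch over plain {edge: weight} dictionaries. It is an illustration of the formulas rather than ARElight's implementation; the binarize helper mimics the --weights n behavior described in the next section, and the sample edges are made up.

# Sketch of the edge-weight arithmetic for UNION, INTERSECTION and DIFFERENCE.
# Graphs are reduced to {edge: weight} dictionaries for illustration only.

def union(e1, e2):
    # W_e = W_e1 + W_e2 over all edges found in either graph.
    return {e: e1.get(e, 0) + e2.get(e, 0) for e in set(e1) | set(e2)}

def intersection(e1, e2):
    # W_e = min(W_e1, W_e2) over edges common to both graphs.
    return {e: min(e1[e], e2[e]) for e in set(e1) & set(e2)}

def difference(e1, e2):
    # Keep edges of G1 that are absent from G2, plus common edges where
    # W_e1 > W_e2 (with weight W_e1 - W_e2). Not commutative.
    out = {e: w for e, w in e1.items() if e not in e2}
    out.update({e: e1[e] - e2[e] for e in set(e1) & set(e2) if e1[e] > e2[e]})
    return out

def vertex_weights(edges):
    # W_v = weighted degree centrality: sum of weights of incident edges.
    out = {}
    for (a, b), w in edges.items():
        out[a] = out.get(a, 0) + w
        out[b] = out.get(b, 0) + w
    return out

def binarize(edges):
    # Mimics --weights n: every input edge weight is set to 1.
    return {e: 1 for e in edges}

# Made-up example edges (frequencies of discovered relations).
boris = {("UK", "EU"): 3, ("UK", "Russia"): 1}
rishi = {("UK", "EU"): 1, ("UK", "USA"): 2}
print(union(boris, rishi))         # ('UK', 'EU') gets weight 3 + 1 = 4
print(intersection(boris, rishi))  # only ('UK', 'EU'), with weight min(3, 1) = 1
print(difference(boris, rishi))    # ('UK', 'Russia'): 1 and ('UK', 'EU'): 3 - 1 = 2
print(vertex_weights(union(boris, rishi)))  # e.g. 'UK' gets 4 + 1 + 2 = 7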

Parameter weights

You have the option to specify whether to include edge weights in calculations or not. These weights represent the frequencies of discovered edges, indicating how often a relation between two instances was found in the text analyzed by ARElight.

  • --weights
    • y: the result will be based on the union, intersection, or difference of these frequencies.
    • n: all weights of input graphs will be set to 1. In this case, the result will reflect the union, intersection, or difference of the graph topologies, regardless of the frequencies. This can be useful when the existence of relations is more important to you, and the number of times they appear in the text is not a significant factor.

    Note that using or not using the weights option may yield different topologies:


Powered by

How to cite

Our main and personal interest is to help you better explore and analyze attitude and relation extraction tasks with ARElight. Great research is also accompanied by a faithful reference. If you use or extend our work, please cite as follows:

@inproceedings{rusnachenko2024arelight,
  title={ARElight: Context Sampling of Large Texts for Deep Learning Relation Extraction},
  author={Rusnachenko, Nicolay and Liang, Huizhi and Kolomeets, Maxim and Shi, Lei},
  booktitle={European Conference on Information Retrieval},
  year={2024},
  organization={Springer}
}
