
An Extendable Evaluation Pipeline for Named Entity Drill-Down Analysis

Project description

Orbis quickstart

Orbis is a versatile framework for performing Named Entity Linking (NEL) evaluation analyses. It supports standard metrics such as precision, recall and F1-score and visualizes gold standard and annotator results in the context of the annotated document. Color coding the entities allows experts to quickly identify correct and incorrect annotations and the corresponding links to the knowledge base (KB), which are also provided by Orbis. Thanks to the modular pipeline architecture used by Orbis, the different stages of the evaluation process can easily be modified, replaced or extended.

Results of our first Orbis based drill-down analysis efforts were presented at the SEMANTiCS 2018 Conference in Vienna (Odoni, Kuntschik, Braşoveanu, & Weichselbraun, 2018).

Prerequisites

To be able to develop and run Orbis you will need the following installed and configured on your system:

  • Python 3.7
  • Python Setup Tools
  • A Linux or Mac OS (Windows is untested)

Install

To use Orbis, download and install it from PyPI:

    $ python3 -m pip install -U orbis-eval['all'] --user

More extras options are available, but we recommend using the all option. Only use the other options if you know exactly what you are doing.

    - all: Install all extras for Orbis (recommended).
    - all_plugins: Install all plugins for Orbis.
    - all_addons: Install all addons for Orbis.
    - aggregation: Install only the aggregation plugins.
    - evaluation: Install only the evaluation plugins.
    - metrics: Install only the metrics plugins.
    - scoring: Install only the scoring plugins.
    - storage: Install only the storage plugins.
    - "plugin or addon name": Install only the specified plugin or addon.

Alternatively, Orbis can be installed by cloning the repository and installing it manually. Plugins and addons must then be installed separately.

    $ git clone https://github.com/orbis-eval/Orbis.git
    $ cd Orbis
    $ python3 setup.py install --user
    # or
    $ python setup.py install --user

Depending on your system and whether you have both Python 2 and Python 3 installed, you may need to use python3 (as on Ubuntu) or just python.
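
If you are unsure which command points to Python 3 on your system, checking the versions first helps:

    $ python3 --version
    $ python --version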

Test run

To get a first impression of Orbis and to set up the user folder, run orbis-eval -t. You will be asked to set an Orbis user folder. This folder will contain the evaluation run queue, the logs, the corpora and monocle data, the output and the documentation. The default location is ~/orbis-eval in the user's home folder; an alternative location can be specified.

Running orbis-eval -t will run the test files located in ~/orbis-eval/queue/tests. These test configs are short evaluation runs for different annotators (AIDA, Babelfy, Recognyze and Spotlight). You can take one of these YAML files as a template, copy it to the folder ~/orbis-eval/queue/activated and modify it to your needs.
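
For example, reusing one of the test configs could look like this (the file name below is only illustrative; pick whichever YAML file exists in your queue/tests folder):

    $ cp ~/orbis-eval/queue/tests/aida.yaml ~/orbis-eval/queue/activated/
    $ # then edit the copied file to match your annotator and corpus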

The HTML results of the test runs can be found in your Orbis user folder, e.g. ~/orbis-eval/output/html_pages

Orbis Addons

To run an Orbis addon, Orbis provides a CLI that can be accessed by running orbis-addons or orbis-eval --run-addon. The menu will guide you to the addons, and most addons provide their own menu.

Run

After installation, Orbis can be executed by running orbis-eval. The Orbis help can be shown with -h (orbis-eval -h). Running orbis-eval executes all YAML config files in the folder ~/orbis-eval/queue/activated. Before you can run an evaluation, please install the corpus you are referencing in the YAML config files using the repoman addon (orbis-addons).
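
A typical sequence could therefore look like this (the repoman addon is selected from the interactive addon menu):

    $ orbis-addons      # select repoman to install the required corpus
    $ orbis-eval        # run all configs in ~/orbis-eval/queue/activated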

Configure evaluation runs

Orbis uses yaml files to configure the evaluation runs. These config files are located in the queue folder in the Orbis user directory ~/orbis-eval/queue/activated.

A YAML configuration file is divided into the stages of the pipeline:

  aggregation:
    service:
      name: aida
      location: web
    input:
      data_set:
        name: rss1
      lenses:
        - 3.5-entity_list_en.txt-14dec-0130pm
      mappings:
        - redirects-v2.json-15dec-1121am
      filters:
        - us_states_list_en-txt-12_jan_28-0913am

  evaluation:
    name: binary_classification_evaluation

  scoring:
    name: nel_scorer
    condition: overlap
    entities:
      - Person
      - Organization
      - Place
    ignore_empty: False

  metrics:
    name: binary_classification_metrics

  storage:
    - cache_webservice_results

Aggregation

The aggregation stage of Orbis collects all the data needed for an evaluation run. This includes loading the corpus, querying the annotator, and gathering the mappings, lenses and filters used by monocle. The aggregation settings specify which service and dataset and which lenses, mappings and filters should be used.

    aggregation:
      service:
        name: aida
        location: web
      input:
        data_set:
          name: rss1
        lenses:
          - 3.5-entity_list_en.txt-14dec-0130pm
        mappings:
          - redirects-v2.json-15dec-1121am
        filters:
          - us_states_list_en-txt-12_jan_28-0913am

The service section of the YAML config specifies the name of the web service (annotation service). The name must be written exactly like the corresponding webservice plugin, minus the orbis_plugin_aggregation_ prefix.

Location specifies where the annotations should come from. If it is set to web, the aggregation plugin will attempt to query the webservice. If location is set to local, the local cache (located in ~/orbis-eval/data/corpora/{corpus_name}/computed/{annotator_name}/) will be used, assuming there is a cache. If there is no cache, run the evaluation in web mode and add - cache_webservice_results to the storage section to build one.

    aggregation:
      service:
        name: aida
        location: web
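
Once a cache has been built, later runs can be pointed at it by switching the location (a minimal sketch based on the description above):

    aggregation:
      service:
        name: aida
        location: local   # reuse the cached annotations instead of querying the webservice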

The input section defines which corpus should be used (in the example rss1). The corpus name should be written exactly like the corpus folder located in ~/orbis-eval/data/corpora/. From there, Orbis will automatically locate the corpus texts and the gold standard.

    input:
      data_set:
        name: rss1
      lenses:
        - 3.5-entity_list_en.txt-14dec-0130pm
      mappings:
        - redirects-v2.json-15dec-1121am
      filters:
        - us_states_list_en-txt-12_jan_28-0913am

If needed, lenses, mappings and filters can also be specified in the input section. The corresponding files should be located in ~/orbis-eval/data/[filters|lenses|mappings] and should be referenced in the config without the file extension.
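
As a hedged illustration of the resulting layout (the file extensions shown here are assumptions; only the names without extensions go into the config), the data folder from the example above might contain:

    ~/orbis-eval/data/corpora/rss1/
    ~/orbis-eval/data/lenses/3.5-entity_list_en.txt-14dec-0130pm.txt
    ~/orbis-eval/data/mappings/redirects-v2.json-15dec-1121am.json
    ~/orbis-eval/data/filters/us_states_list_en-txt-12_jan_28-0913am.txt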

Evaluation

The evaluation stage evaluates the annotator results against the gold standard. The evaluation section defines which kind of evaluation should be used. The evaluator should have the same name as the evaluation plugin minus the orbis_plugin_evaluation_ prefix.

    evaluation:
      name: binary_classification_evaluation

Scoring

The scoring stage scores the evaluation according to the specified conditions. These conditions are preset in the scorer and can be selected in the scoring section, together with the entity types that should be scored. If no entity type is defined, all types are scored; if one or more entity types are defined, only those are scored. Additionally, ignore_empty defines whether the scorer should ignore empty annotation results. The scorer should have the same name as the scoring plugin minus the orbis_plugin_scoring_ prefix.

    scoring:
      name: nel_scorer
      condition: overlap
      entities:
        - Person
        - Organization
        - Place
      ignore_empty: False

Currently available conditions are:

  - simple:
    - same url
    - same entity type
    - same surface form

  - strict:
    - same url
    - same entity type
    - same surface form
    - same start
    - same end

  - overlap:
    - same url
    - same entity type
    - overlap
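
For example, to apply the strict condition and score all entity types, the entities list can simply be omitted (a minimal sketch based on the rules above):

    scoring:
      name: nel_scorer
      condition: strict
      ignore_empty: False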

Metrics

The metrics stage calculates the metrics to analyze the evaluation. The metric should have the same name as the metrics plugin minus the orbis_plugin_metrics_ prefix.

    metrics:
      name: binary_classification_metrics

Storage

The storage stage defines what kind of output Orbis should create. As always, the entries should have the same name as the corresponding storage plugin minus the orbis_plugin_storage_ prefix.

    storage:
      - cache_webservice_results
      - csv_result_list
      - html_pages

Multiple storage options can be chosen; the ones in the example above are the recommended (and currently working) options.

Orbis addons can also be called directly by appending the addon name to the orbis-addon command: orbis-addon repoman

Datasets

For NER/NEL tasks, evaluation datasets need to be in the NIF format. If this is not the case, feel free to use the following converter packages:

Local Development (Pycharm)

  1. Create a new project folder.
    mkdir Orbis
    
  2. Clone orbis-eval into the newly created folder.
    cd Orbis
    git clone https://github.com/orbis-eval/orbis_eval.git
    
  3. Open orbis-eval as a new project in Pycharm (File->Open).


  4. Execute the script clone_plugins.sh
    cd orbis_eval
    ./clone_plugins.sh
    
  5. Attach all downloaded plugins/addons to the project in Pycharm (File->Open).


  6. For every additional plugin/addon in your orbis-eval run-configuration file:
    • Clone the repository into the folder created in step 1.
    • Attach the project to your orbis-eval project.
  7. In Pycharm, go to File->Settings->Project Dependencies. Select all plugins/addons as dependencies of orbis-eval. For every plugin/addon, select orbis-eval as a dependency.


  8. In Pycharm, go to File->Settings->Project Interpreter. Create a new Python interpreter within the project folder created in step 1 (you can use an existing interpreter as well). Make the interpreter available for all projects. Verify that all projects use this newly created interpreter.


  9. Install all dependencies of orbis-eval and of the additional plugins/addons (each project contains a requirements.txt listing its dependencies).


  10. Add a Pycharm run configuration pointing to the main file of orbis-eval (orbis-eval/orbis-eval/main.py).
  11. If necessary, run repoman to create your gold documents (Pycharm run configuration: orbis-eval/orbis-eval/interfaces/addons/main.py). Note: If you don't have any gold documents yet, you can also use the sample files located in queue/tests to check that your installation is working (use the run configuration created in step 10 with -t as parameter).


  12. Run orbis-eval with the run configuration created in step 10. Note: The first execution will create an orbis-eval folder at the location of your choice. This folder contains all files needed to run an evaluation. Within orbis-eval/queue/, create a folder named "activated" and place your configuration file in it.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

orbis_eval-2.3.5.tar.gz (105.7 kB)

Uploaded Source

Built Distribution

orbis_eval-2.3.5-py3-none-any.whl (124.7 kB)

Uploaded Python 3

File details

Details for the file orbis_eval-2.3.5.tar.gz.

File metadata

  • Download URL: orbis_eval-2.3.5.tar.gz
  • Upload date:
  • Size: 105.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.0.1 pkginfo/1.4.2 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.57.0 CPython/3.9.7

File hashes

Hashes for orbis_eval-2.3.5.tar.gz
  • SHA256: 466ca1e0f8bd6bf253326c88671e4c9c880a5175f21192fd8793f286c21fb2c6
  • MD5: 073b1d8855a2e814738dc85903b26c61
  • BLAKE2b-256: 847d3490800fea033336176de33208211f075038fa9129e1ecd99209349ea0c7


File details

Details for the file orbis_eval-2.3.5-py3-none-any.whl.

File metadata

  • Download URL: orbis_eval-2.3.5-py3-none-any.whl
  • Upload date:
  • Size: 124.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.0.1 pkginfo/1.4.2 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.57.0 CPython/3.9.7

File hashes

Hashes for orbis_eval-2.3.5-py3-none-any.whl
  • SHA256: 4ee56ada126bbd09e8067f5a4da8187374368617c73fd7651ba0cb13af731409
  • MD5: 1295ec753b66cfd5736106e62486d558
  • BLAKE2b-256: 0932003a0490a3a5d276bababb60ed0c046e769c782491cde92434e9f5413dc0

