
Open Source Network Embedding Evaluation toolkit

Project description

EvalNE: A Python library for evaluating Network Embedding methods on Link Prediction


This repository provides the source code for EvalNE, an open-source Python library designed for assessing and comparing the performance of Network Embedding (NE) methods on Link Prediction (LP) tasks. The library aims to simplify this complex and time-consuming evaluation process by automating and abstracting tasks such as hyper-parameter tuning, selection of train and test edges, negative sampling, and selection of the scoring function.

The library can be used both as a command-line tool and as an API. In its current version, EvalNE can evaluate unweighted, directed and undirected simple networks.

The library is maintained by Alexandru Mara (alexandru.mara(at)ugent.be). The full documentation of EvalNE is hosted by Read the Docs and can be found here.

For Methodologists

A command line interface in combination with a configuration file allows the user to evaluate any publicly available implementation of a NE method without the need to write additional code. These implementations can be obtained from libraries such as OpenNE or GEM as well as directly from the web pages of the authors e.g. Deepwalk, Node2vec, LINE, PRUNE, Metapath2vec, CNE.

EvalNE also includes the following LP heuristics for both directed and undirected networks (using in- and out-node neighbourhoods), which can be used as baselines (a standalone sketch of one of them follows the list):

  • Random Prediction
  • Common Neighbours
  • Jaccard Coefficient
  • Adamic Adar Index
  • Preferential Attachment
  • Resource Allocation Index
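For illustration, here is a minimal standalone sketch of how one of these heuristics, the Jaccard coefficient, can be computed with networkx for a list of candidate edges. This is not EvalNE's internal implementation (the function name is purely illustrative); for directed graphs the neighbourhood sets would be replaced by in- or out-neighbourhoods.

import networkx as nx

def jaccard_scores(G, candidate_edges):
    # Score each candidate edge (u, v) as |N(u) & N(v)| / |N(u) | N(v)|.
    # Illustrative only; EvalNE ships its own heuristic implementations.
    scores = []
    for u, v in candidate_edges:
        nu, nv = set(G.neighbors(u)), set(G.neighbors(v))
        union = nu | nv
        scores.append(len(nu & nv) / float(len(union)) if union else 0.0)
    return scores

G = nx.karate_club_graph()
print(jaccard_scores(G, [(0, 1), (0, 33)]))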

For Practitioners

When used as an API, EvalNE provides functions to:

  • Load and preprocess graphs
  • Obtain general graph statistics
  • Compute train/test/validation splits
  • Generate false edges
  • Evaluate link prediction from:
    • Node Embeddings
    • Edge Embeddings
    • Similarity scores (e.g. the ones given by LP heuristics)
  • Compute edge embeddings from node feature vectors (a sketch of these operators follows this list) using:
    • Average
    • Hadamard
    • Weighted L1
    • Weighted L2
  • Use any sklearn binary classifier as the LP algorithm
  • Compute several accuracy metrics
  • Tune method hyper-parameters
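As a rough sketch of what the edge embedding operators compute (following the standard element-wise definitions rather than EvalNE's own code; the function below is illustrative, not part of the EvalNE API), given the embeddings of the two endpoint nodes the edge representation is built as follows:

import numpy as np

def edge_embedding(u, v, method='hadamard'):
    # Combine two node embedding vectors into a single edge embedding.
    # Illustrative re-implementation of the four standard operators.
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    if method == 'average':
        return (u + v) / 2.0
    if method == 'hadamard':
        return u * v
    if method == 'weighted_l1':
        return np.abs(u - v)
    if method == 'weighted_l2':
        return (u - v) ** 2
    raise ValueError('Unknown edge embedding method: %s' % method)

print(edge_embedding([1.0, 2.0], [3.0, 4.0], method='hadamard'))  # [3. 8.]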

Installation

The library has been tested on Python 2.7 and Python 3.6.

EvalNE depends on the following packages:

  • Numpy
  • Scipy
  • Sklearn
  • Matplotlib
  • Networkx 2.2

Before installing EvalNE, make sure that the pip and python-tk packages are installed on your system. This can be done by running:

# Python 2
sudo apt-get install python-pip
sudo apt-get install python-tk

# Python 3
sudo apt-get install python3-pip
sudo apt-get install python3-tk

Option 1: Install the library using pip:

# Python 2
pip install evalne

# Python 3
pip3 install evalne

Option 2: Clone the code and install it:

  • Clone the EvalNE repository:

    git clone https://github.com/Dru-Mara/EvalNE.git
    cd EvalNE
    
  • Install the library dependencies and then install EvalNE:

    # Python 2
    pip install -r requirements.txt
    sudo python setup.py install
    
    # Python 3
    pip3 install -r requirements.txt
    sudo python3 setup.py install
    

Check the installation by running simple_example.py or functions_example.py, e.g.:

# Python 2
cd examples/
python simple_example.py

# Python 3
cd examples/
python3 simple_example.py

NOTE: In order to run the evaluator_example.py script, the OpenNE library, PRUNE and Metapath2Vec are required. The instructions for installing them are available here, here, and here, respectively. The instructions on how to run evaluations using .ini files are provided in the next section.

Usage

As a command line tool

The library takes as input an .ini configuration file. This file allows the user to specify the evaluation settings, from the methods and baselines to be evaluated, to the edge embedding methods, the parameters to tune, and the scores to report.

An example conf.ini file is provided describing the available options for each parameter. This file can be either modified to simulate different evaluation settings or used as a template to generate other .ini files.
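As a rough orientation, a fragment of such a configuration file might look as follows. Only option names that appear elsewhere on this page are shown; the section headers, values and overall layout are placeholders rather than a complete working configuration, so the provided conf.ini remains the authoritative template.

; Illustrative fragment only: section names and values are placeholders.
[NETWORKS]
INPATHS = ./data/network1.edgelist ./data/network2.edgelist
OUTPATHS = ./output/network1/ ./output/network2/

[EDGESPLIT]
TRAINTEST_PATH = trainE_testE.txt

[METHODS]
METHODS_OPNE = node2vec deepwalk
METHODS_OTHER = ...

[REPORT]
SCORES = %(maximize)
MAXIMIZE = auroc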

Additional configuration (.ini) files are provided replicating the experimental sections of different papers in the NE literature. These can be found in different folders under examples/. One such configuration file is examples/node2vec/conf_node2vec.ini. This file simulates the link prediction experiments of the paper "node2vec: Scalable Feature Learning for Networks" by A. Grover and J. Leskovec.

Once the configuration is set, the evaluation can be run as indicated in the next subsection.

Running the conf examples

In order to run the evaluations using the provided conf.ini or any other .ini file, the following steps are necessary:

  1. Download/Install the methods you want to test:

  2. Download the datasets used in the examples:

  3. Set the correct dataset paths in the INPATHS option of the corresponding .ini file, and the correct method paths under the METHODS_OPNE and/or METHODS_OTHER options.

  4. Run the evaluation:

    # For conf.ini run:
    python evalne ./examples/conf.ini
    
    # For conf_node2vec.ini run:
    python evalne ./examples/node2vec/conf_node2vec.ini
    

Note: The input networks for EvalNE are required to be in edgelist format.
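That is, a plain-text file with one edge per line, where each line contains a source and a target node identifier separated by a delimiter (whitespace in this toy example):

# toy.edgelist: one (source, target) pair per line
0 1
0 2
1 2
2 3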

As an API

The library can be imported and used like any other Python module. Next we present a very basic example; for more complete ones we refer the user to the examples/ folder.

from evalne.evaluation import evaluator
from evalne.preprocessing import preprocess as pp

# Load and preprocess the network
G = pp.load_graph('../evalne/tests/data/network.edgelist')
G, _ = pp.prep_graph(G)

# Create an evaluator and generate train/test edge split
nee = evaluator.Evaluator()
_ = nee.traintest_split.compute_splits(G)

# Set the baselines
methods = ['random_prediction', 'common_neighbours', 'jaccard_coefficient']

# Evaluate baselines
nee.evaluate_baseline(methods=methods)

try:
    # Check if OpenNE is installed
    import openne

    # Set embedding methods from OpenNE
    methods = ['node2vec', 'deepwalk', 'GraRep']
    commands = [
        'python -m openne --method node2vec --graph-format edgelist --p 1 --q 1',
        'python -m openne --method deepWalk --graph-format edgelist --number-walks 40',
        'python -m openne --method grarep --graph-format edgelist --epochs 10']
    edge_emb = ['average', 'hadamard']

    # Evaluate embedding methods
    for i in range(len(methods)):
        command = commands[i] + " --input {} --output {} --representation-size {}"
        nee.evaluate_cmd(method_name=methods[i], method_type='ne', command=command,
                         edge_embedding_methods=edge_emb, input_delim=' ', output_delim=' ')

except ImportError:
    print("The OpenNE library is not installed. Reporting results only for the baselines...")
    pass

# Get output
results = nee.get_results()
for result in results:
    result.pretty_print()

Output

The library can provide two types of output, depending on the value of the SCORES option of the configuration file. If the keyword all is specified, the library will generate a file named eval_output.txt containing, for each method and network analysed, all the available metrics (auroc, precision, f-score, etc.). If more than one experiment repeat is requested, the values reported will be the averages over all repeats. The output file will be located in the same path from which the evaluation was run.

Setting the SCORES option to %(maximize) will generate a similar output file as before. The content of this file, however, will be a table (Alg. x Networks) containing exclusively the score specified in the MAXIMIZE option for each combination of method and network averaged over all experiment repeats.

Additionally, if the option TRAINTEST_PATH contains a valid filename, EvalNE will create a file with that name under each of the OUTPATHS provided. In each of these paths the library will store the true and false train and test edge sets.

NOTE: The tabular output is not available for mixes of directed and undirected networks.

Citation

If you have found EvalNE useful in your research, please cite our arXiv paper:

    @misc{Mara2019,
      author = {Alexandru Mara and Jefrey Lijffijt and Tijl De Bie},
      title = {EvalNE: A Framework for Evaluating Network Embeddings on Link Prediction},
      year = {2019},
      archivePrefix = {arXiv},
      eprint = {1901.09691}
    }



Download files


Source Distribution

evalne-0.2.2.tar.gz (36.4 kB)


Built Distribution

evalne-0.2.2-py2-none-any.whl (43.7 kB)


File details

Details for the file evalne-0.2.2.tar.gz.

File metadata

  • Download URL: evalne-0.2.2.tar.gz
  • Upload date:
  • Size: 36.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.4.2 requests/2.20.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.26.0 CPython/2.7.15

File hashes

Hashes for evalne-0.2.2.tar.gz

  • SHA256: 26e14679bcaa1f01b72a0d31bdc99467b8433c691c87ee8d4d4c15ae416d52f8
  • MD5: 4f126e4a4992e402b04ac43682458946
  • BLAKE2b-256: 306e5c88a22a56984579e14e8d824f0636dad0e45841d38a4dd336ba379e14e1


File details

Details for the file evalne-0.2.2-py2-none-any.whl.

File metadata

  • Download URL: evalne-0.2.2-py2-none-any.whl
  • Upload date:
  • Size: 43.7 kB
  • Tags: Python 2
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.4.2 requests/2.20.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.26.0 CPython/2.7.15

File hashes

Hashes for evalne-0.2.2-py2-none-any.whl

  • SHA256: 6be0e4126d98869f787cb626c57300618d19cc64e6516f686286d73ce8b4cdf3
  • MD5: e787ea266aaabe388ec9ca74c4647656
  • BLAKE2b-256: ecc0e8d543f156d067cb893f3e57067304aa44dc26bf32e27ceb39a8667be8df

