
NEExT: Network Embedding Experimentation Toolkit

NEExT is a powerful Python framework for graph analysis, embedding computation, and machine learning on graph-structured data. It provides a unified interface for working with different graph backends (NetworkX and iGraph), computing node features, generating graph embeddings, and training machine learning models.

📚 Documentation

Detailed documentation is available in the docs directory. Build it locally or visit the online documentation at NEExT Documentation.

🌟 Features

  • Flexible Graph Handling

    • Support for both NetworkX and iGraph backends
    • Automatic graph reindexing and largest component filtering
    • Node sampling capabilities for large graphs
    • Rich attribute support for nodes and edges
  • Comprehensive Node Features

    • PageRank
    • Degree Centrality
    • Closeness Centrality
    • Betweenness Centrality
    • Eigenvector Centrality
    • Clustering Coefficient
    • Local Efficiency
    • LSME (Local Structural Motif Embeddings)
  • Graph Embeddings

    • Approximate Wasserstein
    • Exact Wasserstein
    • Sinkhorn Vectorizer
    • Customizable embedding dimensions
  • Machine Learning Integration

    • Classification and regression support
    • Dataset balancing options
    • Cross-validation with customizable splits
    • Feature importance analysis

Custom Node Feature Functions

NEExT allows you to define and compute your own custom node feature functions alongside the built-in ones, making it easy to experiment with novel graph metrics.

Defining a Custom Feature Function:

Your custom feature function must adhere to the following structure:

  1. Input: It must accept a single argument, which will be a graph object. This object provides access to the graph's structure (nodes, edges) and properties (e.g., graph.nodes, graph.graph_id, graph.G which is the underlying NetworkX or iGraph object).
  2. Output: It must return a pandas.DataFrame with the following specific columns in order:
    • "node_id": Identifiers for the nodes for which features are computed.
    • "graph_id": The identifier of the graph to which these nodes belong.
    • One or more feature columns: These columns contain the computed feature values. When a feature has multiple components or is expanded over hops, name the columns your_feature_name_0, your_feature_name_1, and so on; a single column named your_feature_name is also acceptable.

Example:

Here's how you can define a simple custom feature function and use it:

import networkx as nx
import pandas as pd

# 1. Define your custom feature function
# Works from scripts, modules, and Jupyter notebook cells — the function is
# shipped to workers via cloudpickle, so top-level notebook definitions are fine.
# Avoid closing over unpicklable objects (open file handles, live DB connections, etc.).
def my_node_degree_squared(graph):
    nodes = list(graph.nodes) # or range(graph.G.vcount()) for igraph if nodes are 0-indexed
    graph_id = graph.graph_id
    
    if hasattr(graph.G, 'degree'): # Handles both NetworkX and iGraph
        if isinstance(graph.G, nx.Graph): # NetworkX
            degrees = [graph.G.degree(n) for n in nodes]
        else: # iGraph
            degrees = graph.G.degree(nodes)
    else:
        raise TypeError("Graph object does not have a degree method.")
        
    degree_squared_values = [d**2 for d in degrees]
    
    df = pd.DataFrame({
        'node_id': nodes,
        'graph_id': graph_id,
        'degree_sq_0': degree_squared_values
    })
    # Ensure the correct column order
    return df[['node_id', 'graph_id', 'degree_sq_0']]

# 2. Prepare the list of custom feature methods
my_feature_methods = [
    {"feature_name": "my_degree_squared", "feature_function": my_node_degree_squared}
]

# 3. Pass it to compute_node_features
# Initialize NEExT and load your graph_collection as shown in the Quick Start
# nxt = NEExT()
# graph_collection = nxt.read_from_csv(...)

features = nxt.compute_node_features(
    graph_collection=graph_collection,
    feature_list=["page_rank", "my_degree_squared"], # Include your custom feature name
    feature_vector_length=3, # Applies to built-in features that use it
    my_feature_methods=my_feature_methods
)

print(features.features_df.head())

When you include "my_degree_squared" in feature_list and provide my_feature_methods, NEExT automatically registers and computes your custom function. If feature_list contains "all", registered custom functions are included in the computation as well.
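Before registering a custom function, it can help to sanity-check that its output follows the column contract described above. The snippet below is a standalone sketch that does not use NEExT's graph object: degree_squared_from_edges and check_feature_contract are hypothetical helpers built on a plain edge list, shown only to illustrate the expected DataFrame shape.

```python
from collections import Counter

import pandas as pd


def degree_squared_from_edges(edges, graph_id):
    # Count endpoint occurrences in an undirected edge list to get node degrees.
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    nodes = sorted(deg)
    return pd.DataFrame({
        "node_id": nodes,
        "graph_id": graph_id,
        "degree_sq_0": [deg[n] ** 2 for n in nodes],
    })


def check_feature_contract(df):
    # Columns must start with node_id and graph_id, followed by >= 1 feature column.
    assert list(df.columns[:2]) == ["node_id", "graph_id"], "wrong leading columns"
    assert df.shape[1] >= 3, "no feature columns"
    assert df["node_id"].is_unique, "duplicate node ids"
    return True


# Path graph 0-1-2-3: degrees are 1, 2, 2, 1
df = degree_squared_from_edges([(0, 1), (1, 2), (2, 3)], graph_id=0)
print(check_feature_contract(df))   # True
print(df["degree_sq_0"].tolist())   # [1, 4, 4, 1]
```

The same contract check can be applied to the output of any custom function before passing it to compute_node_features.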

Parallel execution controls:

By default, compute_node_features() uses n_jobs=-1 and parallel_backend="loky" for parallel execution. The loky process backend is the safer default for custom functions defined in Jupyter notebooks, because joblib serializes them with cloudpickle; the trade-off is that it can spend substantial time serializing large graph objects and feature functions.

For serialization-heavy workloads, try parallel_backend="threading". Threads avoid sending graph objects to worker processes, which can be faster when pickling dominates runtime, but GIL-bound Python code may still scale poorly. Benchmark expensive custom features with n_jobs=1, 2, 4, and -1 before choosing production settings.

Advanced joblib options can be passed through with joblib_kwargs, for example joblib_kwargs={"idle_worker_timeout": 120} for the loky backend or joblib_kwargs={"timeout": 300}. These are tuning controls for scheduling and worker behavior, not guaranteed fixes for memory pressure.
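To get a feel for why serialization can dominate, you can time a pickle round-trip of a graph-sized payload, which approximates what a process backend pays per task. This is a standalone illustration, not NEExT code; the adjacency dict here is a stand-in for whatever graph object the backend would ship to workers.

```python
import pickle
import time

# Toy graph as an adjacency dict: node -> list of neighbors (a ring graph).
# A process backend (like loky) must pickle a payload like this for each task;
# a threading backend shares it in-process and skips this cost entirely.
n = 20_000
adjacency = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

start = time.perf_counter()
payload = pickle.dumps(adjacency)
restored = pickle.loads(payload)
elapsed = time.perf_counter() - start

print(f"pickle round-trip: {elapsed * 1000:.1f} ms for {len(payload)} bytes")
print(restored == adjacency)  # True: the round-trip is lossless
```

If the round-trip time for your real graph objects rivals the feature computation itself, a threading backend or fewer, longer tasks may win.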

📦 Installation

Basic Installation

pip install NEExT

Development Installation

# Clone the repository
git clone https://github.com/ashdehghan/NEExT.git
cd NEExT

# Install with development dependencies
pip install -e ".[dev]"

Additional Components

# For running tests
pip install -e ".[test]"

# For building documentation
pip install -e ".[docs]"

# For running experiments
pip install -e ".[experiments]"

# Install all components
pip install -e ".[dev,test,docs,experiments]"

🚀 Quick Start

Basic Usage

from NEExT import NEExT

# Initialize the framework
nxt = NEExT()
nxt.set_log_level("INFO")

# Load graph data
graph_collection = nxt.read_from_csv(
    edges_path="edges.csv",
    node_graph_mapping_path="node_graph_mapping.csv",
    graph_label_path="graph_labels.csv",
    reindex_nodes=True,
    filter_largest_component=True,
    graph_type="igraph"
)

# Compute node features
features = nxt.compute_node_features(
    graph_collection=graph_collection,
    feature_list=["all"],
    feature_vector_length=3
)

# Compute graph embeddings
embeddings = nxt.compute_graph_embeddings(
    graph_collection=graph_collection,
    features=features,
    embedding_algorithm="approx_wasserstein",
    embedding_dimension=3
)

# Train a classifier
model_results = nxt.train_ml_model(
    graph_collection=graph_collection,
    embeddings=embeddings,
    model_type="classifier",
    sample_size=50
)

Working with Large Graphs

NEExT supports node sampling for handling large graphs:

# Load graphs with 70% of nodes
graph_collection = nxt.read_from_csv(
    edges_path="edges.csv",
    node_graph_mapping_path="node_graph_mapping.csv",
    node_sample_rate=0.7  # Use 70% of nodes
)

Feature Importance Analysis

# Compute feature importance
importance_df = nxt.compute_feature_importance(
    graph_collection=graph_collection,
    features=features,
    feature_importance_algorithm="supervised_fast",
    embedding_algorithm="approx_wasserstein"
)

📊 Experiments

NEExT includes several pre-built experiments in the examples/experiments directory:

Node Sampling Experiment

Investigates the effect of node sampling on classifier accuracy:

cd examples/experiments
python node_sampling_experiments.py

📝 Input File Formats

edges.csv

src_node_id,dest_node_id
0,1
1,2
...

node_graph_mapping.csv

node_id,graph_id
0,1
1,1
2,2
...

graph_labels.csv

graph_id,graph_label
1,0
2,1
...
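These files can be generated (or validated) with pandas before calling read_from_csv. Below is a minimal sketch that writes a two-graph dataset in the formats above; the file names match the Quick Start, and the specific node and label values are illustrative.

```python
import pandas as pd

# edges.csv: one edge per row
edges = pd.DataFrame(
    {"src_node_id": [0, 1, 3], "dest_node_id": [1, 2, 4]}
)

# node_graph_mapping.csv: assigns each node to a graph
mapping = pd.DataFrame(
    {"node_id": [0, 1, 2, 3, 4], "graph_id": [1, 1, 1, 2, 2]}
)

# graph_labels.csv: one label per graph
labels = pd.DataFrame({"graph_id": [1, 2], "graph_label": [0, 1]})

edges.to_csv("edges.csv", index=False)
mapping.to_csv("node_graph_mapping.csv", index=False)
labels.to_csv("graph_labels.csv", index=False)

# Sanity check: every edge endpoint must appear in the node-graph mapping
endpoints = set(edges["src_node_id"]) | set(edges["dest_node_id"])
print(endpoints <= set(mapping["node_id"]))  # True
```

A failed endpoint check usually means the mapping file is missing nodes, which would surface later as a load error or a silently dropped edge.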

🛠️ Development

Running Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=NEExT

# Run specific test file
pytest tests/test_node_sampling.py

Building Documentation

cd docs
make html

Code Style

The project uses several tools for code quality:

# Format code
black .

# Sort imports
isort .

# Check style
flake8 .

# Type checking
mypy .

Publishing to PyPI

NEExT uses direct local publication through the root Makefile. GitHub Releases do not publish the package.

Before publishing:

  1. Update __version__ in NEExT/__init__.py. pyproject.toml and the docs derive the package version from that file.
  2. Commit all release changes on main.
  3. Create .env with PYPI_API_TOKEN=pypi-....
  4. Run the validation and publish flow:
make release-check
make deploy

make deploy verifies the working tree and token, builds with uv build, pushes main, creates and pushes the release_v_<version> tag, and publishes with uv publish.

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Run tests
  5. Submit a pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

👥 Authors

🙏 Acknowledgments

  • NetworkX team for the graph algorithms
  • iGraph team for the efficient graph operations
  • Scikit-learn team for machine learning components

📧 Contact

For questions and support:

🔄 Version History

  • 0.1.0
    • Initial release
    • Basic graph operations
    • Node feature computation
    • Graph embeddings
    • Machine learning integration
