Python Interface for the Karlsruhe Hypergraph Partitioning Framework (KaHyPar)
Project description


Table of Contents
 What is a Hypergraph? What is Hypergraph Partitioning?
 What is KaHyPar?
 Requirements
 Building KaHyPar
 Testing and Profiling
 Running KaHyPar
 Using the Library Interfaces
 Bug Reports
 Licensing
 Contributing
What is a Hypergraph? What is Hypergraph Partitioning?
Hypergraphs are a generalization of graphs, where each (hyper)edge (also called a net) can connect more than two vertices. The k-way hypergraph partitioning problem is the generalization of the well-known graph partitioning problem: partition the vertex set into k disjoint blocks of bounded size (at most 1 + ε times the average block size), while minimizing an objective function defined on the nets.
The two most prominent objective functions are the cut-net and the connectivity (or λ − 1) metrics. Cut-net is a straightforward generalization of the edge-cut objective in graph partitioning (i.e., minimizing the sum of the weights of those nets that connect more than one block). The connectivity metric additionally takes into account the actual number λ of blocks connected by a net. By summing the (λ − 1)-values of all nets, one accurately models the total communication volume of parallel sparse matrix-vector multiplication and once more gets a metric that reverts to edge-cut for plain graphs.
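Both objectives are easy to compute directly from a partition. The following sketch (plain Python, with function and variable names of our own choosing, not part of the KaHyPar API) evaluates the cut-net and (λ − 1) metrics for a small hypergraph:

```python
def cut_and_km1(hyperedges, partition, weights=None):
    """Compute the cut-net and (lambda - 1) objectives of a k-way partition.

    hyperedges: list of nets, each net a list of vertex ids
    partition:  partition[v] = block id of vertex v
    weights:    optional net weights (defaults to 1 for every net)
    """
    weights = weights or [1] * len(hyperedges)
    cut, km1 = 0, 0
    for net, w in zip(hyperedges, weights):
        lam = len({partition[v] for v in net})  # number of blocks connected by this net
        if lam > 1:
            cut += w              # net is cut: it connects more than one block
        km1 += w * (lam - 1)      # connectivity contribution; 0 for uncut nets
    return cut, km1

# hypergraph from the hMetis manual (4 nets, 7 vertices), bisected into two blocks
nets = [[0, 2], [0, 1, 3, 4], [3, 4, 6], [2, 5, 6]]
part = [0, 0, 0, 1, 1, 1, 1]
print(cut_and_km1(nets, part))  # → (2, 2)
```

For an ordinary graph (every net has exactly two pins), λ is either 1 or 2, so the two metrics coincide, as stated above.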
What is KaHyPar?
KaHyPar is a multilevel hypergraph partitioning framework for optimizing the cut and the (λ − 1) metric. It supports both recursive bisection and direct k-way partitioning. As a multilevel algorithm, it consists of three phases: In the coarsening phase, the hypergraph is coarsened to obtain a hierarchy of smaller hypergraphs. After applying an initial partitioning algorithm to the smallest hypergraph in the second phase, coarsening is undone and, at each level, a local search method is used to improve the partition induced by the coarser level. KaHyPar instantiates the multilevel approach in its most extreme version, removing only a single vertex in every level of the hierarchy. By using this very fine-grained n-level approach combined with strong local search heuristics, it computes solutions of very high quality. Its algorithms and detailed experimental results are presented in several research publications.
Additional Features

Hypergraph partitioning with variable block weights:
KaHyPar supports variable block weights. If the command line option `--use-individual-part-weights=true` is used, the partitioner tries to partition the hypergraph such that each block Vx has a weight of at most Bx, where Bx can be specified for each block individually using the command line parameter `--part-weights=B1 B2 B3 ... Bk-1`. Since the framework does not yet support perfectly balanced partitioning, the upper bounds need to be slightly larger than the total weight of all vertices of the hypergraph. Note that this feature is still experimental.
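As a sanity check, both the standard balance constraint and individual upper bounds can be verified against a computed partition. This is a plain-Python sketch with illustrative names (`node_weights`, `max_block_weights`), not part of the KaHyPar API:

```python
import math

def block_weights(partition, node_weights, k):
    """Sum up the vertex weights assigned to each of the k blocks."""
    w = [0] * k
    for v, b in enumerate(partition):
        w[b] += node_weights[v]
    return w

def is_balanced(partition, node_weights, k, eps):
    # standard constraint: every block weighs at most (1 + eps) * ceil(total / k)
    bound = (1 + eps) * math.ceil(sum(node_weights) / k)
    return all(w <= bound for w in block_weights(partition, node_weights, k))

def respects_individual_bounds(partition, node_weights, max_block_weights):
    # variable block weights: block i may weigh at most max_block_weights[i]
    k = len(max_block_weights)
    return all(w <= bw for w, bw in
               zip(block_weights(partition, node_weights, k), max_block_weights))

part = [0, 0, 1, 1]
weights = [1, 1, 1, 1]
print(is_balanced(part, weights, k=2, eps=0.03))          # → True
print(respects_individual_bounds(part, weights, [2, 3]))  # → True
```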
Hypergraph partitioning with fixed vertices:
Hypergraph partitioning with fixed vertices is a variation of standard hypergraph partitioning. In this problem, there is an additional constraint on the block assignment of some vertices, i.e., some vertices are preassigned to specific blocks prior to partitioning, with the condition that, after partitioning the remaining "free" vertices, the fixed vertices are still in the block that they were assigned to. The command line parameter `--fixed` / `-f` can be used to specify a fix file in hMetis fix file format. For a hypergraph with V vertices, the fix file consists of V lines, one for each vertex. The i-th line either contains `-1` to indicate that the vertex is free to move, or `<part id>` to indicate that this vertex should be preassigned to block `<part id>`. Note that part ids start from 0. KaHyPar currently supports three different contraction policies for partitioning with fixed vertices:
- `free_vertex_only` allows all contractions in which the contraction partner is a free vertex, i.e., it allows contractions of vertex pairs where either both vertices are free, or one vertex is fixed and the other vertex is free.
- `fixed_vertex_allowed` additionally allows contractions of two fixed vertices provided that both are preassigned to the same block. Based on preliminary experiments, this is currently the default policy.
- `equivalent_vertices` only allows contractions of vertex pairs that consist of either two free vertices or two fixed vertices preassigned to the same block.
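The fix file format described above is simple to generate programmatically. A minimal sketch (plain Python; `write_fix_file` is a hypothetical helper name, not part of KaHyPar) writes one line per vertex, with -1 for free vertices:

```python
def write_fix_file(path, num_vertices, fixed):
    """Write an hMetis-style fix file: line i holds the block id of vertex i,
    or -1 if the vertex is free to move. Part ids start from 0.

    fixed: dict mapping vertex id -> preassigned block id
    """
    with open(path, "w") as f:
        for v in range(num_vertices):
            f.write(f"{fixed.get(v, -1)}\n")

# pin vertex 0 to block 0 and vertex 6 to block 1; all other vertices stay free
write_fix_file("example.fix", 7, {0: 0, 6: 1})
print(open("example.fix").read().split())
# → ['0', '-1', '-1', '-1', '-1', '-1', '1']
```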

Evolutionary framework (KaHyPar-E):
KaHyPar-E enhances KaHyPar with an evolutionary framework as described in our GECCO'18 publication. Given a fairly large amount of running time, this memetic multilevel algorithm performs better than repeated executions of non-evolutionary KaHyPar configurations, hMetis, and PaToH. The command line parameter `--time-limit=xxx` can be used to set the maximum running time (in seconds). The parameter `--partition-evolutionary=true` enables evolutionary partitioning.
Improve existing partitions:
KaHyPar uses direct k-way V-cycles to try to improve an existing partition specified via the parameter `--part-file=</path/to/file>`. The maximum number of V-cycles can be controlled via the parameter `--vcycles=`.
Experimental Results
We use performance profiles to compare KaHyPar to other partitioning algorithms in terms of solution quality. For a set of algorithms A and a benchmark set containing n instances, the performance ratio r_{p,i} = cut_{p,i} / min_{p' ∈ A} cut_{p',i} relates the cut computed by partitioner p for instance i to the smallest minimum cut of all algorithms. The performance profile of algorithm p is then given by the function ρ_p(τ) = |{ i : r_{p,i} ≤ τ }| / n.
For connectivity optimization, the performance ratios are computed using the connectivity values instead of the cut values. The value of ρ_p(1) corresponds to the fraction of instances for which partitioner p computed the best solution, while ρ_p(τ) is the probability that a performance ratio r_{p,i} is within a factor of τ of the best possible ratio. Note that since performance profiles only allow to assess the performance of each algorithm relative to the best algorithm, the ρ-values cannot be used to rank algorithms (i.e., to determine which algorithm is the second best etc.).
In our experimental analysis, the performance profile plots are based on the best solutions (i.e., minimum connectivity/cut) each algorithm found for each instance. Furthermore, we assign the ratio r_{p,i} a dedicated sentinel value if and only if algorithm p computed an infeasible solution for instance i, and a second, larger sentinel value if and only if the algorithm could not compute a solution for instance i within the given time limit. In our performance profile plots, performance ratios corresponding to infeasible solutions are shown on a separate tick on the x-axis, while instances that could not be partitioned within the time limit are shown implicitly by a line that exits the plot below 1. Since the performance ratios are heavily right-skewed, the performance profile plots are divided into three segments with different ranges for τ to reflect various areas of interest. The first segment highlights small ratios, while the second segment contains results for all instances that are up to a fixed factor worse than the best possible ratio. The last segment contains all remaining ratios, i.e., instances for which some algorithms performed considerably worse than the best algorithm, instances for which algorithms produced infeasible solutions, and instances which could not be partitioned within the given time limit.
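The profile itself is straightforward to compute once the best objective value per instance is known. A minimal sketch (plain Python, illustrative names; it omits the sentinel handling for infeasible and timed-out runs described above):

```python
def performance_profile(results, taus):
    """results: dict algorithm -> list of objective values, one per instance
    (all lists in the same instance order). Returns, per algorithm, the
    profile values rho_p(tau) for each tau in taus."""
    algos = list(results)
    n = len(next(iter(results.values())))
    # best (smallest) objective per instance over all algorithms
    best = [min(results[a][i] for a in algos) for i in range(n)]
    profile = {}
    for a in algos:
        ratios = [results[a][i] / best[i] for i in range(n)]
        # fraction of instances within a factor tau of the best
        profile[a] = [sum(r <= tau for r in ratios) / n for tau in taus]
    return profile

res = {"A": [10, 20, 33], "B": [12, 20, 30]}
print(performance_profile(res, taus=[1.0, 1.1]))
```

Here ρ_A(1.0) = ρ_B(1.0) = 2/3, since each algorithm is best on two of the three instances, and ρ_A(1.1) = 1.0 because all of A's ratios lie within a factor of 1.1 of the best.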
In the figures, we compare KaHyPar with PaToH in quality (PaToH-Q) and default mode (PaToH-D), the k-way (hMETIS-K) and the recursive bisection variant (hMETIS-R) of hMETIS 2.0 (p1), Zoltan using algebraic distance-based coarsening (Zoltan-AlgD), Mondriaan v4.2.1 and the recently published HYPE algorithm.
Solution Quality
Running Time
Additional Resources
We provide additional resources for all KaHyPar-related publications:

- kKaHyPar-SEA20 / rKaHyPar-SEA20 (SEA'20): Paper, TR, Slides, TBA
- kKaHyPar / rKaHyPar (Dissertation): Slides, Experimental Results
- KaHyPar-MF / KaHyPar-RMF (SEA'18 / JEA'19): SEA Paper / JEA Paper, TR, Slides, Experimental Results: SEA / JEA
- KaHyPar-E (EvoHGP) (GECCO'18): Paper, TR, Slides, Experimental Results
- KaHyPar-CA (SEA'17): Paper, Slides, Experimental Results
- KaHyPar-K (ALENEX'17): Paper, Slides, Experimental Results
- KaHyPar-R (ALENEX'16): Paper, TR, Slides, Experimental Results
Projects using KaHyPar
 CoTenGra  Hyper-optimized Contraction Trees for Large Tensor Networks
 LSOracle  The Logic Synthesis Oracle
 Plasmo.jl  Platform for Scalable Modeling and Optimization
 GraphDot  A GPU-accelerated Python library for graph similarity computation
Requirements
The Karlsruhe Hypergraph Partitioning Framework requires:

- A 64-bit operating system. Linux, Mac OS X, and Windows are currently supported.
- A modern, C++14-ready compiler such as `g++` version 9 or higher or `clang` version 11.0.3 or higher.
- The cmake build system.
- The Boost.Program_options library and the boost header files.
Building KaHyPar

1. Clone the repository including submodules:

       git clone --depth=1 --recursive git@github.com:SebastianSchlag/kahypar.git

2. Create a build directory:

       mkdir build && cd build

3. Run cmake:

       cmake .. -DCMAKE_BUILD_TYPE=RELEASE

4. Run make:

       make
Testing and Profiling
Tests are automatically executed while the project is built. Additionally, a `test` target is provided. End-to-end integration tests can be started with `make integration_tests`. Profiling can be enabled via the cmake flag `-DENABLE_PROFILE=ON`.
Running KaHyPar
The standalone program can be built via `make KaHyPar`. The binary will be located at `build/kahypar/application/`.
KaHyPar has several configuration parameters. For a list of all possible parameters please run `./KaHyPar --help`.
We use the hMetis format for the input hypergraph file as well as for the partition output file.
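The hMetis format is plain text: the first line holds the number of nets and the number of vertices (plus an optional format flag for weights), followed by one line per net listing its pins, 1-indexed. A minimal parser for the unweighted case (plain Python, our own sketch; it does not handle the weighted format flags):

```python
def parse_hgr(text):
    """Parse an unweighted hMetis hypergraph file into (num_vertices, nets).
    Pins are converted from the 1-indexed file format to 0-indexed ids."""
    lines = [ln for ln in text.splitlines()
             if ln.strip() and not ln.lstrip().startswith("%")]  # skip comments/blanks
    header = lines[0].split()
    num_nets, num_vertices = int(header[0]), int(header[1])
    nets = [[int(pin) - 1 for pin in ln.split()] for ln in lines[1:1 + num_nets]]
    return num_vertices, nets

# hypergraph from the hMetis manual: 4 nets, 7 vertices
hgr = """4 7
1 3
1 2 4 5
4 5 7
3 6 7"""
print(parse_hgr(hgr))
# → (7, [[0, 2], [0, 1, 3, 4], [3, 4, 6], [2, 5, 6]])
```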
Default / Most Recent Presets
We provide two default framework configurations: one for recursive bipartitioning (rKaHyPar) and one for direct k-way partitioning (kKaHyPar).

To start kKaHyPar optimizing the (connectivity − 1) objective run:

    ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/km1_kKaHyPar_sea20.ini

To start kKaHyPar optimizing the cut-net objective run:

    ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m direct -p ../../../config/cut_kKaHyPar_sea20.ini

To start rKaHyPar optimizing the (connectivity − 1) objective run:

    ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m recursive -p ../../../config/km1_rKaHyPar_sea20.ini

To start rKaHyPar optimizing the cut-net objective run:

    ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m recursive -p ../../../config/cut_rKaHyPar_sea20.ini

To start the memetic algorithm kKaHyPar-E optimizing the (connectivity − 1) objective run:

    ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/km1_kKaHyParE_sea20.ini
Old Presets
Additionally, we provide different presets that correspond to the configurations used in the publications at ALENEX'16, ALENEX'17, SEA'17, SEA'18, GECCO'18, as well as in our JEA journal paper and in the dissertation of Sebastian Schlag. These configurations are located in the config/old_reference_configs folder. In order to use these configurations, you have to check out KaHyPar release 1.1.0, since some old code has been removed in the most current release.

To start KaHyPar-MF (using flow-based refinement) optimizing the (connectivity − 1) objective using direct k-way mode run:

    ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_kahypar_mf_jea19.ini

To start KaHyPar-MF (using flow-based refinement) optimizing the cut-net objective using direct k-way mode run:

    ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m direct -p ../../../config/old_reference_configs/cut_kahypar_mf_jea19.ini

To start EvoHGP/KaHyPar-E optimizing the (connectivity − 1) objective using direct k-way mode run:

    ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_direct_kway_gecco18.ini

Note that the configuration km1_direct_kway_gecco18.ini is based on KaHyPar-CA. However, KaHyPar-E also works with flow-based local improvements. In our JEA publication the km1_kahypar_e_mf_jea19.ini configuration was used.

To start KaHyPar-CA (using community-aware coarsening) optimizing the (connectivity − 1) objective using direct k-way mode run:

    ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_direct_kway_sea17.ini

To start KaHyPar in direct k-way mode (KaHyPar-K) optimizing the (connectivity − 1) objective run:

    ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_direct_kway_alenex17.ini

To start KaHyPar in recursive bisection mode (KaHyPar-R) optimizing the cut-net objective run:

    ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m recursive -p ../../../config/old_reference_configs/cut_rb_alenex16.ini

All preset parameters can be overwritten by using the corresponding command line options.
Using the Library Interfaces
The C-Style Interface
We provide a simple C-style interface to use KaHyPar as a library. The library can be built and installed via

    make install.library

and can be used like this:

```cpp
#include <memory>
#include <vector>
#include <iostream>

#include <libkahypar.h>

int main(int argc, char* argv[]) {
  kahypar_context_t* context = kahypar_context_new();
  kahypar_configure_context_from_file(context, "/path/to/config.ini");

  const kahypar_hypernode_id_t num_vertices = 7;
  const kahypar_hyperedge_id_t num_hyperedges = 4;

  std::unique_ptr<kahypar_hyperedge_weight_t[]> hyperedge_weights =
      std::make_unique<kahypar_hyperedge_weight_t[]>(4);

  // force the cut to contain hyperedge 0 and 2
  hyperedge_weights[0] = 1;
  hyperedge_weights[1] = 1000;
  hyperedge_weights[2] = 1;
  hyperedge_weights[3] = 1000;

  std::unique_ptr<size_t[]> hyperedge_indices = std::make_unique<size_t[]>(5);
  hyperedge_indices[0] = 0;
  hyperedge_indices[1] = 2;
  hyperedge_indices[2] = 6;
  hyperedge_indices[3] = 9;
  hyperedge_indices[4] = 12;

  std::unique_ptr<kahypar_hyperedge_id_t[]> hyperedges =
      std::make_unique<kahypar_hyperedge_id_t[]>(12);

  // hypergraph from hMetis manual page 14
  hyperedges[0] = 0;  hyperedges[1] = 2;
  hyperedges[2] = 0;  hyperedges[3] = 1;
  hyperedges[4] = 3;  hyperedges[5] = 4;
  hyperedges[6] = 3;  hyperedges[7] = 4;
  hyperedges[8] = 6;  hyperedges[9] = 2;
  hyperedges[10] = 5; hyperedges[11] = 6;

  const double imbalance = 0.03;
  const kahypar_partition_id_t k = 2;

  kahypar_hyperedge_weight_t objective = 0;

  std::vector<kahypar_partition_id_t> partition(num_vertices, -1);

  kahypar_partition(num_vertices, num_hyperedges,
                    imbalance, k,
                    /*vertex_weights */ nullptr, hyperedge_weights.get(),
                    hyperedge_indices.get(), hyperedges.get(),
                    &objective, context, partition.data());

  for (int i = 0; i != num_vertices; ++i) {
    std::cout << i << ":" << partition[i] << std::endl;
  }

  kahypar_context_free(context);
}
```
To compile the program using `g++` run:

    g++ -std=c++14 -DNDEBUG -O3 -I/usr/local/include -L/usr/local/lib -lkahypar -L/path/to/boost/lib -I/path/to/boost/include -lboost_program_options program.cc -o program
To remove the library from your system use the provided uninstall target:

    make uninstall-kahypar
The Python Interface
To compile the Python interface, do the following:

1. Create a build directory: `mkdir build && cd build`
2. Run cmake: `cmake .. -DCMAKE_BUILD_TYPE=RELEASE`
3. Go to the library folder: `cd python`
4. Compile the library: `make`
5. Copy the library to your site-packages directory: `cp kahypar.so <path-to-site-packages>`

After that you can use the KaHyPar library like this:

```python
import os
import kahypar as kahypar

num_nodes = 7
num_nets = 4

hyperedge_indices = [0, 2, 6, 9, 12]
hyperedges = [0, 2, 0, 1, 3, 4, 3, 4, 6, 2, 5, 6]

node_weights = [1, 2, 3, 4, 5, 6, 7]
edge_weights = [11, 22, 33, 44]

k = 2

hypergraph = kahypar.Hypergraph(num_nodes, num_nets, hyperedge_indices, hyperedges, k, edge_weights, node_weights)

context = kahypar.Context()
context.loadINIconfiguration("<path/to/config>/km1_kKaHyPar_sea20.ini")
context.setK(k)
context.setEpsilon(0.03)

kahypar.partition(hypergraph, context)
```
For more information about the Python library functionality, please see module.cpp.
We also provide a precompiled version as a PyPI package, which can be installed via:

    python3 -m pip install --index-url https://pypi.org/simple/ --no-deps kahypar
The Julia Interface
Thanks to Jordan Jalving (@jalving) KaHyPar now also offers a Julia interface, which can currently be found here: kahypar/KaHyPar.jl.
The corresponding dependency can be installed via:

```julia
using Pkg
Pkg.add(PackageSpec(url="https://github.com/jalving/KaHyPar.jl.git"))
Pkg.test("KaHyPar")
```
After that, you can use KaHyPar to partition your hypergraphs like this:

```julia
using KaHyPar
using SparseArrays

I = [1, 3, 1, 2, 4, 5, 4, 5, 7, 3, 6, 7]
J = [1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4]
V = Int.(ones(length(I)))

A = sparse(I, J, V)

h = KaHyPar.hypergraph(A)

KaHyPar.partition(h, 2, configuration = :edge_cut)
KaHyPar.partition(h, 2, configuration = :connectivity)
KaHyPar.partition(h, 2, configuration = joinpath(@__DIR__, "../src/config/km1_kKaHyPar_sea20.ini"))
```
The Java Interface
Romain Wallon has created a Java interface for KaHyPar. Please refer to the readme for a detailed description on how to build and use the interface.
Bug Reports
We encourage you to report any problems with KaHyPar via the GitHub issue tracking system of the project.
Licensing
KaHyPar is free software provided under the GNU General Public License (GPLv3). For more information see the COPYING file. We distribute this framework freely to foster the use and development of hypergraph partitioning tools. If you use KaHyPar in an academic setting please cite the appropriate papers. If you are interested in a commercial license, please contact me.
// Overall KaHyPar framework
@phdthesis{DBLP:phd/dnb/Schlag20,
author = {Sebastian Schlag},
title = {High-Quality Hypergraph Partitioning},
school = {Karlsruhe Institute of Technology, Germany},
year = {2020}
}
// KaHyParR
@inproceedings{shhmss2016alenex,
author = {Sebastian Schlag and
Vitali Henne and
Tobias Heuer and
Henning Meyerhenke and
Peter Sanders and
Christian Schulz},
title = {k-way Hypergraph Partitioning via \emph{n}-Level Recursive
Bisection},
booktitle = {18th Workshop on Algorithm Engineering and Experiments (ALENEX 2016)},
pages = {53--67},
year = {2016},
}
// KaHyParK
@inproceedings{ahss2017alenex,
author = {Yaroslav Akhremtsev and
Tobias Heuer and
Peter Sanders and
Sebastian Schlag},
title = {Engineering a direct \emph{k}-way Hypergraph Partitioning Algorithm},
booktitle = {19th Workshop on Algorithm Engineering and Experiments (ALENEX 2017)},
pages = {28--42},
year = {2017},
}
// KaHyParCA
@inproceedings{hs2017sea,
author = {Tobias Heuer and
Sebastian Schlag},
title = {Improving Coarsening Schemes for Hypergraph Partitioning by Exploiting Community Structure},
booktitle = {16th International Symposium on Experimental Algorithms (SEA 2017)},
pages = {21:1--21:19},
year = {2017},
}
// KaHyParMF
@inproceedings{heuer_et_al:LIPIcs:2018:8936,
author ={Tobias Heuer and Peter Sanders and Sebastian Schlag},
title ={{Network Flow-Based Refinement for Multilevel Hypergraph Partitioning}},
booktitle ={17th International Symposium on Experimental Algorithms (SEA 2018)},
pages ={1:1--1:19},
year ={2018}
}
@article{KaHyParMFJEA,
author = {Heuer, T. and Sanders, P. and Schlag, S.},
title = {Network Flow-Based Refinement for Multilevel Hypergraph Partitioning},
journal = {ACM Journal of Experimental Algorithmics (JEA)},
volume = {24},
number = {1},
month = {09},
year = {2019},
pages = {2.3:1--2.3:36},
publisher = {ACM}
}
// KaHyParE (EvoHGP)
@inproceedings{Andre:2018:MMH:3205455.3205475,
author = {Robin Andre and Sebastian Schlag and Christian Schulz},
title = {Memetic Multilevel Hypergraph Partitioning},
booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference},
series = {GECCO '18},
year = {2018},
pages = {347--354},
numpages = {8}
}
// KaHyParSEA20 (KaHyParHFC)
@InProceedings{gottesbren_et_al:LIPIcs:2020:12085,
author = {Lars Gottesb{\"u}ren and Michael Hamann and Sebastian Schlag and Dorothea Wagner},
title = {{Advanced Flow-Based Multilevel Hypergraph Partitioning}},
booktitle = {18th International Symposium on Experimental Algorithms (SEA)},
pages = {11:1--11:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
year = {2020}
}
Contributing
If you are interested in contributing to the KaHyPar framework feel free to contact me or create an issue on the issue tracking system.
Built Distributions
Hashes for kahypar-1.1.6-cp39-cp39-manylinux2014_x86_64.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 5c8b2e6e7e671d8bdbf65eeb701147557b67af2c11be3cdf16bcb2374ae28144 |
| MD5 | 8b6abb7a3855036c7a90c08a358c35e2 |
| BLAKE2-256 | 6778421ce948f046b6aa09549948a15b39827ee215e9f09604da025b6f2acaf7 |

Hashes for kahypar-1.1.6-cp38-cp38-manylinux2014_x86_64.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 95be0b40ab34fc5215ed3006b4a891e4e3163c3695af5d274af40c4c0b62b924 |
| MD5 | e0892dfccddc4dc86880baf8ecad4523 |
| BLAKE2-256 | df53dccd22e1f5bfc259c86d73cc9894e8f1d9f84efc262b80cb185a71f9a758 |

Hashes for kahypar-1.1.6-cp37-cp37m-manylinux2014_x86_64.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | c2394dc6a17b5a69ae71bbabe3b3529ec1284092d02119466889b121585f3fca |
| MD5 | 7cf4a2f18eb61d66da57ef7605218cca |
| BLAKE2-256 | d2c64b6ff807f2136079e9bc7289a793544541de4a04b402c678524647d6b0cb |

Hashes for kahypar-1.1.6-cp36-cp36m-manylinux2014_x86_64.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 4506aa7406f384c72e5b53689afb51f4af2426b14b5000fcadd66d24e1047345 |
| MD5 | 4458e973064dc7991afe1fc5e6afdb30 |
| BLAKE2-256 | 1622340f12f4e82847c21cf0fc57f8a72b24853564b78ff8bb657883d52ac93b |

Hashes for kahypar-1.1.6-cp35-cp35m-manylinux2014_x86_64.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | ad4b11616b313d88d3d259dfa27e2e601fb20efecd74dc6388300341d3f8909d |
| MD5 | cf9321b9c0885089a5411e9acd9c6337 |
| BLAKE2-256 | 77b29bd6c29a70edcbc50c0a5e663f9cb7b2dfc445ca349a9ab2176f05131844 |