Project description
Optimizationalgorithms
Optimizationalgorithms is a Python library that contains useful algorithms for several complex problems such as partitioning, floor planning, and scheduling. This library will provide many implementations of many optimization algorithms. It is organized in a problem-wise structure: for each problem (graph partitioning, scheduling, etc.) there are many algorithms to solve it, and for each algorithm there are many possible implementations.
Q1: what is the goal of this library?
The goal is to provide a Python library:
 for optimization algorithms
 with respect to consistency and simplicity
 from the viewpoint of researchers, students, and industries
 in the context of VLSI circuit design, parallel computing, cloud computing, and artificial intelligence.
Q2: what is the motivation for building this library?
During my research on the partitioning problem, I found a lack of libraries where researchers can easily compare their algorithms with others. Each implementation has its own input and output style, which means extra effort is spent understanding the code and adapting the input. We propose this library so that optimization algorithms can be packaged with consistency in mind.
Q3: who are the users?
 Students
 Researchers
 Industries
Q4: what are the features of this library?
 Consistency: we strive to ensure consistency across the whole project.
 Simplicity: we strive to keep the library simple to use.
 Source for learning and teaching: we strive to make this library an essential source for learning and teaching optimization algorithms.
 Useful resource: we strive to make this library an essential resource for researchers, students, and industries.
Q5: what do we mean by ensuring consistency?
Consistency is the main feature of this library. We strive to be consistent in project structure, code, documentation, videos, and datasets. The following are the basic consistency points of this library:

Consistency in the project structure
 Consistency in guidelines, styles, and templates.
 Consistency in creating new modules and packages.
 Consistency in file names, folder names, folder hierarchies, configuration files, and file arrangement and distribution.
 Consistency in dates and times of updating and reviewing.
 Consistency in working teams and reviews.
 Consistency in evaluation tests and validation.
 Consistency in reviews methodology.

Consistency in the code
 Consistency within one module or function
 Consistency in return statements.
 Consistency in input, and output.
 Consistency in print and summary.
 Consistency in signatures of objects, methods, or data attributes.
 Consistency in code style. Coding style should be consistent and follow established best-practice guides (spacing, comments, etc.).
 Consistency in the data structure.

Consistency in the documentation
 Consistency in describing problems, algorithms, and implementations.
 Consistency in describing classes, methods, attributes.
 Consistency in titles.
 Consistency in adding new sections and chapters.
 Consistency in naming conventions. The naming conventions should follow a certain naming standard.
 Consistency in writing many special names such as the names of operating systems, programming languages, standards bodies, etc.
 Consistency in writing abbreviations.
 Consistency in references.
 Consistency in code documentation.
 Consistency in describing the datasets.

Consistency in the videos
 Video length should follow a common guideline to ensure consistency.
 The methodology for describing problems, algorithms, and implementations should be consistent.
 All videos should be hosted in one place such as one YouTube channel.
 The videos should be categorized properly with consistency in mind.
 Consistency in ordering and naming
 Consistency with the documentation

Consistency in the datasets
 The datasets should follow a single format and structure.
 The test datasets should be consistent, to validate and ensure the quality of the implemented algorithms.
Q6: what are the metrics that will be used to measure success?
 Number of installations
 Number of problems that are considered.
 Number of algorithms that are considered.
 Number of implementations
 Number of Awards
 Number of citations
 Number of visitors/readers
 Number of developers/contributors
 Number of reviewers
 Number of versions
 Number of issues
 Number of comments
 Number of research groups that use or support OAL
 Number of industries that use or support OAL
 Number of students
Q7: what is the structure of this library?
OptimizationAlgorithms
Partitioning Problem
Kernighan-Lin (KL)
Simulated Annealing (SA)
Hybridization SA+KL
Others not yet implemented
Floor Planning Problem
Algorithms not yet implemented
Scheduling Problem
Algorithms not yet implemented
Routing Problem
Algorithms not yet implemented
Other problems not yet defined
Q8: what is our future plan?
| Date | Version | Main Deliverable Features |
| --- | --- | --- |
| 26th September, 2021 | 0.1.0 | (1) Formulating at least three problems. (2) Implementing at least ten optimization algorithms. (3) Establishing an official website for this project. (4) Improving consistency. |
| 26th September, 2022 | 0.2.0 | (1) Formulating at least three new problems. (2) Implementing at least ten optimization algorithms. (3) Publishing at least one paper. (4) By this time, the author expects to receive at least three citations. (5) By this time, the author expects that the library will be a good resource for industries. (6) By this time, the author expects that the library tutorial will be a recommended source for learning optimization algorithms. (7) By this time, the author expects that the library will receive a number of contributions from researchers. |
| 26th September, 2023 | 0.3.0 | (1) Formulating at least three new problems. (2) Implementing at least ten optimization algorithms. (3) Publishing at least one new paper. (4) By this time, the author expects that the library will be cited by at least thirty new citations. (5) By this time, the author expects that the number of industries using the library will have increased by at least half. (6) By this time, the author expects that the library tutorial will be a recommended source for learning optimization algorithms. (7) By this time, the author expects that the library will receive a number of contributions from researchers. |
| 26th September, 2024 | 0.4.0 | (1) By this time, the author expects that the library will be an essential library for many industries. (2) By this time, the author expects that the library will be an essential library for many researchers. (3) By this time, the author expects that the library will be a well-known library in the academic environment. |
Q9: what is the development methodology?
The author adopts the GitHub workflow for contributing to this library. The proposed methodology has two steps. The first step is formulating the problem: this is a problem-wise library, so each problem should be formulated into inputs and outputs with consistency in mind. The second step is contributing to that problem by implementing optimization algorithms for it. The implementation can be done in parallel by many developers, as shown in the following Figure.
Q10: how can volunteers, students, developers, and researchers contribute?
Contributing is easy. The project is hosted on GitHub, so any contributor can create a new branch and add to or update the project, then follow the usual GitHub approach of pull requests and issues. After consistency is ensured, the updates will be merged into the main project for the next version.
Q11: what does the current version of this library provide?
In the current version, I focused on one problem, partitioning. For this problem, we implemented three algorithms, as described in the following Table.
| Algorithm | Description |
| --- | --- |
| kernighan_lin_0101(G, initial_partitions, attrs) | The Kernighan-Lin algorithm is a well-known bipartitioning heuristic. This implementation is optimized so that many overheads are eliminated; its time complexity is O(n^2). The algorithm can take a single required parameter, G (an undirected, unweighted, or edge-weighted graph), and produces balanced partitions. The initial_partitions parameter is optional. |
| simulated_annealing_0102(G, initial_partitions, attrs) | Simulated annealing is a well-known metaheuristic. This implementation is developed for the graph bipartitioning problem. The algorithm can take three parameters: G (an undirected, unweighted, or edge-weighted graph) and two algorithm-specific attributes, cooling_rate (default 0.0001) and temperature (default 10000). The initial_partitions parameter is optional. This algorithm produces balanced partitions. |
| hybridization_SA_KL_0103(G, initial_partitions, attrs) | This hybridization technique combines the advantages of the two algorithms above (simulated annealing and Kernighan-Lin). The proposed hybridization can optimize both running time and cost-cut quality. The algorithm can take three parameters: G (an undirected, unweighted, or edge-weighted graph) and two algorithm-specific attributes, cooling_rate (default 0.001) and temperature (default 100). The initial_partitions parameter is optional. This algorithm produces balanced partitions. |
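To make the simulated annealing approach in the table concrete, here is a minimal, self-contained sketch of simulated annealing for balanced graph bipartitioning in plain Python. This is an illustration of the general technique only, not the library's implementation: the function names, the pair-swap move, and the stopping rule (cooling until the temperature drops below 1) are assumptions.

```python
import math
import random

def cut_cost(edges, side):
    # Number of edges whose endpoints lie in different partitions.
    return sum(1 for u, v in edges if side[u] != side[v])

def sa_bipartition(edges, nodes, temperature=10000.0, cooling_rate=0.0001, seed=0):
    """Balanced bipartitioning by simulated annealing (illustrative sketch)."""
    rng = random.Random(seed)
    nodes = list(nodes)
    half = len(nodes) // 2
    # Initial balanced assignment: first half on side 0, second half on side 1.
    side = {n: (0 if i < half else 1) for i, n in enumerate(nodes)}
    cost = cut_cost(edges, side)
    best, best_cost = dict(side), cost
    t = temperature
    while t > 1.0:
        # Move: swap one node from each side, which preserves balance.
        a = rng.choice([n for n in nodes if side[n] == 0])
        b = rng.choice([n for n in nodes if side[n] == 1])
        side[a], side[b] = 1, 0
        new_cost = cut_cost(edges, side)
        delta = new_cost - cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cost = new_cost
            if cost < best_cost:
                best_cost, best = cost, dict(side)
        else:
            side[a], side[b] = 0, 1  # reject: undo the swap
        t *= 1.0 - cooling_rate
    return best, best_cost
```

With the defaults shown in the table (temperature 10000, cooling rate 0.0001), this loop would run roughly ln(10000)/0.0001, about 92,000 steps; smaller temperatures and larger cooling rates trade quality for speed.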
| Returns | Description |
| --- | --- |
| best cost-cut | The total weight of the cross-partition edges is called the cost-cut. The goal is to minimize the number of cross-partition edges. This output shows the minimum cost-cut achieved by the current run of the given algorithm. |
| best partitions | The partitions that achieve the minimum cost-cut (best cost-cut), returned as an iterable object containing the node labels of each partition. |
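The cost-cut described above can be computed directly from an edge list and a pair of partitions. A minimal, library-independent sketch (the function name and signature here are illustrative, not the library's API):

```python
def cost_cut(edges, partitions, weights=None):
    """Total weight of cross-partition edges; unweighted edges count as 1."""
    # Map each node to the index of the partition it belongs to.
    side = {node: label for label, part in enumerate(partitions) for node in part}
    weights = weights or {}
    return sum(weights.get((u, v), 1) for u, v in edges if side[u] != side[v])
```

For the 4x4 example graph used later in this README, the even/odd split cuts 12 edges, while splitting nodes 1-8 from nodes 9-16 cuts only 4, which is what the partitioning algorithms try to minimize.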
| Common functions | Description |
| --- | --- |
| get_initial_partitions() | Returns an iterable object containing the initial partitions. |
| get_best_partitions() | Returns an iterable object containing the best partitions. |
| get_initial_cost_cut() | Returns the cost-cut of the initial partitions. |
| get_best_cost_cut() | Returns the cost-cut of the best partitions. |
Installation
Use the package manager pip to install optimizationalgorithms.
pip install optimizationalgorithms
Usage
In this library, we use networkx to create a graph; the graph is used as input for many problems, such as partitioning. Therefore, the first step is to create a graph. One possible approach to creating an undirected graph is as follows:
import networkx as nx

def create_graph():
    edgelist = [(5, 6), (5, 9), (6, 7), (6, 10), (7, 8), (7, 11),
                (8, 12), (1, 2), (1, 5), (2, 3), (2, 6), (3, 4),
                (3, 7), (4, 8), (9, 10), (9, 13), (10, 11),
                (10, 14), (11, 12), (11, 15), (13, 14), (14, 15),
                (15, 16), (12, 16)]
    G = nx.Graph(edgelist)
    return G
To show how the optimizationalgorithms library can be used for the partitioning problem, we need a dataset. In this case, we will use a two-dimensional grid graph. In the grid graph, each node is connected to its four nearest neighbors. To generate the grid graph, we implement the following function:
import networkx as nx

def generate_grid_2d_graph_dataset(grid_size):
    cols = [(i * grid_size + j, i * grid_size + j + 1)
            for i in range(grid_size) for j in range(grid_size - 1)]
    rows = [(j * grid_size + i, j * grid_size + i + grid_size)
            for i in range(grid_size) for j in range(grid_size - 1)]
    edge_list = cols + rows
    return nx.Graph(edge_list)
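As a quick sanity check on this construction (plain Python, no networkx required): a grid of size grid_size x grid_size has grid_size*(grid_size-1) horizontal edges and the same number of vertical edges. The helper below mirrors the edge-list construction and is illustrative, not part of the library.

```python
def grid_edge_list(grid_size):
    # Same edge-list construction as generate_grid_2d_graph_dataset, without networkx.
    cols = [(i * grid_size + j, i * grid_size + j + 1)
            for i in range(grid_size) for j in range(grid_size - 1)]
    rows = [(j * grid_size + i, j * grid_size + i + grid_size)
            for i in range(grid_size) for j in range(grid_size - 1)]
    return cols + rows

assert len(grid_edge_list(4)) == 2 * 4 * (4 - 1)     # 24 edges in a 4x4 grid
assert len(grid_edge_list(10)) == 2 * 10 * (10 - 1)  # 180 edges in a 10x10 grid
```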
The following code shows how a 4x4 grid graph can be generated.
G=generate_grid_2d_graph_dataset(4)
nx.draw(G, with_labels=True)
**OUTPUT**
The following code shows how a 10x10 grid graph can be generated.
G=generate_grid_2d_graph_dataset(10)
nx.draw(G, with_labels=True)
**OUTPUT**
In the first step, we import the optimizationalgorithms library for the partitioning problem; three algorithms for bipartitioning will be imported. The following code shows how these algorithms can be imported.
from optimization_algorithms.for_partitioning import kernighan_lin_0101, simulated_annealing_0102, hybridization_SA_KL_0103
The following code shows how a 100x100 grid graph can be partitioned using Kernighan-Lin (KL) with the default parameters.
G=generate_grid_2d_graph_dataset(100)
KL=kernighan_lin_0101(G)
**OUTPUT**
Result Summary
initial costcut : 9841
Best costcut : 130
Running Time in Second : 76.8
The best partitions can be obtained by the following function:
partitions=KL.get_best_partitions()
The following code shows how a 100x100 grid graph can be partitioned using the simulated annealing algorithm (SA) with the default parameters.
SA=simulated_annealing_0102(G)
**OUTPUT**
Result Summary
initial costcut : 9837
Best costcut : 4454
Running Time in Second : 3.1
The best partitions can be obtained by the following function:
partitions=SA.get_best_partitions()
The following code shows how a 100x100 grid graph can be partitioned using the hybridization SA+KL algorithm with the default parameters.
SA_KL = hybridization_SA_KL_0103(G)
**OUTPUT**
Result Summary
initial costcut : 9902
Best costcut : 119
Running Time in Second : 35.5
The best partitions can be obtained by the following function:
partitions= SA_KL.get_best_partitions()
To create initial partitions and pass them as a parameter, in this example we use an even partition and an odd partition: nodes with even labels go in one partition, and nodes with odd labels go in the other. The following function generates these initial partitions.
def create_initial_partitions(G):
    even = [i for i in G.nodes if i % 2 == 0]
    odd = [i for i in G.nodes if i % 2 == 1]
    return [even, odd]
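A quick sanity check of this split in plain Python (the helper below takes a plain iterable of integer labels instead of a graph, so it is illustrative, not the library's API): for the 100x100 grid, whose node labels are 0 through 9999, the even/odd split is exactly balanced.

```python
def split_even_odd(nodes):
    # nodes: any iterable of integer labels
    even = [i for i in nodes if i % 2 == 0]
    odd = [i for i in nodes if i % 2 == 1]
    return [even, odd]

even, odd = split_even_odd(range(100 * 100))
assert len(even) == len(odd) == 5000  # balanced initial bipartition
```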
The following code prepares the initial_partitions parameter.
initial_partitions=create_initial_partitions(G)
Now, we can pass this parameter to the three algorithms.
KL=kernighan_lin_0101(G, initial_partitions)
**OUTPUT**
Result Summary
initial costcut : 9900
Best costcut : 111
Running Time in Second : 56.4
SA=simulated_annealing_0102(G, initial_partitions)
**OUTPUT**
Result Summary
initial costcut : 9900
Best costcut : 111
Running Time in Second : 56.4
SA_KL = hybridization_SA_KL_0103(G, initial_partitions)
**OUTPUT**
Result Summary
initial costcut : 9900
Best costcut : 111
Running Time in Second : 56.4
The following code shows how to experiment with algorithm-specific parameters, such as the cooling_rate and temperature parameters of the simulated annealing and hybridization SA+KL algorithms.
Attrs={'cooling_rate':0.001, 'temperature':1000}
SA=simulated_annealing_0102(G, initial_partitions, Attrs)
**OUTPUT**
Result Summary
initial costcut : 9900
Best costcut : 111
Running Time in Second : 56.4
SA_KL = hybridization_SA_KL_0103(G, initial_partitions, Attrs)
**OUTPUT**
Result Summary
initial costcut : 9900
Best costcut : 111
Running Time in Second : 56.4
There are some tips on using the parameters. Simulated annealing and the hybridization are different algorithms, so it is better to examine each one with different values of the cooling rate and temperature parameters; the hybridization algorithm requires only small values. For the Kernighan-Lin algorithm, there are some functions that allow the user to obtain useful information, shown below.
KL.get_number_of_iterations()
**OUTPUT**
19
We can see that KL performed 19 iterations on the given dataset.
KL.get_cost_per_iterations()
**OUTPUT**
[9900, 3154, 2900, 2836, 2436, 2100, 1900, 1700, 1500, 1300, 1100, 1002, 700, 602, 237, 154, 135, 111, 111]
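In the standard Kernighan-Lin scheme, a pass is accepted only when it improves the cut, so the cost per iteration should never increase. A quick check on the trace above (the list is copied verbatim from the output):

```python
trace = [9900, 3154, 2900, 2836, 2436, 2100, 1900, 1700, 1500, 1300,
         1100, 1002, 700, 602, 237, 154, 135, 111, 111]
assert len(trace) == 19                               # matches get_number_of_iterations()
assert all(a >= b for a, b in zip(trace, trace[1:]))  # monotone non-increasing
```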