Ramsey Number Explorer
Ramsey Theory RL
About
Ramsey Theory RL is a project that uses simple reinforcement learning and a basic graph invariant to search pseudo-intelligently for counterexamples to unknown Ramsey number bounds. The main aims of the project are either to improve the bounds or to help find more non-isomorphic extremal graphs for known numbers, to aid future efforts.
Our algorithm is an informed best-first search. At each iteration, every neighbor of the current graph (each graph one edge-flip away) is represented numerically by its invariant: the counts of each of the 11 possible 4-vertex graphs (up to isomorphism) appearing as induced subgraphs. The neighbors whose invariants have not yet been expanded are then passed through a heuristic, which determines which graph to expand next. Past literature has used the count of paths on 4 vertices as the heuristic; we instead use a DNN with one hidden layer that we iteratively train on previously seen graphs, to encourage the search to stray away from known areas.
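The invariant and the one-edge-flip neighborhood can be sketched as follows (an illustrative reimplementation, not the package's own code; it assumes networkx, which the package already depends on):

```python
# Illustrative sketch: the 11-dimensional invariant counts, for every
# 4-vertex subset, which of the 11 isomorphism classes of 4-vertex
# graphs the induced subgraph belongs to.
from itertools import combinations
import networkx as nx

# Representatives of all 11 graphs on 4 vertices, from the graph atlas.
ATLAS = [g for g in nx.graph_atlas_g() if g.number_of_nodes() == 4]
assert len(ATLAS) == 11

def invariant(G):
    """Counts of each 4-vertex isomorphism class among induced subgraphs of G."""
    counts = [0] * 11
    for quad in combinations(G.nodes, 4):
        H = G.subgraph(quad)
        for i, rep in enumerate(ATLAS):
            if nx.is_isomorphic(H, rep):
                counts[i] += 1
                break
    return tuple(counts)

def neighbors(G):
    """All graphs exactly one edge-flip away from G."""
    for u, v in combinations(G.nodes, 2):
        H = G.copy()
        if H.has_edge(u, v):
            H.remove_edge(u, v)
        else:
            H.add_edge(u, v)
        yield H
```

For small $n$ this brute-force count is fine; a practical implementation would use a faster subgraph-counting routine, since this loop is exactly the $O(n^{\min(4,s,t)})$ factor in the per-step runtime below.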
The algorithm can pretrain on known graphs, and can start from a random graph, from known counterexamples for a smaller $n$, or from known counterexamples for the same $n$. Logging is done through neptune.ai.
Our algorithm has a runtime of $O(n^2 \cdot n^{\min(4, s, t)})$ per step. As such, values like $R(3,10)$ are less practical to explore, so our current focus is on $R(4,6)$ and $R(5,5)$.
Getting Started

1. Sign up with Neptune AI.
2. Create a project called RamseyRL with a model called RAMHEUR.
3. Get your Neptune API token and username.
4. Run the following in a Colab notebook.
Installing Packages

```shell
pip install RamseyRLReproducibility
pip install numpy==1.23 keras==2.12.0 matplotlib==3.7.1 neptune==1.3.1 networkx==3.1 pandas==1.5.3 python_igraph==0.10.5 scikit_learn==1.2.2 seaborn==0.12.2 tensorflow==2.12.0 tqdm==4.65.0 neptune_tensorflow_keras==2.1.1 --quiet
```
Setting Environment Variables

```python
import os

# Change these to your real token and username
os.environ['NEPTUNE_API_TOKEN'] = 's0me!l0nGn3pTunEt0k3N='
os.environ['NEPTUNE_NAME'] = 'yourname'
```
Setting Parameters and Path

```python
from mytest import NeptuneRunner
import tensorflow as tf

def project_fetcher():
    return f"{os.environ.get('NEPTUNE_NAME')}/RamseyRL"

runner = NeptuneRunner(n=36, s=4, t=6, project=project_fetcher())
new_params = runner.PARAMS
changed_vals = {
    'iter_batch': 20,
    'iter_batches': 50,
    'starting_graph': "FROM_CURRENT",
    'starting_graph_path': '/data/found_counters/r4_5_24_isograph.g6',
    'starting_edges': True,
    'load_model': False,
    'profiler': True,
    'alpha': 0,
    'model': None,
    'training_epochs': 5,
    'epochs': 1,
    'batch_size': 32,
    'optimizer': 'adam',
}
new_params.update(changed_vals)
runner.update_params(new_params)
```
Running

```python
runner.run()
```
(Optional) Running for all Prior Graphs

```python
# Get the max index of the starting graph.
# Only to be used when PARAMS['starting_graph'] is in ["FROM_PRIOR", "FROM_CURRENT"].
import sys
import networkx as nx

def get_max_starting_index(runner):
    counters = nx.read_graph6(sys.path[1] + runner.PARAMS['starting_graph_path'])
    # read_graph6 returns a single graph for one-graph files, a list otherwise
    counters = [counters] if not isinstance(counters, list) else counters
    return len(counters)

for i in range(get_max_starting_index(runner)):
    runner.PARAMS['starting_graph_index'] = i
    runner.run()
```
Full Parameter Description
- heuristic_type: Choose from RANDOM, 4PATH, DNN, SCALED_DNN
- iter_batch: Number of steps to take before updating model data/weights
- iter_batches: None if there is no stopping value, else the number of iter_batches to run
- starting_graph: Choose from RANDOM, FROM_PRIOR, FROM_CURRENT, EMPTY
- starting_graph_path: Path to the starting graph
- starting_graph_index: Index of the starting graph within the starting graph list
- starting_edges: Whether to add new edges when starting FROM_PRIOR
- load_model: Whether or not to load a past version of the model
- profiler: Enable the profiler
- model: When using the SCALED_DNN or DNN heuristic, the default model (a DNN with hidden layers of sizes 32, 16, and 1, compiled with optimizer=params['optimizer'], loss=params['loss'], metrics=['accuracy']) can be overridden by any TensorFlow model on which model.compile has already been called
- training_epochs: If pretraining on data, the number of epochs to pretrain for
- pretrain: Whether or not pretraining occurs
- pretrain_data: A list of preprocessed CSVs to be trained on
- epochs: Number of epochs used to update the model after each iteration of training
- batch_size: Batch size for training
- loss_info: Information about the loss used; improves logging readability
- alpha: Decay rate for DNN training labels (i.e. if a counterexample was found k steps ago, a point that is not a counterexample in this iteration has value $\alpha^k$)
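For illustration, the $\alpha$-discounted value described for the alpha parameter could be computed as follows (a hypothetical helper, not part of the package's API):

```python
# Hypothetical sketch of the alpha-discounted training target described
# above: a graph that is not a counterexample, seen k steps after the
# most recent counterexample was found, is labeled alpha**k.
def target_value(is_counter: bool, steps_since_counter: int, alpha: float) -> float:
    if is_counter:
        return 1.0                        # counterexamples get full value
    return alpha ** steps_since_counter   # value decays with distance
```

Note that with alpha set to 0, as in the parameter example above, only the counterexamples themselves receive a nonzero label.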
Hashes for RamseyRLReproducibility-0.1.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | f49a0bb75416ac9c7970093a9f6a0b323a4946a265984f52865f629fbb9f01ee |
| MD5 | 65895916394b0d76350e1b7f473d05ae |
| BLAKE2b-256 | 290042d0936b0eb4b45817a35df4a3b7f70cdd7f065ef69cda67150ed08dbe61 |
Hashes for RamseyRLReproducibility-0.1-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 9e1b28c5c04a3b29dcbc0002fbc65551ec1c47670adfc2195f06d12279a04ea1 |
| MD5 | ef1abe02b27666b347150dd14dfd9a33 |
| BLAKE2b-256 | fadfec66273bd8114d60971e6010a3907cfe64083893f42b13cf5c7b5784617c |