# NASBench-PyTorch
NASBench-PyTorch is a PyTorch implementation of the NAS-Bench-101 search space, including the training of the networks**. The original implementation is written in TensorFlow, and this project contains some files from the original repository (in the directory `nasbench_pytorch/model/`).
**Important:** If you want to reproduce the original results, please refer to the Reproducibility section.
## Overview
A PyTorch implementation of training for the NAS-Bench-101 dataset: *NAS-Bench-101: Towards Reproducible Neural Architecture Search*. The dataset contains 423,624 unique neural networks, exhaustively generated and evaluated from a fixed graph-based search space.
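To make the search-space encoding concrete, here is a minimal sketch of building a network from a hand-written cell specification. The adjacency matrix and operation labels below are illustrative choices, not a benchmark architecture; the `(adjacency, ops)` tuple format is the same one used in the hash-query example later in this README.

```python
# A minimal sketch (illustrative cell, not a benchmark architecture):
# a NAS-Bench-101 cell is an upper-triangular adjacency matrix over
# up to 7 vertices plus one operation label per vertex.
import numpy as np
from nasbench_pytorch.model import Network

# Vertex 0 is the input, vertex 6 the output; edges go row -> column.
adjacency = np.array([
    [0, 1, 0, 0, 0, 0, 1],  # input -> vertex 1, input -> output (skip)
    [0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0],
])
ops = ['input', 'conv3x3-bn-relu', 'conv1x1-bn-relu', 'maxpool3x3',
       'conv3x3-bn-relu', 'conv3x3-bn-relu', 'output']

net = Network((adjacency, ops))  # same (adjacency, ops) format as below
```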
## Usage
You need to have PyTorch installed. You can install the package by running `pip install nasbench_pytorch`. Alternatively, you can install it from source:
- Clone this repo:

  ```
  git clone https://github.com/romulus0914/NASBench-PyTorch
  cd NASBench-PyTorch
  ```

- Install the project:

  ```
  pip install -e .
  ```
The file `main.py` contains an example training of a network. To see the different parameters, run:

```
python main.py --help
```
### Train a network by hash
To train a network whose architecture is queried from NAS-Bench-101 using its unique hash, install the original [nasbench](https://github.com/google-research/nasbench) repository. Follow the instructions in its README; note that you need to install TensorFlow. If you need TensorFlow 2.x, install this fork of the repository instead.
Then, you can get the PyTorch architecture of a network like this:
```python
from nasbench_pytorch.model import Network as NBNetwork
from nasbench import api


nasbench_path = '$path_to_downloaded_nasbench'
nb = api.NASBench(nasbench_path)

net_hash = '$some_hash'  # you can get hashes using nasbench.hash_iterator()
m = nb.get_metrics_from_hash(net_hash)
ops = m[0]['module_operations']
adjacency = m[0]['module_adjacency']

net = NBNetwork((adjacency, ops))
```
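The comment above mentions `hash_iterator()`; here is a short hedged sketch of listing a few hashes from the loaded API, reusing `nb` from the snippet above:

```python
# Hedged sketch: print the first few architecture hashes from the API.
for i, h in enumerate(nb.hash_iterator()):
    print(h)
    if i >= 4:  # stop after five hashes
        break
```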
Then, you can train the network just like the example network in `main.py`.
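For a self-contained illustration, here is a minimal training sketch using plain PyTorch and torchvision's CIFAR-10 instead of `main.py`; the optimizer, learning rate, batch size, and epoch count are illustrative assumptions, not the benchmark's exact hyperparameters:

```python
# Minimal hedged training loop for the `net` built above.
# Hyperparameters here are illustrative, not the NAS-Bench-101 settings.
import torch
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()
train_set = torchvision.datasets.CIFAR10(root='./data', train=True,
                                         download=True, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=256, shuffle=True)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
net = net.to(device)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(net.parameters(), lr=0.025, momentum=0.9)

net.train()
for epoch in range(10):  # illustrative epoch count
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = net(images)  # assuming the model returns class logits
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
```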
## Architecture

*Example architecture (image from the original repository)*
## Reproducibility
The code should closely match the TensorFlow version (including the hyperparameters), but there are some differences:

- The RMSProp implementation differs between TensorFlow and PyTorch
  - For more information, refer to here and here
  - Optionally, you can install pytorch-image-models, where a TensorFlow-like RMSProp is implemented: `pip install timm`
  - Then, pass `--optimizer rmsprop_tf` to `main.py` to use it (see the sketch after this list)
- You can turn gradient clipping off by setting `--grad_clip_off True`
- The original training was done on TPUs; this code supports only GPU and CPU training
- Input data augmentation methods are the same, but due to randomness they are not applied in the same manner
  - Cause: batches and images cannot be shuffled exactly as in the original TPU training, and the augmentation seed also differs
- Results may still differ due to other TensorFlow/PyTorch implementation differences

Refer to this issue for more information and for a comparison with the API results.
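If you go the `timm` route, the TensorFlow-style optimizer can also be constructed directly in code. A minimal sketch, assuming `timm` is installed and using illustrative hyperparameter values (`net` as in the examples above):

```python
# Hedged sketch: timm's RMSpropTF mimics TensorFlow's RMSProp semantics.
# The values below are illustrative assumptions, not the benchmark settings.
from timm.optim import RMSpropTF

optimizer = RMSpropTF(net.parameters(), lr=0.2, alpha=0.9, eps=1.0, momentum=0.9)
```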
## Disclaimer
Modified from *NASBench: A Neural Architecture Search Dataset and Benchmark*. The files `graph_util.py` and `model_spec.py` are directly copied from the original repository. The original license can be found here.
**Please note that this repo is only used to train one possible architecture in the search space, not to generate all possible graphs and train them.