Chemical and Pharmaceutical Autoencoder - Providing reproducible modelling for quantum chemistry
Project description
aiarc
Why
I prototyped novel and effective algorithms for chemistry in 2021, and it took me two years to get them publication-ready. Of course, I was doing other things in the meantime (thesis, work, exams, etc.), but the development time was still far too long.
What
Preliminaries and Learnings
- I need my own framework
- I learned from aiarc that I need more flexibility in the major parts
- I need a directory structure and a saving structure to distribute the files properly
- There could be an initializer
- Methods should be easier to import through the `__init__` files
- Testing should be more rigorous (in each new project)
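As an illustration of the initializer and `__init__` points above, a package's top-level `__init__.py` can re-export the main entry points so users import from one place. This is only a sketch of the idea; the module and class names below are invented and not part of aiarc:

```python
# aiarc/__init__.py (hypothetical layout, illustrating the re-export idea)
# Users can then write `from aiarc import Trainer, load_config`
# instead of reaching into submodules.
from aiarc.config import load_config
from aiarc.training import Trainer
from aiarc.analysis import plot_scores

__all__ = ["load_config", "Trainer", "plot_scores"]
```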
The package is intended to make investigating algorithms faster. The focus is on building algorithms and leaving out the rest, which means an easy interface for investigation. PyTorch Geometric gives a good guideline on how to do this. The scope here is broader, though: it is also about producing plots to score the overall algorithm.
aiarc aims to improve rapid prototyping of AI algorithms on graphs (time series, molecules, networks, etc.) for building novel simulation applications fast and at scale.
The package shall provide production-grade algorithms in an automated way, but with user interaction.
Lessons learned from other projects
The lessons I learned from other projects were:
- Never trust proprietary software that is used for politics (e.g. Microsoft, Nvidia drivers, Canonical, etc.)
- Avoid trusting open-source software that is aimed at upselling a paid software distribution (e.g. Canonical, or security software)
- Write high-quality code instead of coding fast; it is faster in the end
- Code consistently instead of a lot at once
How it should work
Work on a project basis: provide a framework to start a project. A project needs different things:
- Configuration
- Training
- Monitoring
- Logging
- Analysis
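A minimal sketch of what such a project initializer could look like, creating one directory per concern listed above. All names here (`init_project`, `PROJECT_DIRS`, the `config.yaml` file) are illustrative assumptions, not part of aiarc:

```python
from pathlib import Path

# One sub-directory per concern the framework has to cover.
PROJECT_DIRS = ["configuration", "training", "monitoring", "logging", "analysis"]

def init_project(name: str, root: str = ".") -> Path:
    """Create a skeleton project directory with one folder per concern."""
    project = Path(root) / name
    for sub in PROJECT_DIRS:
        (project / sub).mkdir(parents=True, exist_ok=True)
    # Placeholder config file the framework could read on start-up.
    (project / "configuration" / "config.yaml").touch()
    return project
```

Calling `init_project("demo")` then yields a `demo/` folder with the five sub-directories and an empty `demo/configuration/config.yaml`.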
Foundations
I now have several years of experience modelling in regulated environments such as drug design, risk assessment, health care, and chemistry.
Packages I have built in that context are:
- chembee
- aiarc
An architectural pattern became prevalent enough that I found it in torchsr, too (which also has multiple years of development behind it):
- torchsr
Now, with aiarc, we aim to take modelling to the next stage.
Packages to use
- jax
- pytorch-gym (?)
- torch
torchsr
Super-Resolution Networks for Pytorch
Super-resolution is a process that increases the resolution of an image, adding additional details. Methods using neural networks give the most accurate results, much better than other interpolation methods. With the right training, it is even possible to make photo-realistic images.
For example, here is a low-resolution image, magnified x4 by a neural network, and a high resolution image of the same object:
In this repository, you will find:
- the popular super-resolution networks, pretrained
- common super-resolution datasets
- a unified training script for all models
Models
The following pretrained models are available. Click on the links for the paper:
Newer and larger models perform better: the most accurate models are EDSR (huge), RCAN and NinaSR-B2. For practical applications, I recommend a smaller model, such as NinaSR-B1.
Set5 results
Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM) |
---|---|---|---|---|
carn | 1.59 | 37.88 / 0.9600 | 34.32 / 0.9265 | 32.14 / 0.8942 |
carn_m | 0.41 | 37.68 / 0.9594 | 34.06 / 0.9247 | 31.88 / 0.8907 |
edsr_baseline | 1.37 | 37.98 / 0.9604 | 34.37 / 0.9270 | 32.09 / 0.8936 |
edsr | 40.7 | 38.19 / 0.9609 | 34.68 / 0.9293 | 32.48 / 0.8985 |
ninasr_b0 | 0.10 | 37.72 / 0.9594 | 33.96 / 0.9234 | 31.77 / 0.8877 |
ninasr_b1 | 1.02 | 38.14 / 0.9609 | 34.48 / 0.9277 | 32.28 / 0.8955 |
ninasr_b2 | 10.0 | 38.21 / 0.9612 | 34.61 / 0.9288 | 32.45 / 0.8973 |
rcan | 15.4 | 38.27 / 0.9614 | 34.76 / 0.9299 | 32.64 / 0.9000 |
rdn | 22.1 | 38.12 / 0.9609 | 33.98 / 0.9234 | 32.35 / 0.8968 |
Set14 results
Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM) |
---|---|---|---|---|
carn | 1.59 | 33.57 / 0.9173 | 30.30 / 0.8412 | 28.61 / 0.7806 |
carn_m | 0.41 | 33.30 / 0.9151 | 30.10 / 0.8374 | 28.42 / 0.7764 |
edsr_baseline | 1.37 | 33.57 / 0.9174 | 30.28 / 0.8414 | 28.58 / 0.7804 |
edsr | 40.7 | 33.95 / 0.9201 | 30.53 / 0.8464 | 28.81 / 0.7872 |
ninasr_b0 | 0.10 | 33.24 / 0.9144 | 30.02 / 0.8355 | 28.28 / 0.7727 |
ninasr_b1 | 1.02 | 33.71 / 0.9189 | 30.41 / 0.8437 | 28.71 / 0.7840 |
ninasr_b2 | 10.0 | 34.00 / 0.9206 | 30.53 / 0.8461 | 28.80 / 0.7863 |
rcan | 15.4 | 34.13 / 0.9216 | 30.63 / 0.8475 | 28.85 / 0.7878 |
rdn | 22.1 | 33.71 / 0.9182 | 30.07 / 0.8373 | 28.72 / 0.7846 |
DIV2K results (validation set)
Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM) | 8x (PSNR/SSIM) |
---|---|---|---|---|---|
carn | 1.59 | 36.08 / 0.9451 | 32.37 / 0.8871 | 30.43 / 0.8366 | N/A |
carn_m | 0.41 | 35.76 / 0.9429 | 32.09 / 0.8827 | 30.18 / 0.8313 | N/A |
edsr_baseline | 1.37 | 36.13 / 0.9455 | 32.41 / 0.8878 | 30.43 / 0.8370 | N/A |
edsr | 40.7 | 36.56 / 0.9485 | 32.75 / 0.8933 | 30.73 / 0.8445 | N/A |
ninasr_b0 | 0.10 | 35.77 / 0.9428 | 32.06 / 0.8818 | 30.09 / 0.8293 | 26.60 / 0.7084 |
ninasr_b1 | 1.02 | 36.35 / 0.9471 | 32.51 / 0.8892 | 30.56 / 0.8405 | 26.96 / 0.7207 |
ninasr_b2 | 10.0 | 36.52 / 0.9482 | 32.73 / 0.8926 | 30.73 / 0.8437 | 27.07 / 0.7246 |
rcan | 15.4 | 36.61 / 0.9489 | 32.78 / 0.8935 | 30.73 / 0.8447 | 27.17 / 0.7292 |
rdn | 22.1 | 36.32 / 0.9468 | 32.04 / 0.8822 | 30.61 / 0.8414 | N/A |
B100 results
Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM) |
---|---|---|---|---|
carn | 1.59 | 32.12 / 0.8986 | 29.07 / 0.8042 | 27.58 / 0.7355 |
carn_m | 0.41 | 31.97 / 0.8971 | 28.94 / 0.8010 | 27.45 / 0.7312 |
edsr_baseline | 1.37 | 32.15 / 0.8993 | 29.08 / 0.8051 | 27.56 / 0.7354 |
edsr | 40.7 | 32.35 / 0.9019 | 29.26 / 0.8096 | 27.72 / 0.7419 |
ninasr_b0 | 0.10 | 31.97 / 0.8974 | 28.90 / 0.8000 | 27.36 / 0.7290 |
ninasr_b1 | 1.02 | 32.24 / 0.9004 | 29.13 / 0.8061 | 27.62 / 0.7377 |
ninasr_b2 | 10.0 | 32.32 / 0.9014 | 29.23 / 0.8087 | 27.71 / 0.7407 |
rcan | 15.4 | 32.39 / 0.9024 | 29.30 / 0.8106 | 27.74 / 0.7429 |
rdn | 22.1 | 32.25 / 0.9006 | 28.90 / 0.8004 | 27.66 / 0.7388 |
Urban100 results
Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM) |
---|---|---|---|---|
carn | 1.59 | 31.95 / 0.9263 | 28.07 / 0.849 | 26.07 / 0.78349 |
carn_m | 0.41 | 31.30 / 0.9200 | 27.57 / 0.839 | 25.64 / 0.76961 |
edsr_baseline | 1.37 | 31.98 / 0.9271 | 28.15 / 0.852 | 26.03 / 0.78424 |
edsr | 40.7 | 32.97 / 0.9358 | 28.81 / 0.865 | 26.65 / 0.80328 |
ninasr_b0 | 0.10 | 31.33 / 0.9204 | 27.48 / 0.8374 | 25.45 / 0.7645 |
ninasr_b1 | 1.02 | 32.48 / 0.9319 | 28.29 / 0.8555 | 26.25 / 0.7914 |
ninasr_b2 | 10.0 | 32.91 / 0.9354 | 28.70 / 0.8640 | 26.54 / 0.8008 |
rcan | 15.4 | 33.19 / 0.9372 | 29.01 / 0.868 | 26.75 / 0.80624 |
rdn | 22.1 | 32.41 / 0.9310 | 27.49 / 0.838 | 26.36 / 0.79460 |
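The PSNR values in the tables above are in decibels. As a reference for how to read them, here is a sketch of the standard PSNR computation from mean squared error, assuming images scaled to [0, 1] (note that published numbers often additionally evaluate on the luminance channel and shave borders, as the training script's `--eval-luminance` and `--shave-border` flags suggest):

```python
import numpy as np

def psnr(img1: np.ndarray, img2: np.ndarray, max_value: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images in [0, max_value]."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```

For example, a uniform error of 0.1 per pixel gives an MSE of 0.01 and therefore a PSNR of 20 dB; every halving of the RMS error adds about 6 dB.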
All models are defined in `torchsr.models`. Other useful tools to augment your models, such as self-ensemble methods and tiling, are present in `torchsr.models.utils`.
Datasets
The following datasets are available. Click on the links for the project page:
All datasets are defined in `torchsr.datasets`. They return a list of images, with the high-resolution image followed by downscaled or degraded versions.
Data augmentation methods are provided in `torchsr.transforms`.
Datasets are downloaded automatically when using the `download=True` flag, or by running the corresponding script, i.e. `./scripts/download_div2k.sh`.
Usage
```python
from torchsr.datasets import Div2K
from torchsr.models import ninasr_b0
from torchvision.transforms.functional import to_pil_image, to_tensor

# Div2K dataset
dataset = Div2K(root="./data", scale=2, download=False)

# Get the first image in the dataset (High-Res and Low-Res)
hr, lr = dataset[0]

# Download a pretrained NinaSR model
model = ninasr_b0(scale=2, pretrained=True)

# Run the Super-Resolution model
lr_t = to_tensor(lr).unsqueeze(0)
sr_t = model(lr_t)
sr = to_pil_image(sr_t.squeeze(0))
sr.show()
```
More examples:

```python
from torchsr.datasets import Div2K
from torchsr.models import edsr, rcan
from torchsr.models.utils import ChoppedModel, SelfEnsembleModel
from torchsr.transforms import ColorJitter, Compose, RandomCrop

# Div2K dataset, cropped to 256px, with color jitter
dataset = Div2K(
    root="./data", scale=2, download=False,
    transform=Compose([
        RandomCrop(256, scales=[1, 2]),
        ColorJitter(brightness=0.2)
    ]))

# Pretrained RCAN model, with tiling for large images
model = ChoppedModel(
    rcan(scale=2, pretrained=True), scale=2,
    chop_size=400, chop_overlap=10)

# Pretrained EDSR model, with self-ensemble method for higher quality
model = SelfEnsembleModel(edsr(scale=2, pretrained=True))
```
Training
A script is available to train the models from scratch, evaluate them, and much more. It is not part of the pip package, and requires additional dependencies. More examples are available in `scripts/`.

```shell
pip install piq tqdm tensorboard  # Additional dependencies
python -m torchsr.train -h
python -m torchsr.train --arch edsr_baseline --scale 2 --download-pretrained --images test/butterfly.png --destination results/
python -m torchsr.train --arch edsr_baseline --scale 2 --download-pretrained --validation-only
python -m torchsr.train --arch edsr_baseline --scale 2 --epochs 300 --loss l1 --dataset-train div2k_bicubic
```

You can evaluate models from the command line as well. For example, for EDSR with the paper's PSNR evaluation:

```shell
python -m torchsr.train --validation-only --arch edsr_baseline --scale 2 --dataset-val set5 --chop-size 400 --download-pretrained --shave-border 2 --eval-luminance
```
Acknowledgements
Thanks to the people behind torchvision and EDSR, whose work inspired this repository. Some of the models available here come from EDSR-PyTorch and CARN-PyTorch.
To cite this work, please use:
```bibtex
@misc{torchsr,
  author = {Gabriel Gouvine},
  title = {Super-Resolution Networks for Pytorch},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Coloquinte/torchSR}},
  doi = {10.5281/zenodo.4868308}
}

@misc{ninasr,
  author = {Gabriel Gouvine},
  title = {NinaSR: Efficient Small and Large ConvNets for Super-Resolution},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Coloquinte/torchSR/blob/main/doc/NinaSR.md}},
  doi = {10.5281/zenodo.4868308}
}
```
Project details
File details
Details for the file `aiarc-0.0.2.tar.gz`.
File metadata
- Download URL: aiarc-0.0.2.tar.gz
- Upload date:
- Size: 51.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | e7cae11635b597e46c814a346c2b76f0fe7871991571e98ba68525e5db1ad5e7
MD5 | 194138e29d99a16053fed73cd4407ae7
BLAKE2b-256 | afe3110bcb30a89d8f503f5eaa612214be544449148b619cd96b04dda3d3946a
File details
Details for the file `aiarc-0.0.2-py3-none-any.whl`.
File metadata
- Download URL: aiarc-0.0.2-py3-none-any.whl
- Upload date:
- Size: 61.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | fd578b110628ffa21a2d90473f973f32d80c6be3642ff2e927ba9d3bdd5379b1
MD5 | b9a33265e41a55c928274ec0398c80a5
BLAKE2b-256 | 59a13ce1ad7ce2b9de779ae89726c96ddb8f701587c157ce6dc27a6b8b6860c3