
A PyTorch implementation of artistic style transfer

Project description

neural-style-pt

This is a PyTorch implementation of the paper A Neural Algorithm of Artistic Style by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. The code is based on Justin Johnson's Neural-Style.

The paper presents an algorithm for combining the content of one image with the style of another image using convolutional neural networks. Here's an example that maps the artistic style of The Starry Night onto a night-time photograph of the Stanford campus:

Applying the style of different images to the same content image gives interesting results. Here we reproduce Figure 2 from the paper, which renders a photograph of Tübingen, Germany in a variety of styles:

Here are the results of applying the style of various pieces of artwork to this photograph of the Golden Gate Bridge:

Content / Style Tradeoff

The algorithm allows the user to trade off the relative weight of the style and content reconstruction terms, as shown in this example where we port the style of Picasso's 1907 self-portrait onto Brad Pitt:

Style Scale

By resizing the style image before extracting style features, we can control the types of artistic features that are transferred from the style image; you can control this behavior with the -style_scale flag. Below we see three examples of rendering the Golden Gate Bridge in the style of The Starry Night. From left to right, -style_scale is 2.0, 1.0, and 0.5.
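
For example, a command along these lines (the image paths are illustrative) would favor larger, coarser features from the style image:

neural-style -content_image golden_gate.jpg -style_image starry_night.jpg -style_scale 2.0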

Multiple Style Images

You can use more than one style image to blend multiple artistic styles.

Clockwise from upper left: "The Starry Night" + "The Scream", "The Scream" + "Composition VII", "Seated Nude" + "Composition VII", and "Seated Nude" + "The Starry Night"

Style Interpolation

When using multiple style images, you can control the degree to which they are blended:
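
For example, the following command (file names are illustrative) would weight The Scream roughly twice as heavily as The Starry Night, using the -style_blend_weights option described below:

neural-style -content_image content.jpg -style_image starry_night.jpg,the_scream.jpg -style_blend_weights 1,2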

Transfer style but not color

If you add the flag -original_colors 1 then the output image will retain the colors of the original image.
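
For example (file names are illustrative):

neural-style -content_image content.jpg -style_image style.jpg -original_colors 1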

Setup:

While you can use Python 2's pip, it's recommended that you use Python 3's pip:

# in a terminal, run the command
pip3 install neural-style

After installing neural-style-pt, you'll need to run the following command to download the default VGG and NIN models:

neural-style -download_models

By default the models are downloaded to your home directory, but you can specify a download location with:

neural-style -download_models <download_path>

This downloads both the original VGG-19 and VGG-16 models; the VGG-19 model is used by default.

If your GPU has limited memory, the NIN ImageNet model is a better choice; it gives slightly worse but still comparable results. You can find details on the model in the BVLC Caffe Model Zoo. The NIN model is downloaded when you run neural-style -download_models.

Usage

Basic usage:

neural-style -style_image <image.jpg> -content_image <image.jpg>

cuDNN usage with NIN Model:

neural-style -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -output_image profile.png -model_file models/nin_imagenet.pth -gpu 0 -backend cudnn -num_iterations 1000 -seed 123 -content_layers relu0,relu3,relu7,relu12 -style_layers relu0,relu3,relu7,relu12 -content_weight 10 -style_weight 500 -image_size 512 -optimizer adam

cuDNN NIN Model Picasso Brad Pitt

To use multiple style images, pass a comma-separated list like this:

-style_image starry_night.jpg,the_scream.jpg

Note that paths to images should not contain the ~ character to represent your home directory; you should instead use a relative path or a full absolute path.

Options:

  • -image_size: Maximum side length (in pixels) of the generated image. Default is 512.
  • -style_blend_weights: The weight for blending the style of multiple style images, as a comma-separated list, such as -style_blend_weights 3,7. By default all style images are equally weighted.
  • -gpu: Zero-indexed ID of the GPU to use; for CPU mode set -gpu to c.
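
For example, to run entirely on the CPU (file names are illustrative):

neural-style -style_image style.jpg -content_image content.jpg -gpu c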

Optimization options:

  • -content_weight: How much to weight the content reconstruction term. Default is 5e0.
  • -style_weight: How much to weight the style reconstruction term. Default is 1e2.
  • -tv_weight: Weight of total-variation (TV) regularization; this helps to smooth the image. Default is 1e-3. Set to 0 to disable TV regularization.
  • -num_iterations: Default is 1000.
  • -init: Method for initializing the generated image; one of random or image. Default is random, which uses noise initialization as in the paper; image initializes with the content image.
  • -init_image: Replaces the initialization image with a user-specified image.
  • -optimizer: The optimization algorithm to use; either lbfgs or adam; default is lbfgs. L-BFGS tends to give better results, but uses more memory. Switching to ADAM will reduce memory usage; when using ADAM you will probably need to play with other parameters to get good results, especially the style weight, content weight, and learning rate.
  • -learning_rate: Learning rate to use with the ADAM optimizer. Default is 1e1.
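
For example, a starting point when switching to the ADAM optimizer might look like the following; the weights and learning rate shown are simply the documented defaults spelled out explicitly, as illustrative values to tune from rather than recommended settings:

neural-style -content_image content.jpg -style_image style.jpg -optimizer adam -learning_rate 10 -content_weight 5 -style_weight 100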

Output options:

  • -output_image: Name of the output image. Default is out.png.
  • -print_iter: Print progress every print_iter iterations. Set to 0 to disable printing.
  • -save_iter: Save the image every save_iter iterations. Set to 0 to disable saving intermediate results.

Layer options:

  • -content_layers: Comma-separated list of layer names to use for content reconstruction. Default is relu4_2.
  • -style_layers: Comma-separated list of layer names to use for style reconstruction. Default is relu1_1,relu2_1,relu3_1,relu4_1,relu5_1.

Other options:

  • -style_scale: Scale at which to extract features from the style image. Default is 1.0.
  • -original_colors: If you set this to 1, then the output image will keep the colors of the content image.
  • -model_file: Path to the .pth file for the VGG Caffe model. Default is the original VGG-19 model; you can also try the original VGG-16 model.
  • -pooling: The type of pooling layers to use; one of max or avg. Default is max. The VGG-19 model uses max pooling layers, but the paper mentions that replacing these layers with average pooling layers can improve the results. I haven't been able to get good results using average pooling, but the option is here.
  • -seed: An integer value that you can specify for repeatable results. By default this value is random for each run.
  • -multidevice_strategy: A comma-separated list of layer indices at which to split the network when using multiple devices. See Multi-GPU scaling for more details.
  • -backend: nn, cudnn, or mkl. Default is nn. mkl requires Intel's MKL backend.
  • -cudnn_autotune: When using the cuDNN backend, pass this flag to use the built-in cuDNN autotuner to select the best convolution algorithms for your architecture. This will make the first iteration a bit slower and can take a bit more memory, but may significantly speed up the cuDNN backend.
  • -download_models: Path to which the VGG-19, VGG-16, and NIN models will be downloaded. If no path is specified, the models are downloaded to your home directory.

Frequently Asked Questions

Problem: The program runs out of memory and dies

Solution: Try reducing the image size: -image_size 256 (or lower). Note that different image sizes will likely require non-default values for -style_weight and -content_weight for optimal results. If you are running on a GPU, you can also try running with -backend cudnn to reduce memory usage.

Problem: -backend cudnn is slower than default NN backend

Solution: Add the flag -cudnn_autotune; this will use the built-in cuDNN autotuner to select the best convolution algorithms.

Problem: Get the following error message:

Missing key(s) in state_dict: "classifier.0.bias", "classifier.0.weight", "classifier.3.bias", "classifier.3.weight". Unexpected key(s) in state_dict: "classifier.1.weight", "classifier.1.bias", "classifier.4.weight", "classifier.4.bias".

Solution: Due to a mix-up with layer locations, older models require a fix to be compatible with newer versions of PyTorch. Downloading the models with neural-style -download_models automatically applies these fixes. You can find other compatible models here.

Memory Usage

By default, neural-style-pt uses the nn backend for convolutions and L-BFGS for optimization. These give good results, but can both use a lot of memory. You can reduce memory usage with the following:

  • Use cuDNN: Add the flag -backend cudnn to use the cuDNN backend. This will only work in GPU mode.
  • Use ADAM: Add the flag -optimizer adam to use ADAM instead of L-BFGS. This should significantly reduce memory usage, but may require tuning of other parameters for good results; in particular you should play with the learning rate, content weight, and style weight. This should work in both CPU and GPU modes.
  • Reduce image size: If the above tricks are not enough, you can reduce the size of the generated image; pass the flag -image_size 256 to generate an image at half the default size.
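
A command combining all three memory-saving options (file names are illustrative) might look like:

neural-style -content_image content.jpg -style_image style.jpg -backend cudnn -optimizer adam -image_size 256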

With the default settings, neural-style-pt uses about 3.7 GB of GPU memory on my system; switching to ADAM and cuDNN reduces the GPU memory footprint to about 1 GB.

Speed

Speed can vary a lot depending on the backend and the optimizer. Here are some times for running 500 iterations with -image_size 512 on a Tesla K80 with different settings:

  • -backend nn -optimizer lbfgs: 117 seconds
  • -backend nn -optimizer adam: 100 seconds
  • -backend cudnn -optimizer lbfgs: 124 seconds
  • -backend cudnn -optimizer adam: 107 seconds
  • -backend cudnn -cudnn_autotune -optimizer lbfgs: 109 seconds
  • -backend cudnn -cudnn_autotune -optimizer adam: 91 seconds

Here are the same benchmarks on a GTX 1080:

  • -backend nn -optimizer lbfgs: 56 seconds
  • -backend nn -optimizer adam: 38 seconds
  • -backend cudnn -optimizer lbfgs: 40 seconds
  • -backend cudnn -optimizer adam: 40 seconds
  • -backend cudnn -cudnn_autotune -optimizer lbfgs: 23 seconds
  • -backend cudnn -cudnn_autotune -optimizer adam: 24 seconds

Multi-GPU scaling

You can use multiple CPU and GPU devices to process images at higher resolutions; different layers of the network will be computed on different devices. You can control which GPU and CPU devices are used with the -gpu flag, and you can control how to split layers across devices using the -multidevice_strategy flag.

For example in a server with four GPUs, you can give the flag -gpu 0,1,2,3 to process on GPUs 0, 1, 2, and 3 in that order; by also giving the flag -multidevice_strategy 3,6,12 you indicate that the first two layers should be computed on GPU 0, layers 3 to 5 should be computed on GPU 1, layers 6 to 11 should be computed on GPU 2, and the remaining layers should be computed on GPU 3. You will need to tune the -multidevice_strategy for your setup in order to achieve maximal resolution.
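
Putting that example together as a full command (the file names and image size are illustrative):

neural-style -content_image content.jpg -style_image style.jpg -gpu 0,1,2,3 -multidevice_strategy 3,6,12 -image_size 2048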

We can achieve very high quality results at high resolution by combining multi-GPU processing with multiscale generation as described in the paper Controlling Perceptual Factors in Neural Style Transfer by Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann and Eli Shechtman.

Here is a 4016 x 2213 image generated on a server with eight Tesla K80 GPUs:

The script used to generate this image can be found here.

Implementation details

Images are initialized with white noise and optimized using L-BFGS.

We perform style reconstructions using the conv1_1, conv2_1, conv3_1, conv4_1, and conv5_1 layers and content reconstructions using the conv4_2 layer. As in the paper, the five style reconstruction losses have equal weights.

Citation

If you find this code useful for your research, please cite:

@misc{ProGamerGov2018,
  author = {ProGamerGov},
  title = {neural-style-pt},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/ProGamerGov/neural-style-pt}},
}



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neural-style-0.5.7.tar.gz (15.5 kB)

Uploaded Source

Built Distributions

neural_style-0.5.7-py3-none-any.whl (16.4 kB)

Uploaded Python 3

neural_style-0.5.7-py2.py3-none-any.whl (16.4 kB)

Uploaded Python 2 Python 3

neural_style-0.5.7-py2-none-any.whl (16.4 kB)

Uploaded Python 2

File details

Details for the file neural-style-0.5.7.tar.gz.

File metadata

  • Download URL: neural-style-0.5.7.tar.gz
  • Upload date:
  • Size: 15.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.15.0 pkginfo/1.5.0.1 requests/2.9.1 setuptools/45.2.0 requests-toolbelt/0.9.1 tqdm/4.42.1 CPython/3.5.2

File hashes

Hashes for neural-style-0.5.7.tar.gz:

  • SHA256: 28234f2116d1f9d11b22081463f9597825cf88bbe524edbb614f45b15054020c
  • MD5: 499ffd8a5fafc412fe4eba172ee4d827
  • BLAKE2b-256: f90672be88a4f63b51e47fcdace7ee030ccca7925b3c704e07302dbec0dc3e22


File details

Details for the file neural_style-0.5.7-py3-none-any.whl.

File metadata

  • Download URL: neural_style-0.5.7-py3-none-any.whl
  • Upload date:
  • Size: 16.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.15.0 pkginfo/1.5.0.1 requests/2.9.1 setuptools/45.2.0 requests-toolbelt/0.9.1 tqdm/4.42.1 CPython/3.5.2

File hashes

Hashes for neural_style-0.5.7-py3-none-any.whl:

  • SHA256: 47b0944c57d371387c13770a64a43a0829f25453c56e6b3a16bce1402e212dd1
  • MD5: ee775b455e2cbb22259cc180180339e9
  • BLAKE2b-256: 8af906c8d79db0264f041c10649e9fde30390eb0f17aca6b756d7490dbb0f454


File details

Details for the file neural_style-0.5.7-py2.py3-none-any.whl.

File metadata

  • Download URL: neural_style-0.5.7-py2.py3-none-any.whl
  • Upload date:
  • Size: 16.4 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.15.0 pkginfo/1.5.0.1 requests/2.9.1 setuptools/45.2.0 requests-toolbelt/0.9.1 tqdm/4.42.1 CPython/3.5.2

File hashes

Hashes for neural_style-0.5.7-py2.py3-none-any.whl:

  • SHA256: 74105bdb79d0df33fb3c4c5a6db50bba22e4601e3094293ad38f99d93e26e2cd
  • MD5: fdf6e77d6231d9eb61662e256faba6b1
  • BLAKE2b-256: 886bdb87dc3244b832dfba32c8814c11a11dd8e5d925cfc83102ec871f587939


File details

Details for the file neural_style-0.5.7-py2-none-any.whl.

File metadata

  • Download URL: neural_style-0.5.7-py2-none-any.whl
  • Upload date:
  • Size: 16.4 kB
  • Tags: Python 2
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.15.0 pkginfo/1.5.0.1 requests/2.9.1 setuptools/45.2.0 requests-toolbelt/0.9.1 tqdm/4.42.1 CPython/3.5.2

File hashes

Hashes for neural_style-0.5.7-py2-none-any.whl:

  • SHA256: 30c7755b695823befaced59f401abef792d53232092c1bfd60ac2b3a1ed21298
  • MD5: c2e9812408675cce23bfd7f42787b64e
  • BLAKE2b-256: 0ce6b8cff2d1599912e74865ab6c67b6a8a0fc2b4fab810165c8cfd49724c879

