Neural Style Transfer - CLI
Create artificial artwork by transferring the appearance of one image (e.g. a famous painting) to another user-supplied image (e.g. your favourite photograph).
Uses a Neural Style Transfer algorithm to transfer the appearance, which you can run through a CLI program.
Neural Style Transfer (NST) is an algorithm that applies the style of one image to the contents of another and produces a generated image. The idea is to find out how someone with the painting style shown in one image would depict the contents shown in another image.
NST takes a content image (e.g. a picture taken with your camera) and a style image (e.g. a picture of a Van Gogh painting) and produces the generated image.
This Python package runs a Neural Style Transfer algorithm on input content and style images to produce generated images.
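At its core, NST minimizes a weighted sum of a "content" loss and a "style" loss computed on deep-network feature activations, where style is captured by Gram matrices of feature channels. The NumPy sketch below illustrates these standard loss terms only; the function names, feature shapes, and weight values are illustrative assumptions, not this package's actual API.

```python
import numpy as np

def content_loss(content_features, generated_features):
    """Squared-error distance between the feature activations of the
    content image and of the generated image."""
    return 0.5 * np.sum((generated_features - content_features) ** 2)

def gram_matrix(features):
    """Channel-to-channel correlations of the activations; these
    correlations are what NST treats as 'style'.
    `features` has shape (height * width, channels)."""
    return features.T @ features

def style_loss(style_features, generated_features):
    """Squared-error distance between the Gram matrices of the style
    image and of the generated image, with the usual normalization."""
    hw, channels = style_features.shape
    difference = gram_matrix(generated_features) - gram_matrix(style_features)
    return np.sum(difference ** 2) / (4.0 * hw ** 2 * channels ** 2)

def total_loss(content_f, style_f, generated_f, alpha=10.0, beta=40.0):
    """Weighted combination that the algorithm iteratively minimizes;
    alpha/beta trade content fidelity against stylization strength."""
    return alpha * content_loss(content_f, generated_f) + beta * style_loss(style_f, generated_f)
```

In the real package these losses are evaluated on VGG activations and minimized with TensorFlow; the sketch just shows the shape of the objective.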
Overview
This package exposes a configurable NST algorithm via a convenient CLI program.
Key features of the package:
Selection of style layers at runtime
Iterative Learning Algorithm using the VGG Deep Neural Network
Selection of iteration termination condition at runtime
Fast minimization of the loss/cost function with parallel/multicore execution, using TensorFlow
Persisting of generated images
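One of the features above is a runtime-selectable iteration termination condition. A minimal sketch of what such a condition can look like, assuming a generic optimization loop (the function names and signature are hypothetical, not this package's API): stop either after a fixed number of iterations or once the loss stops improving by more than a tolerance.

```python
import numpy as np

def run_nst(step, initial_image, max_iterations=100, tolerance=None):
    """Repeatedly apply `step` (one optimization update, returning
    (image, loss)) until `max_iterations` is reached, or until the
    loss improvement drops below `tolerance` when one is given."""
    image, previous_loss = initial_image, float("inf")
    for iteration in range(1, max_iterations + 1):
        image, loss = step(image)
        if tolerance is not None and abs(previous_loss - loss) < tolerance:
            break  # converged: loss barely changed this iteration
        previous_loss = loss
    return image, iteration
```

With `tolerance=None` this degenerates to a plain fixed-iteration run, matching the "iterate 100 times" usage shown below.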
Installation
Sample commands to install the NST CLI from source, using a terminal:
```shell
# Get the Code
git clone https://github.com/boromir674/neural-style-transfer.git
cd neural-style-transfer

# Activate a python virtual environment
virtualenv env --python=python3
source env/bin/activate

# Install dependencies
pip install -r requirements/dex.txt

# Install NST CLI (in virtual environment)
pip install -e .
```
Alternatively, install the NST CLI by downloading the artificial_artwork Python package from PyPI:

```shell
pip install artificial_artwork
```
Make the CLI available for your host system:

```shell
# Set up a symbolic link (in your host system) in a location in your PATH
# Assuming ~/.local/bin is in your PATH
ln -s $PWD/env/bin/neural-style-transfer ~/.local/bin/neural-style-transfer

# Deactivate the environment, since the symbolic link is available in "global" scope by now
deactivate
```
Usage
Download the Vgg-Verydeep-19 pretrained model from https://drive.protonmail.com/urls/7RXGN23ZRR#hsw4STil0Hgc.
Extract the model (weights and layer architecture).
For example, use tar -xvf imagenet-vgg-verydeep-19.tar to extract it in the current directory.
Indicate to the program where to find the model:
```shell
export AA_VGG_19=$PWD/imagenet-vgg-verydeep-19.mat
```
We have included one ‘content’ and one ‘style’ image in the source repository, to facilitate testing. You can use these images to quickly try running the program.
For example, you can get the code with git clone git@github.com:boromir674/neural-style-transfer.git, then cd neural-style-transfer.
Assuming you have installed via a symbolic link in your PATH (as shown above), or you are still operating within your virtual environment, you can create artificial artwork with the following command.
The algorithm will apply the style to the content iteratively, running for 100 iterations.
```shell
# Create a directory in which to store the artificial artwork
mkdir nst_output

# Run the Neural Style Transfer algorithm for 100 iterations and store output in the nst_output directory
neural-style-transfer tests/data/canoe_water.jpg tests/data/blue-red-w400-h300.jpg --location nst_output
```
Note that the 'content' and 'style' images used here are jpg files included in the distribution (the artificial-artwork package): a photo of a canoe on water and an abstract painting dominated by blue and red shades.
Also note that, to keep the demonstration fast, both images have already been resized to 400 pixels in width and 300 in height.
Navigating to nst_output, you will find multiple image files generated by the algorithm. Each file corresponds to the image generated at a different iteration; the higher the iteration number, the more "style" has been applied.
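Since each output file encodes its iteration number, a plain lexicographic sort can misorder them (e.g. iteration 100 before iteration 20). A small helper for browsing the outputs in iteration order; the filename pattern and `.png` extension here are guesses for illustration, not the package's documented naming scheme:

```python
import re
from pathlib import Path

def generated_images_in_order(directory, pattern="*.png"):
    """Sort generated image files by the iteration number embedded at
    the end of the filename, so iteration 100 sorts after iteration 20."""
    def iteration_of(path):
        numbers = re.findall(r"\d+", path.stem)
        # Use the last number in the name as the iteration; -1 if none found
        return int(numbers[-1]) if numbers else -1
    return sorted(Path(directory).glob(pattern), key=iteration_of)
```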
Check out your artificial artwork!
Docker image
We have included a Dockerfile that we use to build an image in which both the artificial_artwork package (source code) and the pretrained model are present. That way you can immediately start creating artwork!
```shell
docker pull boromir674/neural-style-transfer

export NST_OUTPUT=/home/$USER/nst-output
CONTENT=/path/to/content-image.jpg
STYLE=/path/to/style-image.jpg

docker run -it --rm -v $NST_OUTPUT:/nst-output boromir674/neural-style-transfer \
    $CONTENT $STYLE --iterations 200 --location /nst-output
```