
GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration


  1. :boom: Updated online demo: Replicate. Here is the backup.
  2. :boom: Updated online demo: Huggingface Gradio
  3. Colab Demo for GFPGAN; (Another Colab Demo for the original paper model)

:rocket: Thanks for your interest in our work. You may also want to check our new updates on the tiny models for anime images and videos in Real-ESRGAN :blush:

GFPGAN aims at developing a Practical Algorithm for Real-world Face Restoration.
It leverages rich and diverse priors encapsulated in a pretrained face GAN (e.g., StyleGAN2) for blind face restoration.

:question: Frequently Asked Questions can be found in FAQ.md.

:triangular_flag_on_post: Updates

  • :white_check_mark: Add CodeFormer (CC BY-NC-SA 4.0 License) and RestoreFormer.
  • :white_check_mark: Add V1.4 model, which produces slightly more details and better identity than V1.3.
  • :white_check_mark: Add V1.3 model, which produces more natural restoration results and better results on very low-quality / high-quality inputs. See more in the Model Zoo and Comparisons.md.
  • :white_check_mark: Integrated to Huggingface Spaces with Gradio. See Gradio Web Demo.
  • :white_check_mark: Support enhancing non-face regions (background) with Real-ESRGAN.
  • :white_check_mark: We provide a clean version of GFPGAN, which does not require CUDA extensions.
  • :white_check_mark: We provide an updated model without colorizing faces.

If GFPGAN is helpful for your photos/projects, please help :star: this repo or recommend it to your friends. Thanks! :blush: Other recommended projects:
:arrow_forward: Real-ESRGAN: A practical algorithm for general image restoration
:arrow_forward: BasicSR: An open-source image and video restoration toolbox
:arrow_forward: facexlib: A collection of useful face-related functions
:arrow_forward: HandyView: A PyQt5-based image viewer that is handy for viewing and comparison


:book: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior

[Paper]   [Project Page]   [Demo]
Xintao Wang, Yu Li, Honglun Zhang, Ying Shan
Applied Research Center (ARC), Tencent PCG


:wrench: Dependencies and Installation

Installation

We now provide a clean version of GFPGAN, which does not require customized CUDA extensions.
If you want to use the original model in our paper, please see PaperModel.md for installation.

  1. Clone repo

    git clone https://github.com/TencentARC/GFPGAN.git
    cd GFPGAN
    
  2. Install dependent packages

    # Install basicsr - https://github.com/xinntao/BasicSR
    # We use BasicSR for both training and inference
    pip install basicsr
    
    # Install facexlib - https://github.com/xinntao/facexlib
    # We use face detection and face restoration helper in the facexlib package
    pip install facexlib
    
    pip install -r requirements.txt
    python setup.py develop
    
    # If you want to enhance the background (non-face) regions with Real-ESRGAN,
    # you also need to install the realesrgan package
    pip install realesrgan
    

:zap: Quick Inference

We take the v1.3 model as an example. More models can be found here.

Download pre-trained models: GFPGANv1.3.pth

wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P experiments/pretrained_models
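If wget is unavailable (e.g., on Windows), the same download can be done from Python with the standard library. `download_model` below is a hypothetical convenience helper, not part of the GFPGAN package:

```python
import os
import urllib.request

MODEL_URL = "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth"

def download_model(url: str = MODEL_URL,
                   dest_dir: str = "experiments/pretrained_models") -> str:
    """Fetch the pretrained weights into dest_dir (the directory the
    inference script expects), skipping the download if the file exists."""
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, url.rsplit("/", 1)[-1])
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)
    return dest
```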

Inference!

python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2
Usage: python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2 [options]...

  -h                   show this help
  -i input             Input image or folder. Default: inputs/whole_imgs
  -o output            Output folder. Default: results
  -v version           GFPGAN model version. Option: 1 | 1.2 | 1.3. Default: 1.3
  -s upscale           The final upsampling scale of the image. Default: 2
  -bg_upsampler        Background upsampler. Default: realesrgan
  -bg_tile             Tile size for the background upsampler; 0 for no tiling during testing. Default: 400
  -suffix              Suffix of the restored faces
  -only_center_face    Only restore the center face
  -aligned             Inputs are aligned faces
  -ext                 Image extension. Options: auto | jpg | png; auto means using the same extension as the inputs. Default: auto
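The `-ext auto` behavior can be sketched in a few lines; `resolve_extension` is an illustrative helper, not the inference script's actual implementation:

```python
import os

def resolve_extension(input_path: str, ext: str = "auto") -> str:
    """Mimic the -ext option: 'auto' reuses the input file's extension
    (falling back to png if there is none); any other value is used as-is."""
    if ext == "auto":
        return os.path.splitext(input_path)[1].lstrip(".").lower() or "png"
    return ext

print(resolve_extension("photo.JPG"))         # auto -> jpg
print(resolve_extension("photo.jpg", "png"))  # forced -> png
```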

If you want to use the original model in our paper, please see PaperModel.md for installation and inference.
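The `-bg_tile` option above makes the background upsampler process the image in fixed-size tiles to limit GPU memory. A rough sketch of that tiling, with `tile_grid` as a hypothetical helper:

```python
def tile_grid(width: int, height: int, tile: int = 400):
    """Cover a width x height image with tile x tile boxes
    (edge tiles may be smaller); tile=0 disables tiling."""
    if tile <= 0:
        return [(0, 0, width, height)]
    return [
        (x, y, min(x + tile, width), min(y + tile, height))
        for y in range(0, height, tile)
        for x in range(0, width, tile)
    ]

print(tile_grid(800, 400))  # two 400x400 tiles side by side
```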

:european_castle: Model Zoo

Version | Model Name | Description
------- | ---------- | -----------
V1.3 | GFPGANv1.3.pth | Based on V1.2; more natural restoration results; better results on very low-quality / high-quality inputs.
V1.2 | GFPGANCleanv1-NoCE-C2.pth | No colorization; no CUDA extensions required. Trained with more data and with pre-processing.
V1 | GFPGANv1.pth | The paper model, with colorization.

The comparisons are in Comparisons.md.

Note that V1.3 is not always better than V1.2. You may need to select different models based on your purpose and inputs.

Version | Strengths | Weaknesses
------- | --------- | ----------
V1.3 | ✓ natural outputs; ✓ better results on very low-quality inputs; ✓ works on relatively high-quality inputs; ✓ allows repeated (twice) restorations | ✗ not very sharp; ✗ slight change in identity
V1.2 | ✓ sharper outputs; ✓ with beauty makeup | ✗ some outputs are unnatural

You can find more models (such as the discriminators) here: [Google Drive] or [Tencent Cloud 腾讯微云].

:computer: Training

We provide the training codes for GFPGAN (used in our paper).
You can adapt them to your own needs.

Tips

  1. More high-quality faces can improve the restoration quality.
  2. You may need to perform some pre-processing, such as beauty makeup.

Procedures

(You can try a simple version (options/train_gfpgan_v1_simple.yml) that does not require face component landmarks.)

  1. Dataset preparation: FFHQ

  2. Download pre-trained models and other data. Put them in the experiments/pretrained_models folder.

    1. Pre-trained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth
    2. Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth
    3. A simple ArcFace model: arcface_resnet18.pth
  3. Modify the configuration file options/train_gfpgan_v1.yml accordingly.
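The fields to check are typically the dataset root and the pretrained-model paths. The fragment below is illustrative only, following BasicSR config conventions; the exact key names and values in options/train_gfpgan_v1.yml may differ:

```yaml
# Illustrative excerpt only -- compare against the real options/train_gfpgan_v1.yml
datasets:
  train:
    dataroot_gt: datasets/ffhq/ffhq_512   # path to the prepared FFHQ images

path:
  # pretrained StyleGAN2 generator downloaded in step 2
  pretrain_network_g: experiments/pretrained_models/StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth
```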

  4. Training

python -m torch.distributed.launch --nproc_per_node=4 --master_port=22021 gfpgan/train.py -opt options/train_gfpgan_v1.yml --launcher pytorch

:scroll: License and Acknowledgement

GFPGAN is released under Apache License Version 2.0.

BibTeX

@InProceedings{wang2021gfpgan,
    author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},
    title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2021}
}

:e-mail: Contact

If you have any questions, please email xintao.wang@outlook.com or xintaowang@tencent.com.
