Paper #1 - Real-Time Deep Hair Matting on Mobile Devices
Key Notes
- Hand-crafted features for segmentation.
- Employ simple pixel-wise color models to classify hair.
- Fully Convolutional MobileNet Architecture for Hair Segmentation
- HairSegNet
- Weights are pre-trained on ImageNet; for the layers whose resolution is updated, all kernels are dilated by their scale factor
- Upsampling is performed by a simplified version of an inverted MobileNet architecture
- a loss function that promotes perceptually accurate matting output
- HairMatteNet runs twice as fast as HairSegNet
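One common way to make a matting loss "perceptually accurate" is to penalize mismatch between the gradients of the predicted matte and a reference, so edges line up. Below is a minimal stdlib Python sketch of such a gradient-consistency term; the function name and the exact formulation are mine, not the paper's:

```python
def gradient_loss(pred, target):
    """Mean L1 difference between the horizontal and vertical image
    gradients of two mattes, given as lists of rows of floats in [0, 1].
    A hypothetical simplification of a perceptual matting loss: it is
    zero only when the two mattes have identical edge structure."""
    h, w = len(pred), len(pred[0])
    loss, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:  # horizontal gradient at (y, x)
                loss += abs((pred[y][x + 1] - pred[y][x])
                            - (target[y][x + 1] - target[y][x]))
                count += 1
            if y + 1 < h:  # vertical gradient at (y, x)
                loss += abs((pred[y + 1][x] - pred[y][x])
                            - (target[y + 1][x] - target[y][x]))
                count += 1
    return loss / count
```

For example, a matte with a sharp vertical edge scored against a flat matte gives a nonzero loss, while any matte scored against itself gives exactly zero.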
Paper #2 - Intuitive, Interactive Beard and Hair Synthesis with Generative Models
Key Notes
- Edge detection or image gradients would be an intuitive approach
- Generative adversarial networks (GANs)
- Two-stage pipeline
- First stage focuses on synthesizing realistic facial hair
- Texture synthesis techniques
- pixel-based methods
- stitching-based methods
Generative adversarial networks (GANs) [26] have inspired a large body of high-quality image synthesis and editing approaches
Two Stage Network
- The first stage synthesizes the hair in this region.
- The second stage refines and composites the synthesized hair into the input image.
Close-up images contain high-resolution, complex hair structures; failing to capture all of this complexity limits the plausibility of the synthesized images
Paper - Link
- Generator embeds the input latent code into an intermediate latent space
Deep Convolutional Generative Adversarial Network
- During training, the generator progressively becomes better at creating images that look real, while the discriminator becomes better at telling them apart.
- The process reaches equilibrium when the discriminator can no longer distinguish real images from fakes.
- Both the generator and discriminator are defined using the Keras Sequential API.
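The adversarial game described in these notes is usually written as the standard GAN minimax objective (from the original GAN formulation): the discriminator $D$ maximizes its ability to tell real from fake, while the generator $G$ minimizes it.

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

At the equilibrium mentioned above, $D(x) = \tfrac{1}{2}$ everywhere, i.e. the discriminator can do no better than guessing.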
How to code a Generative Adversarial Network (GAN) in Python
Paper #3 - Progressive Growing of GANs for Improved Quality, Stability, and Variation
- Our primary contribution is a training methodology for GANs where we start with low-resolution images, and then progressively increase the resolution by adding layers to the networks
- When training the discriminator, we feed in real images that are downscaled to match the current resolution of the network
- Multi-scale statistical similarity for assessing GAN results
- Intuitively, a small Wasserstein distance indicates that the patch distributions are similar
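For two equal-size 1-D samples (e.g. projected patch descriptors, as in sliced-Wasserstein evaluation), the empirical Wasserstein-1 distance has a simple closed form: sort both samples and average the absolute differences of the matched order statistics. A minimal sketch (the helper name is mine):

```python
def wasserstein_1d(a, b):
    """Empirical Wasserstein-1 distance between two equal-size 1-D
    samples: sort both and average |a_(i) - b_(i)| over matched
    order statistics. Zero iff the empirical distributions coincide."""
    if len(a) != len(b):
        raise ValueError("samples must have equal size")
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)
```

For example, two samples drawn from the same set give distance 0, while shifting one sample by a constant `c` gives distance `c`, matching the intuition that a small distance means similar patch distributions.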
Paper #4 - Training Language GANs from Scratch
- Generator architecture and reward structure
- Large Batch Sizes for Variance Reduction
- Unlike image GANs, ScratchGAN learns an explicit model of data
- Generative Adversarial Network (GAN) colab
- Conditional_gan colab
- InterFaceGAN - Interpreting the Latent Space of GANs for Semantic Face Editing
- StyleGAN — Official TensorFlow Implementation
- Hairstyle Transfer — Semantic Editing GAN Latent Code
CelebHair: A New Large-Scale Dataset for Hairstyle Recommendation based on CelebA
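Paper #4's note on large batch sizes for variance reduction can be illustrated with a toy score-function (REINFORCE-style) gradient estimator: averaging over a larger batch shrinks the estimator's variance roughly as 1/batch size. The reward, distribution, and function names below are entirely hypothetical, chosen only to make the effect visible with the standard library:

```python
import random
import statistics

def grad_estimate(batch_size, rng):
    """One score-function estimate of d/dmu E[r(x)] for x ~ N(mu, 1)
    at mu = 0, with toy reward r(x) = x**2 (hypothetical setup).
    Per-sample estimator: r(x) * (x - mu) = x**3; batch estimate is
    the mean over the batch."""
    return statistics.fmean(
        x * x * x for x in (rng.gauss(0.0, 1.0) for _ in range(batch_size))
    )

def empirical_variance(batch_size, trials=2000, seed=0):
    """Variance of the batch gradient estimate across many trials."""
    rng = random.Random(seed)
    return statistics.pvariance(
        [grad_estimate(batch_size, rng) for _ in range(trials)]
    )
```

Running `empirical_variance(4)` versus `empirical_variance(64)` shows the variance dropping by roughly the 16x batch-size ratio, which is the motivation ScratchGAN's notes give for training with large batches.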
More Reads
- Hair Color Digitization through Imaging and Deep Inverse Graphics
- Intuitive, Interactive Beard and Hair Synthesis with Generative Models
- Progressive Color Transfer with Dense Semantic Correspondences
- Local Facial Attribute Transfer through Inpainting
- Generative Single Image Reflection Separation
- ReflectNet - A Generative Adversarial Method for Single Image Reflection Suppression
- A Style-Based Generator Architecture for Generative Adversarial Networks
- K-Hairstyle: A Large-Scale Korean Hairstyle Dataset for Virtual Hair Editing and Hairstyle Classification
- AttGAN: Facial Attribute Editing by Only Changing What You Want
- HairCLIP: Design Your Hair by Text and Reference Image
- fAshIon after fashion: A Report of AI in Fashion
- Keypoints-Based 2D Virtual Try-on Network System
- StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery
- VOGUE: Try-On by StyleGAN Interpolation Optimization
- Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing
Keep Thinking!!!