Paper #1 - Detailed Garment Recovery from a Single-View Image
Key Notes
- Recovers the global shape and geometry of the clothing
- Extracts occluded wrinkles and folds
- Pipeline: parameter estimation, semantic parsing, shape recovery, and physics-based cloth simulation
- The current implementation depends on two databases: a database of commonly available garment templates and a database of human-body models.
Implementation Notes
- The user marks 14 joint positions on the image and provides a rough sketch outlining the human body silhouette
- A semantic parse of the garments in the image identifies and localizes the depicted clothing items
Human body - The authors follow the PCA encoding of human body shape from [Hasler et al. 2009]. The semantic parameters include gender, height, weight, muscle percentage, breast girth, waist girth, hip girth, thigh girth, calf girth, shoulder height, and leg length (a small sketch of this mapping is shown below).
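To make the encoding concrete, here is a minimal sketch assuming a linear map from the 11 semantic measurements to PCA shape coefficients over a flattened vertex basis. All names, dimensions, and the random placeholder matrices are illustrative assumptions, not the paper's data.

```python
# Minimal sketch (not the authors' code): mapping semantic body parameters to
# mesh vertices via a PCA shape space in the spirit of [Hasler et al. 2009].
# The linear semantic->PCA map and all dimensions below are assumptions.
import numpy as np

n_vertices = 6890          # assumed mesh resolution
n_pca = 20                 # number of retained PCA shape components
n_semantic = 11            # gender, height, weight, ..., leg length

mean_shape = np.zeros(3 * n_vertices)                         # mean body mesh (flattened xyz)
pca_basis = np.random.randn(3 * n_vertices, n_pca) * 1e-3     # placeholder PCA basis
semantic_to_pca = np.random.randn(n_pca, n_semantic) * 1e-2   # assumed linear map

def body_from_semantics(params):
    """params: length-11 vector of normalized semantic measurements."""
    pca_coeffs = semantic_to_pca @ params           # semantic -> PCA coefficients
    vertices = mean_shape + pca_basis @ pca_coeffs  # PCA coefficients -> mesh vertices
    return vertices.reshape(n_vertices, 3)

body = body_from_semantics(np.zeros(n_semantic))    # mean body as a sanity check
```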
Garment Parsing
- The clothing regions Ω_{b,h,g} are extracted by a two-stage image segmentation guided by the user sketch (a stand-in sketch of one such guided segmentation follows this list)
- Initial garment registration: garments are fit to human bodies with different body shapes and poses
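The paper's exact segmentation method differs, but as a stand-in, the sketch below shows a two-stage, sketch-guided segmentation using OpenCV's GrabCut: hard labels seeded from the user strokes, then iterative graph-cut refinement. Function and argument names are assumptions.

```python
# Minimal sketch (not the paper's implementation): two-stage, sketch-guided
# segmentation with GrabCut as a stand-in. The user sketch seeds
# foreground/background labels; GrabCut refines the garment region.
import cv2
import numpy as np

def segment_garment(image, fg_strokes, bg_strokes):
    """image: HxWx3 uint8; fg_strokes / bg_strokes: HxW boolean masks from the user sketch."""
    mask = np.full(image.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
    mask[fg_strokes] = cv2.GC_FGD      # stage 1: hard labels from the sketch
    mask[bg_strokes] = cv2.GC_BGD
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    # stage 2: refine the labeling with GrabCut's iterative graph cut
    cv2.grabCut(image, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))  # binary garment region
```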
Implementation - The authors implemented the algorithm in C++ and demonstrate its effectiveness throughout the paper
Paper #2 - M2E-Try On Net: Fashion from Model to Everyone
- Pose alignment network (PAN) - aligns the model and garment pose to the target person's pose
- Texture refinement network (TRN) - enriches the textures and logo patterns of the desired clothes
- Fitting network (FTN) - merges the transferred garments into the target person image (see the pipeline sketch after this list)
- The task is accomplished with unsupervised and self-supervised learning.
- Generative adversarial networks (GANs) [9] have been used for image-based generation
- GANs have also been used for person image generation [18], synthesizing a human image from a pose representation
- For fashion image generation, a more intuitive approach is to generate the output directly from a person image and the desired clothes image
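A minimal sketch of the three-stage flow (PAN → TRN → FTN), assuming placeholder encoder-decoder modules and an 18-joint pose heatmap input; none of the module internals below reflect the published architecture.

```python
# Minimal sketch (not the authors' code) of M2E Try-On Net's three-stage flow:
# PAN aligns the model image to the target pose, TRN refines textures/logos,
# FTN merges the transferred garment with the target person. All module
# internals are placeholder assumptions.
import torch
import torch.nn as nn

class ConvStage(nn.Module):
    """Placeholder encoder-decoder standing in for PAN / TRN / FTN."""
    def __init__(self, in_ch, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

pan = ConvStage(in_ch=3 + 18)   # model image + target pose heatmaps (18 joints assumed)
trn = ConvStage(in_ch=3 + 3)    # coarse aligned image + warped texture
ftn = ConvStage(in_ch=3 + 3)    # refined garment + target person image

def try_on(model_img, target_pose, warped_texture, person_img):
    aligned = pan(torch.cat([model_img, target_pose], dim=1))   # stage 1: pose alignment
    refined = trn(torch.cat([aligned, warped_texture], dim=1))  # stage 2: texture refinement
    return ftn(torch.cat([refined, person_img], dim=1))         # stage 3: fit to target person
```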
Dataset - DeepFashion [17] Women Tops dataset and MVC [16] Women Tops dataset
PAN
- PAN is a conditional generative module
- Training PAN ideally requires a triplet of paired images: the model image M, the person image P, and the pose-aligned model image
- Since such triplets are unavailable, a self-supervised training method uses images of the same person in two different poses to supervise PAN (see the sketch below)
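A minimal sketch of that self-supervised step, assuming an L1 reconstruction objective and hypothetical tensor names: two photos of the same person in different poses provide the input and the pseudo ground truth.

```python
# Minimal sketch (an assumption, not the published training code) of the
# self-supervised idea: two images of the same person in different poses let
# the network predict one view from the other, giving "free" supervision.
import torch
import torch.nn.functional as F

def pan_self_supervised_step(pan, img_pose_a, img_pose_b, pose_b_heatmaps, optimizer):
    """img_pose_a / img_pose_b: same person in two poses; pose_b_heatmaps: target pose."""
    optimizer.zero_grad()
    pred_b = pan(torch.cat([img_pose_a, pose_b_heatmaps], dim=1))  # warp A toward pose B
    loss = F.l1_loss(pred_b, img_pose_b)  # img_pose_b acts as the pseudo ground truth
    loss.backward()
    optimizer.step()
    return loss.item()
```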
Texture Refinement Network (TRN)
- TRN combines information from the network-generated image and a texture-preserving image produced by geometric transformation
- Losses - reconstruction, perceptual, and style losses are used only for paired training (a combined-loss sketch follows)
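A hedged sketch of how such a paired objective could be combined, assuming a frozen feature extractor (e.g., VGG slices) passed in by the caller and illustrative loss weights; the exact formulation and weights in the paper may differ.

```python
# Minimal sketch (assumed form, not the authors' exact losses): a pixel
# reconstruction loss plus perceptual and style (Gram matrix) losses computed
# on features from a pretrained network.
import torch
import torch.nn.functional as F

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def trn_paired_loss(pred, target, feature_extractor, w_rec=1.0, w_perc=1.0, w_style=250.0):
    """feature_extractor: any frozen CNN returning a list of feature maps."""
    rec = F.l1_loss(pred, target)                                             # reconstruction
    feats_p, feats_t = feature_extractor(pred), feature_extractor(target)
    perc = sum(F.l1_loss(fp, ft) for fp, ft in zip(feats_p, feats_t))         # perceptual
    style = sum(F.l1_loss(gram(fp), gram(ft)) for fp, ft in zip(feats_p, feats_t))  # style
    return w_rec * rec + w_perc * perc + w_style * style
```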
Paper #3 - VITON-GAN: Virtual Try-on Image Generator Trained with Adversarial Loss
Code - Link
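The paper's key point, per the title, is training the try-on generator with an adversarial loss. Below is a generic, hedged sketch of adding such a term to a generator's objective; the discriminator, loss form, and weighting are assumptions, not the paper's architecture.

```python
# Minimal sketch (an assumption, not VITON-GAN's code): an adversarial term,
# scored by a small discriminator, added on top of an L1 reconstruction loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Tiny stand-in discriminator producing a real/fake score map."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

def generator_loss(disc, fake, target, adv_weight=0.1):
    rec = F.l1_loss(fake, target)                       # reconstruction term
    score = disc(fake)
    adv = F.binary_cross_entropy_with_logits(score, torch.ones_like(score))  # fool D
    return rec + adv_weight * adv

def discriminator_loss(disc, fake, target):
    real_score, fake_score = disc(target), disc(fake.detach())
    return (F.binary_cross_entropy_with_logits(real_score, torch.ones_like(real_score)) +
            F.binary_cross_entropy_with_logits(fake_score, torch.zeros_like(fake_score)))
```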
Paper #4 - GarmentGAN: Photo-realistic Adversarial Fashion Transfer
This method divides the image generation task into two sub-tasks: segmentation map synthesis, and transfer of the clothing characteristics onto the previously generated map.
The system comprises two separate GANs: a shape transfer network and an appearance transfer network (a two-stage sketch follows).
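A minimal sketch of that two-stage generation with placeholder generators; the channel counts, number of parsing classes, and module structure are assumptions, not GarmentGAN's released implementation.

```python
# Minimal sketch (an assumption, not GarmentGAN's code) of the two-stage flow:
# a shape-transfer GAN predicts a new segmentation map, then an
# appearance-transfer GAN paints clothing texture onto that map.
import torch
import torch.nn as nn

class PlaceholderGenerator(nn.Module):
    """Stand-in generator; the real networks are far larger."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1))
    def forward(self, x):
        return self.net(x)

n_seg = 20  # assumed number of human-parsing classes
shape_gan = PlaceholderGenerator(in_ch=n_seg + 3, out_ch=n_seg)       # person parse + garment image
appearance_gan = PlaceholderGenerator(in_ch=n_seg + 3 + 3, out_ch=3)  # new parse + garment + person

def garment_transfer(person_parse, garment_img, person_img):
    new_parse = shape_gan(torch.cat([person_parse, garment_img], dim=1)).softmax(dim=1)
    rgb = appearance_gan(torch.cat([new_parse, garment_img, person_img], dim=1))
    return new_parse, torch.tanh(rgb)
```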
More Reads
- GarmentGAN: Photo-realistic Adversarial Fashion Transfer
- Mining Fashion Outfit Composition Using An End-to-End Deep Learning Approach on Set Data
- Algorithmic clothing: hybrid recommendation, from street-style-to-shop
- Aesthetic-based Clothing Recommendation
Keep Thinking!!!