Virtual Try-On: AR vs Vision
Paper #1 - Augmented Reality based Virtual Dressing Room using Unity3D
AR Advantages
- ARKit recognizes and tracks a person's movements using an iOS device's rear camera.
- Requires an A12 Bionic chip (or later) running iOS 13.
- ARKit 3's Human Body Tracking capability.
- Model your mesh in a standard T-pose.
- A 3D skeleton is generated that imitates human motion in real time.
- iOS mobile platform.
In a nutshell, an augmented reality virtual fitting room mobile app for iOS is being developed in conjunction with a human body recognition and motion tracking model.
In your 3D-modeling software package (such as Maya, Cinema 4D, or Modo), import the provided skeleton and the custom mesh model that you want to use with ARKit's Motion Capture functionality.
Your character should be modeled in a T-pose, your scene should contain only one bind pose, and the rotational values of each joint in your hierarchy should match the values in the provided example skeleton.
ARKit's body-tracking functionality requires models to be in a specific format.
To superimpose the clothing over the user's body, we needed a 3D model of the garment, which we created using Blender
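A minimal Blender-Python (bpy) sketch of getting a garment mesh like this out of Blender and into the Unity/ARKit project as glTF. Operator names assume Blender 3.x, and the file paths are placeholders rather than the paper's actual assets.

```python
# Run inside Blender's scripting tab (Blender 3.x operator names assumed).
import bpy

# Clear the default scene so only the garment ends up in the export.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Import the garment mesh (placeholder path).
bpy.ops.import_scene.obj(filepath="/tmp/tshirt.obj")

# Export a single binary glTF that Unity can import directly.
bpy.ops.export_scene.gltf(filepath="/tmp/tshirt.glb", export_format='GLB')
```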
Demos - Unity Virtual Fitting Room Full Tutorial + Cloth | Unity, Realtime Tracking, Realsense, Kinect, etc
Face Tracking - Unity Documentation
Augmented Reality for Everyone - Full Course
GO VIRTUAL: NOW YOU CAN BE YOUR OWN STYLE AVATAR - Link
Dense Human Pose Estimation In The Wild - Link
Demo - Link
DensePose - Dense human pose estimation aims at mapping all human pixels of an RGB image to the 3D surface of the human body.
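A small sketch of what consuming DensePose output can look like, assuming you already have an IUV map (per-pixel body-part index plus U, V surface coordinates) dumped from the official tools; the array below is a placeholder, not a real inference result.

```python
import numpy as np

def part_mask(iuv: np.ndarray, part_ids) -> np.ndarray:
    """Boolean mask of pixels whose DensePose part index is in part_ids."""
    # Channel 0 of an IUV map holds the body-part index (0 = background).
    return np.isin(iuv[..., 0].astype(int), list(part_ids))

# Placeholder H x W x 3 IUV map; parts 1 and 2 are the torso in DensePose.
iuv = np.zeros((480, 640, 3), dtype=np.uint8)
torso = part_mask(iuv, {1, 2})
print("torso pixel count:", int(torso.sum()))
```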
Deep Fashion3D: Dataset & Benchmark for Virtual Clothing Try-On and More
- Deep Fashion3D contains 2,078 3D garment models reconstructed from real-world garments in 10 different clothing categories
Paper - Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images
- We present Deep Fashion3D, a large-scale repository of 3D clothing models reconstructed from real garments
Sample reconstruction - Link
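A quick sketch for inspecting one of the reconstructed garment meshes with trimesh; the file name is an assumption, so check the dataset's actual layout before using it.

```python
import trimesh

# Load a garment mesh (placeholder path into the dataset).
mesh = trimesh.load("deep_fashion3d/short_sleeve_shirt_001.obj", force="mesh")

print("vertices:", mesh.vertices.shape)            # (N, 3) surface points
print("faces:", mesh.faces.shape)                  # (M, 3) triangle indices
print("bounding box extents:", mesh.bounding_box.extents)
```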
Paper - Body Capture and Marker-based Garment Reconstruction
Our goal is to generate a 3D model of a person wearing a garment, from multiview RGB videos
- Garment Digitizing: Digitize the garment into a 3D flat mesh.
- Marker Tracking: Track the markers and obtain their 3D locations (see the triangulation sketch after this list).
- Body Capture: Reconstruct a body model with accurate shape and pose.
- Garment Reconstruction: Virtually wear the garment on the body
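A toy sketch of the Marker Tracking step under the assumption of two calibrated views: given known projection matrices and the tracked 2D marker locations, OpenCV triangulation recovers the 3D positions. The matrices and coordinates below are placeholders in normalized image coordinates.

```python
import numpy as np
import cv2

# Toy projection matrices: camera 2 is translated 0.1 units along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# Tracked marker locations in each view, as 2 x N arrays (placeholder values).
pts1 = np.array([[0.10, 0.05], [0.21, -0.02], [-0.05, 0.12]]).T
pts2 = np.array([[0.08, 0.05], [0.19, -0.02], [-0.07, 0.12]]).T

points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4 x N homogeneous
points_3d = (points_h[:3] / points_h[3]).T             # N x 3 marker positions
print(points_3d)
```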
Paper - Image-based Dress-up System
- Skeleton Setting - To establish the necessary correspondences between the model and garment images, we let the user manually select joint positions on the input image with simplified skeleton structures
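A minimal sketch of that manual skeleton-setting step using an OpenCV mouse callback; the joint list and image path are assumptions, not the paper's actual interface.

```python
import cv2

JOINTS = ["head", "neck", "l_shoulder", "r_shoulder", "l_hip", "r_hip"]
clicks = []

def on_click(event, x, y, flags, param):
    # Record one joint position per left click, in the order of JOINTS.
    if event == cv2.EVENT_LBUTTONDOWN and len(clicks) < len(JOINTS):
        clicks.append((x, y))
        print(f"{JOINTS[len(clicks) - 1]}: ({x}, {y})")

image = cv2.imread("model_photo.jpg")   # placeholder input image
cv2.namedWindow("select joints")
cv2.setMouseCallback("select joints", on_click)

while len(clicks) < len(JOINTS):
    cv2.imshow("select joints", image)
    if cv2.waitKey(20) & 0xFF == 27:    # Esc aborts the selection
        break
cv2.destroyAllWindows()
```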
Paper - Virtual Fitting Solution using 3D Human Modelling and Garments
- Combining multiple deep learning models to create a system that uses all of the models' inferences and produces a single output
- Create a pipeline that integrates 2D-based virtual garment fitting solutions with 3D reconstruction networks to visualize the virtual try-on results in 3D
- In skeleton-based modelling, only the X, Y coordinates of the body joints are identified and analysed (see the pose-keypoint sketch after this list)
- In 3D posture estimation, the X, Y, and Z coordinates of human body joints are used
- OpenPose initially finds key-points that correspond to each person in the image
- DensePose is used to estimate 3D postures from a 2D image on a surface-based human model
- DensePose is implemented using multiple neural networks that combine regression and classification tasks
- DeepCut provides an approach for detecting and estimating the human body pose
- Graphonomy uses graph transfer learning to perform universal human parsing across several human parsing tasks, making better use of the available annotations
- LIP_JPPNet is a deep learning model for body-part segmentation and pose detection built with TensorFlow, trained on the Look into Person (LIP) dataset
- CIHP_PGN is a neural network that provides instance-level human parsing using a part grouping network
- Its stages are semantic part segmentation, instance-aware edge detection, refinement, and an instance partition process
- Pose Detection Component
- OpenPose Network Architecture
- Geometric Matching Module
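A hedged stand-in for the pose-detection component: MediaPipe Pose (not OpenPose) is used here only because it is simple to run and exposes the X, Y and relative Z joint coordinates discussed above; the image path is a placeholder and the legacy mediapipe "solutions" API is assumed.

```python
import cv2
import mediapipe as mp

image = cv2.imread("person.jpg")                    # placeholder input image
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    result = pose.process(rgb)

if result.pose_landmarks:
    for idx, lm in enumerate(result.pose_landmarks.landmark):
        # x, y are normalized image coordinates; z is a relative depth estimate.
        print(idx, round(lm.x, 3), round(lm.y, 3), round(lm.z, 3))
```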
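A deliberately simplified sketch of the garment-warping idea behind the Geometric Matching Module: fit a transform from garment keypoints to matching body keypoints and warp the garment image. The actual module learns a thin-plate-spline warp with a CNN; the affine fit and all coordinates below are assumptions to keep the sketch self-contained.

```python
import numpy as np
import cv2

# Three matching keypoints (placeholder values): shoulders and hem centre.
garment_pts = np.float32([[50, 40], [250, 40], [150, 300]])    # on the flat garment image
body_pts    = np.float32([[80, 120], [260, 130], [170, 420]])  # on the person image

cloth = cv2.imread("tshirt_flat.png")                # placeholder garment image
M = cv2.getAffineTransform(garment_pts, body_pts)    # exact affine fit to 3 points
warped = cv2.warpAffine(cloth, M, (640, 960))        # garment roughly aligned to the body
cv2.imwrite("tshirt_warped.png", warped)
```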
More Git Solutions - Link
Fashion parsing models in TensorFlow
Module: MMM-WeatherDependentClothes
This MagicMirror module displays clothes depending on the weather forecast and your personal preferences.
Paper - DEEP LEARNING MEETS FASHION - A LOOK INTO VIRTUAL TRY-ON SOLUTIONS
- Multi-Garment Network’s dataset contains scans, SMPL registration, texture_maps, segmentation_maps, and multi-mesh registered garments
Two base models are used: Multi-Garment Net [4] and Pix2Surf [2]. A third model, the 3D human body reconstruction model SMPL [18], is used implicitly by MGN and Pix2Surf as a black box. [2] and [4] use SMPL to create 3D garment templates and to redress the 3D avatar.
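A hedged sketch of the shared SMPL building block using the smplx Python package: it assumes the separately licensed SMPL model files have already been downloaded under models/, and the all-zero parameters just produce the neutral template body.

```python
import torch
import smplx

# Expects e.g. models/smpl/SMPL_NEUTRAL.pkl to exist (downloaded separately).
model = smplx.create(model_path="models", model_type="smpl", gender="neutral")

betas = torch.zeros(1, 10)          # body-shape coefficients
body_pose = torch.zeros(1, 69)      # axis-angle pose for the 23 body joints
global_orient = torch.zeros(1, 3)   # root orientation

output = model(betas=betas, body_pose=body_pose, global_orient=global_orient)
vertices = output.vertices.detach().numpy()[0]   # (6890, 3) body surface
print("SMPL mesh vertices:", vertices.shape)
```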
Paper - Virtual Garment Imposition using ACGPN
ACGPN consists of three modules.
1. A Semantic Generation Module, which uses segmentation to map the target clothes onto the human body.
2. A Clothes Warping Module, which adjusts the garment image to the deformed garment mask.
3. A Content Fusion Module, which combines the outputs of the previous modules to adaptively decide which parts of the human body structure need to be generated or preserved in the resulting composited layer.
- Clothes Warping Module (CWM)
- Content Fusion Module (CFM)
- Semantic Generation Module (SGM)
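A greatly simplified numpy view of what the Content Fusion Module composites: keep the person everywhere except where the predicted clothing mask places the warped garment. ACGPN does this with a learned generator; this blend only makes the data flow concrete, using placeholder arrays.

```python
import numpy as np

H, W = 256, 192
person       = np.zeros((H, W, 3), dtype=np.float32)   # person image (normalized)
warped_cloth = np.ones((H, W, 3), dtype=np.float32)    # garment after warping
cloth_mask   = np.zeros((H, W, 1), dtype=np.float32)   # predicted clothing region
cloth_mask[60:180, 40:150] = 1.0                        # placeholder torso region

# Composite: garment where the mask is on, original person elsewhere.
try_on = cloth_mask * warped_cloth + (1.0 - cloth_mask) * person
print("composited try-on image:", try_on.shape)
```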
Keep Exploring!!!