Generative Modelling
- Form a representation (a probability distribution) from training examples
- GANs can then generate more samples from that distribution
- Learn a distribution over the training examples
- The probability distribution is high-dimensional
- Simulate futures for Reinforcement Learning
- Realistic Image Generation tasks
- Predicting Next Frame in Video - Adversarial loss to train the model
- Super Resolution - Downsample to half resolution, SRResNet, SRGAN
- GAN for interactive photo-editing
- Image to Image Translation - Conditional GAN for multi-modal output distributions
- Maximum Likelihood (Write down density function that model describes)
- Distribution controlled by parameters Theta
- Types of density functions - maximum likelihood can be accomplished through different approaches
- Explicit / Implicit Density Function
- Markov Chain to estimate Density Function / Gradient
- Procedure to draw samples from the probability distribution
- Explicit formula based on the chain rule of probability: p(x) = ∏ p(x_i | x_1, ..., x_{i-1})
- WaveNet is a Fully Visible Belief Network (minimise a cost function with no approximation)
- VAE (writes down an explicit density function, but one that is intractable)
- Markov Chains perform poorly in high-dimensional spaces
- Use a Latent Code
- Asymptotically consistent
- No Markov Chain Needed
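The maximum-likelihood idea above can be made concrete with the simplest explicit, tractable density: a 1-D Gaussian whose parameters θ = (μ, σ) are fit to training examples. A minimal sketch, where the data source and numbers are illustrative:

```python
import numpy as np

# Training examples drawn from an unknown distribution (here: N(2, 0.5^2))
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=10_000)

# Explicit density controlled by parameters theta = (mu, sigma):
# log p(x; theta) summed over all training examples
def log_likelihood(mu, sigma, x):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2))

# For the Gaussian the maximum-likelihood estimates have a closed form
mu_hat = data.mean()
sigma_hat = data.std()

# The MLE beats any other parameter setting on the training data
assert log_likelihood(mu_hat, sigma_hat, data) > log_likelihood(0.0, 1.0, data)
```

Deep generative models follow the same principle, but the density is parameterised by a network and maximised by gradient ascent rather than in closed form.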
Two Models - Adversary of Each Other
Generator - Generates Samples that resemble training distribution
Discriminator - Tool to inspect whether a sample is real or fake (a differentiable function)
Training Procedure
- Sample two different mini-batches: one of real training examples, one of generated samples
- Generator minimizes the log-probability of the discriminator being correct (in practice it maximizes the log-probability of the discriminator being mistaken, which gives stronger gradients early in training)
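The losses in the two-player game above can be sketched in numpy. This is a minimal sketch, not a full training loop: the linear discriminator D and its weights are illustrative, and the "generated" batch is a stand-in for G(z):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical discriminator D(x) = sigmoid(w*x + b); weights are illustrative
w, b = 1.5, -1.0
def D(x):
    return sigmoid(w * x + b)

rng = np.random.default_rng(0)
real = rng.normal(1.0, 0.2, size=128)   # minibatch of training examples
fake = rng.normal(-1.0, 0.2, size=128)  # minibatch standing in for generated samples

# Discriminator loss: push D(real) toward 1 and D(fake) toward 0
d_loss = -np.mean(np.log(D(real))) - np.mean(np.log(1.0 - D(fake)))

# Minimax generator loss: minimize log(1 - D(G(z))); saturates when D is confident
g_loss_minimax = np.mean(np.log(1.0 - D(fake)))

# Non-saturating heuristic: maximize log D(G(z)) instead
g_loss_ns = -np.mean(np.log(D(fake)))
```

In a real implementation each loss is differentiated with respect to its own model's parameters, alternating discriminator and generator updates.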
VAE - If high likelihood is the goal, use a VAE (designed to maximize a lower bound on the likelihood)
DCGAN works very well on faces
Tips and Tricks
- Learning conditional models often gives better samples
- Label Smoothing - Good Regularizer
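One-sided label smoothing can be sketched as follows: replace the discriminator's target for real examples (1.0) with a softer value such as 0.9 (the exact value is illustrative), while leaving the fake target at 0.0:

```python
import numpy as np

def bce(p, target):
    # Binary cross-entropy for a batch of predicted probabilities p
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

p_real = np.array([0.7, 0.95, 0.99])  # discriminator outputs on real examples
p_fake = np.array([0.1, 0.3, 0.05])   # discriminator outputs on fake samples

# One-sided label smoothing: real target 1.0 -> 0.9, fake target stays 0.0
d_loss_smooth = bce(p_real, 0.9) + bce(p_fake, 0.0)
d_loss_hard = bce(p_real, 1.0) + bce(p_fake, 0.0)
```

The smoothed target regularizes the discriminator by penalizing extreme confidence: its optimal output on real data becomes 0.9 rather than 1.0, so its gradients to the generator stay informative.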