- Data as input → feature extraction on the data → patterns learned from the data to drive decisions (prediction / action)
- Generate new synthetic data (GAN)
- Universal Approximation Theorem: a feedforward network with a single hidden layer is sufficient to approximate, to arbitrary precision, any continuous function (1989). Any problem can be reduced to a mapping from a set of inputs to a set of outputs.
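For reference, the classic statement (Cybenko, 1989): for any continuous function $f$ on a compact set $K$ and any $\varepsilon > 0$, there is a single-hidden-layer network $F$ with sigmoidal activation $\sigma$ such that:

```latex
F(x) = \sum_{i=1}^{N} \alpha_i \,\sigma\!\left(w_i^{\top} x + b_i\right),
\qquad
\sup_{x \in K} \left| F(x) - f(x) \right| < \varepsilon
```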
- Training neural networks is difficult because the loss landscape is non-convex
- "Understanding Deep Learning Requires Rethinking Generalization" (Zhang et al., 2017)
- A large disparity between training and test performance means the network is not generalizing, only memorizing
- Modern Deep Networks can perfectly fit random data
- Neural networks are excellent function approximators
- Adversarial examples: modify pixels at specific locations to decrease accuracy as much as possible (a minimal FGSM sketch follows the list below)
- Data Hungry
- Computationally intensive
- Fooled by Adversarial examples
- Poor at representing uncertainty
- Uninterpretable black boxes
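One standard way to do this pixel modification is the Fast Gradient Sign Method (FGSM; Goodfellow et al., 2015). The sketch below is a minimal illustration, not from the original notes; the toy model, `eps`, and fake data are placeholder assumptions:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.03):
    """Perturb input x in the direction that maximizes the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step each pixel by eps in the sign of its gradient, then clamp to [0, 1].
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a hypothetical 10-class classifier on 28x28 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)          # fake batch of images in [0, 1]
y = torch.randint(0, 10, (8,))        # fake labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())        # perturbation bounded by eps
```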
- The notion of probability at the output is different from uncertainty
- NNs are trained to produce probabilities at the output; they are not trained to produce uncertainty estimates
- Rewrite the posterior over the weights using Bayes' rule; in practice it is intractable to compute
- Approximate it instead through sampling
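Concretely, for weights $W$ and training data $(X, Y)$, Bayes' rule gives:

```latex
p(W \mid X, Y) \;=\; \frac{p(Y \mid X, W)\, p(W)}{\int p(Y \mid X, W')\, p(W')\, \mathrm{d}W'}
```

The integral in the denominator, over all possible weight settings, is what makes the posterior intractable and motivates sampling-based approximations.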
- Elementwise dropout for uncertainty
- Dropout as a way to produce reliable uncertainty estimates for neural networks
- The variance over the outputs across stochastic forward passes gives the uncertainty measure (see the sketch below)
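A minimal Monte Carlo dropout sketch (in the spirit of Gal & Ghahramani, 2016); the toy model, input size, and `num_samples=50` are placeholder assumptions, not from the notes:

```python
import torch
import torch.nn as nn

# Toy model with dropout between layers (architecture is a placeholder).
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # kept active at prediction time for MC sampling
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, num_samples=50):
    """Run stochastic forward passes with dropout ON; return mean and variance."""
    model.train()  # train() mode keeps dropout active (no weights are updated here)
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(num_samples)])
    return preds.mean(dim=0), preds.var(dim=0)

x = torch.randn(32, 10)            # hypothetical batch of inputs
mean, var = mc_dropout_predict(model, x)
print(mean.shape, var.shape)       # var is the per-output uncertainty measure
```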
- AutoML: an automated ML framework that learns to learn
- Controller (RNN): samples candidate NN architectures
- Training data → sampled (child) network → predicted labels; the child's accuracy is fed back as the reward used to update the controller
- The goal: design AI algorithms that can build new models capable of solving the task (see the sketch below)
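A minimal sketch of this search loop. For brevity, random sampling stands in for the RNN controller, and the search space, toy data, and reward are illustrative assumptions:

```python
import random
import torch
import torch.nn as nn

# Hypothetical search space over MLP width and depth.
SEARCH_SPACE = {"hidden": [16, 32, 64], "depth": [1, 2, 3]}

def sample_architecture():
    """Stand-in for the controller: sample one candidate architecture."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def build_child(arch, in_dim=10, out_dim=2):
    """Build the sampled (child) network from the architecture description."""
    layers, width = [], in_dim
    for _ in range(arch["depth"]):
        layers += [nn.Linear(width, arch["hidden"]), nn.ReLU()]
        width = arch["hidden"]
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)

def reward(child, x, y, steps=100):
    """Briefly train the child network; return its accuracy as the reward."""
    opt = torch.optim.Adam(child.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(child(x), y)
        loss.backward()
        opt.step()
    return (child(x).argmax(dim=1) == y).float().mean().item()

x, y = torch.randn(256, 10), torch.randint(0, 2, (256,))  # fake task
best = max((sample_architecture() for _ in range(10)),
           key=lambda a: reward(build_child(a), x, y))
print("best architecture found:", best)
```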
Happy Mastering DL!!!