- Deep NN for Medical Decisions
- Deep NN for Understanding Scenes
- Deep NN for Visual Recognition
- Explanations of algorithmic decisions are needed before deploying models
- Accuracy metrics, Feature space
Understanding Individual CNN Units
- CNN Layers
- Sources of Data & Training
- Visualize internal representations
- Backpropagation
- Back-project into image space
- Iteratively use the gradient to visualize / activate units (see the sketch after this list)
- Lower layers capture textures
- Higher layers detect parts of objects
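The "iteratively use the gradient" idea can be made concrete with activation maximization: start from noise and follow the gradient of a chosen unit's activation back into image space. A minimal sketch, assuming a pretrained VGG-16 from torchvision; the layer index, unit index, learning rate, and step count are arbitrary illustrative choices, not details from the talk.

```python
import torch
import torchvision.models as models

# Assumed: pretrained VGG-16 from torchvision (weights API needs torchvision >= 0.13).
model = models.vgg16(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the input image is optimized

target_layer = model.features[10]    # illustrative mid-level conv layer (256 channels)
unit = 42                            # illustrative unit index

activation = {}
target_layer.register_forward_hook(lambda m, i, o: activation.update(value=o))

# Start from random noise and back-project gradients into image space.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(img)
    # Gradient ascent: minimize the negative mean activation of the chosen unit.
    loss = -activation["value"][0, unit].mean()
    loss.backward()
    optimizer.step()
# img now approximates an "iconic" input that strongly activates the unit.
```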
Data-Driven Visualization
- Sample-based
- Visualize units at each layer
- Top Activated Images
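A hedged sketch of the sample-based approach: run a dataset through the network, record each image's mean activation for one unit, and keep the top-k images. The dataset path, layer, unit index, and k are placeholders, not details from the talk.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

model = models.resnet18(weights="IMAGENET1K_V1").eval()
layer = model.layer3                  # assumed layer of interest
unit = 7                              # assumed unit index

acts = {}
layer.register_forward_hook(lambda m, i, o: acts.update(value=o))

tf = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
dataset = ImageFolder("path/to/images", transform=tf)    # placeholder path
loader = DataLoader(dataset, batch_size=32)

scores = []                           # (mean activation of the unit, dataset index)
with torch.no_grad():
    for batch_idx, (imgs, _) in enumerate(loader):
        model(imgs)
        per_image = acts["value"][:, unit].mean(dim=(1, 2))   # spatial mean per image
        for i, s in enumerate(per_image):
            scores.append((s.item(), batch_idx * 32 + i))

top_activated = sorted(scores, reverse=True)[:10]   # top-activated images for this unit
```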
Network Dissection
- Framework to quantify interpretability
- Intersection over Union (IoU) = Area of Overlap / Area of Union (see the sketch after this list)
- RNN-based explanation generator model that generates textual explanations for classification models
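The IoU score above is easy to state in code: binarize a unit's (upsampled) activation map, compare it with a binary concept segmentation mask, and divide the overlap area by the union area. The fixed threshold and array shapes below are illustrative assumptions; Network Dissection itself derives the threshold from activation quantiles over the dataset.

```python
import numpy as np

def iou(activation_map, concept_mask, threshold=0.5):
    """Intersection over Union between a thresholded activation map
    and a binary concept segmentation mask."""
    unit_mask = activation_map > threshold
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return intersection / union if union > 0 else 0.0

# Toy example: a 7x7 activation map vs. a concept mask of the same size.
act = np.random.rand(7, 7)
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True
print(iou(act, mask))
```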
Talk #2 - Understanding Models via Visualization, Attribution and Semantic Identification
Key Summary
- Exploring the black box of deep networks
- A deep network interpreted as a sequence of functions
Generating Iconic Examples
- Inverting Layers
- Reconstruct the image pixel by pixel
- Find which parts of the image are salient for the deep network (see the sketch after this list)
- Finding Artifacts
- Sensitivity Analysis
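A minimal sketch of gradient-based sensitivity analysis / saliency: backpropagate the top class score to the input pixels and take the per-pixel gradient magnitude as a saliency map. The ResNet-18 model and random input are stand-ins for illustration.

```python
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

img = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in for a preprocessed image
scores = model(img)
top_class = scores.argmax(dim=1).item()

# Backpropagate the top class score to the input pixels.
scores[0, top_class].backward()

# Saliency: maximum absolute gradient across the color channels, per pixel.
saliency = img.grad.abs().max(dim=1)[0]                  # shape (1, 224, 224)
```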
Grad-CAM (Gradient-weighted Class Activation Mapping)
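A hedged sketch of the Grad-CAM recipe: pool the gradients of the class score over each feature map of the last convolutional block to get per-channel weights, form the weighted sum of the feature maps, and apply ReLU to obtain a coarse class-discriminative localization map. The model, layer, and random input are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
target_layer = model.layer4            # last conv block of ResNet-18

feats = {}
def hook(module, inp, out):
    out.retain_grad()                  # keep gradients of this intermediate tensor
    feats["value"] = out
target_layer.register_forward_hook(hook)

img = torch.randn(1, 3, 224, 224)      # stand-in for a preprocessed image
scores = model(img)
cls = scores.argmax(dim=1).item()
scores[0, cls].backward()

fmap = feats["value"]                                    # (1, C, H, W) feature maps
weights = fmap.grad.mean(dim=(2, 3), keepdim=True)       # per-channel pooled gradients
cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))  # weighted sum + ReLU
cam = F.interpolate(cam, size=img.shape[-2:], mode="bilinear", align_corners=False)
# cam is a coarse heatmap over the input showing class-relevant regions.
```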
Next List
Artificial Intelligence Imitation Learning - Tutorial - 2018 ICML
Faster R-CNN for Real-time Object Detection
Loss Functions for Regression and Classification
Tutorial on Generative adversarial networks - GANs as Learned Loss Functions
CS 188 | Introduction to Artificial Intelligence-Fall 2018
Happy Mastering DL!!!