"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

January 09, 2019

Day #184 - Interpretable Machine Learning for Computer Vision - Part I

Part I - Key Lessons
  • ML is a powerful hammer for solving a lot of problems
  • But models are complicated for humans to understand
  • Even a linear classifier built on a hundred features is hard to interpret

Burning Questions
  • Decision trees are interpretable
  • As the data grows, even a tree becomes more difficult to interpret
  • Rule list / rule set approaches (see the sketch after this list)
  • No one-size-fits-all method
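A minimal sketch of why small trees and rule lists are readable, using scikit-learn's export_text to print a shallow tree as rules (the dataset and library are my choices for illustration, not from the talk):

```python
# A shallow decision tree read out as an if/else rule list.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text prints the learned splits as rules, which is what makes
# small trees directly interpretable; depth is capped because
# readability degrades quickly as the tree grows.
print(export_text(tree, feature_names=list(data.feature_names)))
```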

Where you need interpretability
  • Reflect fairness
  • Reflect experience
Where you don't need interpretability
  • No significant consequences
  • Sufficiently studied problem
Interpretability Methods
  • Before building the model, we can improve interpretability
  • Other options - while building the model, or after it is built


Interpretability Options
  • Before building a model - exploratory data analysis and visualization (mean, std, k-means, KNN); Facets is one visualization tool for this
  • Building a new model - what medium and constraints do we use to explain? Rules, examples, sparsity and monotonicity; learn a function for the target variable
  • After building a model - ablation tests, input-feature importance, concept importance (expanded below)
Ablation test - train without a feature and see the impact
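A minimal sketch of the ablation idea, with dataset and model as placeholders I picked:

```python
# Ablation-test sketch: retrain with one feature dropped and measure
# the change in held-out accuracy against the full-feature baseline.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy(train, test):
    model = LogisticRegression(max_iter=5000).fit(train, y_tr)
    return model.score(test, y_te)

baseline = accuracy(X_tr, X_te)
for i in range(X.shape[1]):
    # Drop feature i, retrain, and compare to the baseline;
    # a large drop suggests the feature carries real signal.
    drop = accuracy(np.delete(X_tr, i, axis=1), np.delete(X_te, i, axis=1))
    print(f"feature {i}: importance ~ {baseline - drop:+.4f}")
```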
Fit linear functions - use linear approximations / first derivatives of the model
Local explanations - fit a function that is only locally true (around one data point); see the sketch below
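A rough LIME-style sketch of the idea: perturb around one point, weight the samples by proximity, and fit a linear model that is only meant to hold near that point. The black-box model and data here are stand-ins I chose:

```python
# Local linear explanation for one data point of a black-box model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                  # the point to explain
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=X.std(axis=0) * 0.1, size=(500, X.shape[1]))
probs = black_box.predict_proba(Z)[:, 1]   # black-box outputs on the samples

# Proximity weights: perturbations closer to x0 matter more.
dist = np.linalg.norm((Z - x0) / X.std(axis=0), axis=1)
weights = np.exp(-dist ** 2)

# The ridge coefficients act like local first derivatives of the model.
local = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
print("local linear coefficients:", local.coef_[:5], "...")
```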
Experiment - saliency maps look similar even after randomizing the model's weights (paper: Sanity Checks for Saliency Maps)
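A toy sketch of that sanity check in PyTorch (untrained model and random input, just to show the mechanics; the paper's protocol is more careful and covers several attribution methods):

```python
# Compute gradient saliency, randomize the weights, recompute, and
# check how similar the two maps are. In a real check the model
# would be a trained network evaluated on real images.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)

def saliency(net, inp):
    # Gradient of the top logit with respect to the input pixels.
    inp = inp.clone().detach().requires_grad_(True)
    net(inp).max().backward()
    return inp.grad.abs().squeeze()

before = saliency(model, x)
for p in model.parameters():          # randomize all weights
    nn.init.normal_(p)
after = saliency(model, x)

# High correlation after randomization would be a red flag: the map
# reflects the input more than the learned weights.
corr = torch.corrcoef(torch.stack([before.flatten(), after.flatten()]))[0, 1]
print(f"saliency correlation before/after randomization: {corr:.3f}")
```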

Evaluate Interpretability Methods
  • Run human experiments
  • Formulate experiments where you have ground truth (sketch below)
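One way to get ground truth, sketched with synthetic data where the informative features are known by construction (the setup is mine, not from the talk):

```python
# Ground-truth evaluation: on synthetic data we know which features
# matter, so we can check whether an importance method recovers them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# With shuffle=False the 5 informative features occupy the leading
# columns; the remaining 15 are noise.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, n_redundant=0,
                           shuffle=False, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

top5 = set(np.argsort(model.feature_importances_)[-5:])
print("recovered informative features:", sorted(top5),
      "expected:", list(range(5)))
```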
t-SNE - Understand Vision Models
  • PCA representation of pixel values (captures the maximum variance)
  • PCA learns a linear mapping and preserves large pairwise distances
  • t-SNE minimizes KL divergence, which preserves the local data structure (comparison sketch below)
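A quick side-by-side sketch on scikit-learn's digits set, standing in for real vision features (dataset choice is mine):

```python
# PCA (linear, preserves large/global distances) vs t-SNE (minimizes
# KL divergence, preserves local neighborhoods) on the same data.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

X_pca = PCA(n_components=2).fit_transform(X)     # directions of max variance
X_tsne = TSNE(n_components=2, random_state=0).fit_transform(X)

# t-SNE typically separates the digit classes into tighter local
# clusters than PCA, at the cost of distorting global distances.
for name, emb in [("PCA", X_pca), ("t-SNE", X_tsne)]:
    print(name, "embedding shape:", emb.shape)
```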

Next: Part II

Happy Mastering DL!!!
