- ML is a powerful hammer that can solve a lot of problems
- But its models are often too complicated for humans to understand
- Even a linear classifier built on hundreds of features is hard to interpret
Burning Questions
- Decision trees are interpretable
- But as the data and feature count grow, even trees become difficult to read
- Alternatives: rule-list and rule-set approaches
- There is no one-size-fits-all method
- Interpretability helps check that a model reflects fairness and human experience
- It matters less when mistakes have no significant consequences, or when the problem is already sufficiently well studied
- We can improve interpretability before building the model
- The other options: while building it, or after it is built
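The point above that decision trees are interpretable (until they grow too large) can be seen directly by printing a small tree's learned rules. This is a minimal sketch assuming scikit-learn and the Iris toy dataset, both chosen here for illustration only:

```python
# Sketch: a small decision tree reads as if/else rules.
# Assumes scikit-learn; Iris is an illustrative stand-in dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned splits; a depth-2 tree stays human-readable,
# but deeper trees on many more features quickly lose that property.
rules = export_text(tree, feature_names=load_iris().feature_names)
print(rules)
```

Capping `max_depth` is itself an interpretability choice: it trades accuracy for a rule list short enough to audit.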
Interpretability Options
- Before building the model: Exploratory Data Analysis
  - Visualization and exploratory statistics: mean, standard deviation, K-Means, KNN
  - Facets: a visualization tool
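The pre-modeling option can be sketched as summary statistics plus a quick clustering pass. This is a minimal sketch assuming scikit-learn; the dataset and cluster count are illustrative assumptions, not from the notes:

```python
# Sketch of pre-modeling EDA: per-feature statistics plus K-Means
# as a cheap probe for structure. Iris is an illustrative stand-in.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)

# Per-feature mean and standard deviation: a first look at scale and spread.
print("mean:", X.mean(axis=0))
print("std: ", X.std(axis=0))

# K-Means as quick structure discovery: do the points form natural groups?
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
```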
- While building a new model: choose the medium and constraints used to explain
  - Rules, examples, sparsity and monotonicity; learn a function for the target variable
- After building a model: ablation tests, input-feature importance, concept importance
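One of the post-hoc options above, the ablation-style test, can be sketched by shuffling one feature at a time and measuring the resulting accuracy drop. The model and dataset here are illustrative assumptions:

```python
# Sketch of an ablation-style importance test: destroy one feature's
# information by shuffling it, and measure the accuracy drop.
# Model and dataset are illustrative stand-ins.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)
base = clf.score(X, y)

drops = []
for j in range(X.shape[1]):
    Xp = X.copy()
    rng.shuffle(Xp[:, j])          # scramble feature j across samples
    drops.append(base - clf.score(Xp, y))

# A larger drop suggests the model leaned more heavily on that feature.
print({j: round(d, 3) for j, d in enumerate(drops)})
```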
Fit linear functions as local approximations (essentially first derivatives)
Local explanations - fit a function that is only locally true, around one data point
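The local-explanation idea above can be sketched in the LIME style: sample perturbations around one point, weight them by proximity, and fit a weighted linear model to the black box's outputs. The black-box function and kernel width below are illustrative assumptions:

```python
# Sketch of a LIME-style local explanation: a linear fit that is only
# valid near one data point x0. The black box and kernel are stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

def black_box(X):
    # Stand-in nonlinear model we want to explain.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(0)
x0 = np.array([0.5, 1.0])

# Sample perturbations near x0 and weight them by proximity to x0.
Z = x0 + 0.1 * rng.standard_normal((500, 2))
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.01)

local = Ridge(alpha=1e-3).fit(Z, black_box(Z), sample_weight=w)
# The coefficients approximate the gradient at x0: (cos(0.5), 2.0).
print(local.coef_)
```

The recovered coefficients are exactly the "first derivatives" mentioned above: the linear fit is only trustworthy near x0.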
Experiment - saliency maps look similar even after randomizing the network's weights (paper: "Sanity Checks for Saliency Maps")
Evaluate Interpretability Methods
- Run human experiments
- Formulate experiments where you have ground truth
- PCA representation of pixel values captures the directions of maximum variance
- PCA learns a linear mapping and preserves large pairwise distances
- t-SNE, by contrast, minimizes a KL divergence and so preserves local data structure
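The PCA-versus-local-structure contrast above can be sketched side by side: PCA as a linear projection, t-SNE as the KL-divergence-based embedding that keeps neighborhoods intact. This sketch assumes scikit-learn; the digits dataset and perplexity value are illustrative choices:

```python
# Sketch contrasting PCA (linear, favors large/global distances) with
# t-SNE (minimizes a KL divergence, favors local neighborhoods).
# Dataset and perplexity are illustrative stand-ins.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

pca = PCA(n_components=2).fit(X)
X_pca = pca.transform(X)                       # linear projection
X_tsne = TSNE(n_components=2, perplexity=30,
              init="pca", random_state=0).fit_transform(X)

print("PCA explained variance ratio:", pca.explained_variance_ratio_)
print("PCA shape:", X_pca.shape, "t-SNE shape:", X_tsne.shape)
```

Plotting both 2-D embeddings colored by digit label typically shows t-SNE separating the classes into tighter local clusters than the linear PCA projection.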
Part II is coming next
Happy Mastering DL!!!