"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

May 07, 2022

Loss Functions - Deep Learning

The choice of loss function depends on the desired output (e.g., classification vs. regression).

Regression Loss Functions

  • Mean Squared Error Loss
  • L1 Loss
  • L2 Loss
  • Mean Squared Logarithmic Error Loss
  • Mean Absolute Error Loss

Mean Squared Error (L2 loss) - The mean squared error is fairly straightforward: take the difference between the prediction and the ground truth for each sample, square it, and average the results over all samples.

The L1 loss (Mean Absolute Error) is simply the absolute value of the difference between the current sample's predicted output and the desired output, averaged over the samples.
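
As a concrete comparison, here is a minimal sketch of both losses, assuming PyTorch (the post names no framework; the tensors and values below are made up for illustration):

    import torch
    import torch.nn as nn

    prediction = torch.tensor([2.5, 0.0, 2.0, 8.0])   # model outputs
    target     = torch.tensor([3.0, -0.5, 2.0, 7.0])  # ground truth

    mse = nn.MSELoss()  # mean((prediction - target)^2)
    l1  = nn.L1Loss()   # mean(|prediction - target|)

    print(mse(prediction, target))  # tensor(0.3750)
    print(l1(prediction, target))   # tensor(0.5000)

Note how the one large error (8.0 vs. 7.0) dominates the MSE because squaring amplifies big differences, while the L1 loss weights all errors linearly.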

Binary Classification Loss Functions

  • Binary Cross-Entropy
  • Hinge Loss
  • Squared Hinge Loss

Multi-class Classification Loss Functions

  • Multi-class Cross Entropy Loss
  • Sparse Multiclass Cross-Entropy Loss
  • Kullback Leibler Divergence Loss

The Negative log-likelihood loss is based on the idea that each output represents a likelihood, for example of a particular class. It aims to make the output for the correct class as high as possible and the outputs for the other classes as small as possible.
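
A minimal sketch of how this looks in practice, again assuming PyTorch, whose NLLLoss expects log-probabilities and is therefore typically paired with LogSoftmax (the logits below are made up):

    import torch
    import torch.nn as nn

    logits  = torch.tensor([[1.2, 0.3, -0.8],
                            [0.1, 2.5,  0.2]])  # raw scores: 2 samples, 3 classes
    targets = torch.tensor([0, 1])              # index of the correct class per sample

    log_probs = nn.LogSoftmax(dim=1)(logits)    # turn raw scores into log-likelihoods
    loss = nn.NLLLoss()(log_probs, targets)     # -log-likelihood of the correct class, averaged
    print(loss)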

Cross entropy loss - The cross-entropy loss is very popular for classification problems. By default, the losses are averaged across observations for each minibatch.
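
For example, assuming PyTorch again, nn.CrossEntropyLoss combines LogSoftmax and NLLLoss in a single step, takes raw logits directly, and with its default reduction averages the loss over the minibatch (the shapes below are made up):

    import torch
    import torch.nn as nn

    logits  = torch.randn(4, 10)          # minibatch of 4 samples, 10 classes
    targets = torch.randint(0, 10, (4,))  # one correct class index per sample

    criterion = nn.CrossEntropyLoss()     # default reduction='mean' averages over the batch
    print(criterion(logits, targets))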

Kullback-Leibler Divergence Loss - Measures how one probability distribution diverges from a second, reference distribution; it is zero when the two distributions are identical.
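
A minimal sketch, assuming PyTorch's KLDivLoss, which expects the input as log-probabilities and the target as probabilities (the two distributions below are random placeholders):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    input_log_probs = F.log_softmax(torch.randn(2, 5), dim=1)  # predicted distribution (log space)
    target_probs    = F.softmax(torch.randn(2, 5), dim=1)      # reference distribution

    kl = nn.KLDivLoss(reduction='batchmean')  # 'batchmean' matches the mathematical KL definition
    print(kl(input_log_probs, target_probs))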


Keep Thinking!!!
