"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

April 22, 2016

Day #17 - Python Basics

Happy Learning!!!

Neural Networks Basics


Notes from Session
  • Neurons and synapses - neural networks model the brain at a high level
  • Machine Learning - algorithms for classification and prediction
  • Mimic the brain's structure in technology
  • Recommender engines use neural networks
  • With more data we can increase the accuracy of models
  • Linear Regression, y = mx + b - fit the data set with as little error as possible (see the sketch below)
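A minimal sketch of fitting y = mx + b with NumPy's least-squares fit; the sample points are made up:

import numpy as np

# Made-up sample points roughly following y = 2x + 1 with some noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Degree-1 least-squares fit returns the slope m and intercept b
m, b = np.polyfit(x, y, 1)
print(m, b)  # close to m = 2, b = 1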
Neural Network
  • The equation starts from the neuron
  • Multiply weights by the inputs (weights are the coefficients)
  • Apply an activation function (which one depends on the problem being solved) - see the sketch below
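A minimal sketch of a single neuron as described above: weights multiplied by inputs, plus a bias, passed through an activation function (a sigmoid here, one common choice; all the numbers are made up):

import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, 0.3, 0.8])    # example feature values (made up)
weights = np.array([0.4, -0.6, 0.9])  # the coefficients (made up)
bias = 0.1

z = np.dot(weights, inputs) + bias    # weighted sum of the inputs
print(sigmoid(z))                     # the neuron's output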
Basic Structure
  • Input Layer
  • Hidden Layer (possibly multiple hidden layers) - computation is done at the hidden layers
  • Output Layer
  • Supervised learning (Train & Test)
  • The loss function defines how the error is measured (see the sketch after this list)
  • Deep Learning - Automatic Feature Detection
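To make the loss-function bullet concrete, a minimal sketch computing mean squared error, one common choice of loss; the prediction numbers are made up:

import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: the average squared difference
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.0, 0.0, 1.0, 1.0])  # labels from the training set
y_pred = np.array([0.9, 0.2, 0.8, 0.6])  # model predictions (made up)
print(mse(y_true, y_pred))               # smaller is better

During supervised learning, training adjusts the weights to push this number down on the training split, and the test split checks the model on unseen data.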


Happy Learning!!!

April 14, 2016

Basics - SUPPORT VECTOR MACHINES

Key Notes
  • Allows non-linear decision boundaries
  • SVM - an out-of-the-box supervised learning technique
  • Feature space - a finite-dimensional vector space
  • Each dimension represents a feature
  • Goal of SVM - train a model that assigns unseen objects to a particular category
  • Creates a linear partition of the feature space
  • Based on its features, a new object is placed above or below the separating line (hyperplane)
  • No stochastic element involved (the result does not depend on any previous state)
  • Support vector classifiers (soft margin classifiers) allow some observations to fall on the incorrect side of the hyperplane, giving a soft margin
Advantage
  • High Dimensionality, Memory Efficiency, Versatility
Disadvantages
  • Non-probabilistic - does not directly output class probabilities (see the sketch below)
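A minimal sketch of SVM usage with scikit-learn's SVC; the toy data is made up, and the C parameter controls how soft the margin is (smaller C tolerates more observations on the incorrect side of the hyperplane):

import numpy as np
from sklearn.svm import SVC

# Toy 2-D feature space with two classes (made-up data)
X = np.array([[1, 2], [2, 3], [2, 1], [6, 5], [7, 7], [8, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

# A linear kernel creates a linear partition of the feature space;
# swap in kernel="rbf" for non-linear decision boundaries
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# Assign unseen objects to a category
print(clf.predict([[3, 2], [7, 6]]))  # expect [0, 1]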

Happy Learning!!!

Day #16 - Python Basics

Happy Learning!!!

April 10, 2016

Probability Tips

  • Discrete random variables are things we count
  • A discrete variable can take only a countable number of values
  • The probability mass function (PMF) gives the probability that a discrete random variable is exactly equal to some value
  • Continuous random variables are things we measure
  • A continuous random variable can take infinitely many values
  • The probability density function (PDF) of a continuous random variable describes the relative likelihood of the variable taking on a given value
  • A Bernoulli process is a finite or infinite sequence of independent binary random variables
  • Markov Chain - a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event (see the sketch below)
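A minimal NumPy sketch of the last two bullets: a Bernoulli process and a two-state Markov chain (all probabilities are made up). Note how each Markov step looks only at the current state:

import numpy as np

rng = np.random.default_rng(0)

# Bernoulli process: a sequence of independent binary random variables
bernoulli = (rng.random(10) < 0.3).astype(int)  # 1 with probability 0.3
print(bernoulli)

# Two-state Markov chain: row i holds the transition probabilities from state i
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
state, chain = 0, []
for _ in range(10):
    state = rng.choice(2, p=P[state])  # depends only on the current state
    chain.append(state)
print(chain)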
Let's Continue Learning!!!

Day #15 - Data Science - Maths Basics

Sets Basics
  • Cardinality - the number of distinct elements in a set (for a finite set)
  • For the real numbers, the cardinality is infinite
Rational Numbers - the ratio of two integers (with a non-zero denominator)
The Fibonacci series was introduced in 1202 - Amazing :)

Functions
  • Represents a relationship between mathematical variables
  • The set of all possible outputs is called the range
  • For a function that maps from A to B, A is referred to as the domain and B as the codomain
Matrix
  • Defined by rows and columns
  • A 2D array of numbers
  • Eigenvalues (scalars) and eigenvectors (vectors) - a special set of values associated with a matrix M
  • Eigenvectors - the directions that remain unchanged (only scaled) by the action of matrix M
  • Trace - the sum of the diagonal elements
  • Rank - the number of linearly independent rows (or columns)
Determinant
  • Can be computed only for a square matrix (see the sketch below)
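A minimal NumPy sketch of these matrix quantities (trace, rank, determinant, eigenvalues and eigenvectors) on a small made-up matrix:

import numpy as np

M = np.array([[2.0, 0.0],
              [1.0, 3.0]])

print(np.trace(M))               # sum of diagonal elements -> 5.0
print(np.linalg.matrix_rank(M))  # linearly independent rows -> 2
print(np.linalg.det(M))          # square matrices only -> 6.0

# Eigen decomposition: M @ v = lambda * v for each eigenpair
values, vectors = np.linalg.eig(M)
print(values)   # eigenvalues (scalars) -> 2.0 and 3.0
print(vectors)  # eigenvectors (columns): directions M only scales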
Vector
  • Vectors have magnitude (length) and direction
  • The magnitude and the cosine of the angle give you the direction
  • The vector (cross) product is non-commutative: a × b = -(b × a)
  • The dot product is commutative: a · b = b · a (see the check below)
  • A set of vectors is linearly independent if none of them can be written as a linear combination of the others
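A quick NumPy check of the commutativity bullets; the vectors are made up:

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

print(np.dot(a, b) == np.dot(b, a))  # True: the dot product commutes
print(np.cross(a, b))                # [-3.  6. -3.]
print(np.cross(b, a))                # [ 3. -6.  3.] = -(a x b)
print(np.linalg.norm(a))             # the magnitude (length) of a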



Happy Learning!!!

April 07, 2016

Day #13 - Maths and Data Science

  • Recommender Systems - essentially a matrix decomposition problem
  • Deep Learning - built on matrix calculus
  • Google Search (PageRank) and social media graph analysis - eigen decomposition (see the sketch below)
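To connect PageRank with eigen decomposition: the page ranks are the dominant eigenvector of the link matrix, which power iteration finds. A minimal sketch on a made-up three-page web:

import numpy as np

# Column-stochastic link matrix for 3 made-up pages:
# column j holds the probabilities of following a link out of page j
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

d = 0.85                 # damping factor
n = L.shape[0]
G = d * L + (1 - d) / n  # add a small chance of a random jump

# Power iteration: repeatedly apply G until the rank vector settles
r = np.ones(n) / n
for _ in range(50):
    r = G @ r
print(r)  # PageRank scores: the dominant eigenvector of G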
Happy Learning!!!

April 04, 2016

Ensemble



  • Combine many predictors and take a weighted average (see the sketch below)
  • Use a single kind of learner but multiple instances of it
  • A collection of "OK" predictors, combined, becomes powerful
  • Learn predictors and combine them using another, new model (stacking)
  • One layer of predictors provides features for the next layer
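A minimal scikit-learn sketch of the first idea above: combine several "OK" predictors with a weighted vote (the toy data and weights are made up):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier

# Toy 2-D data with two classes (made up)
X = np.array([[1, 2], [2, 1], [2, 3], [6, 5], [7, 7], [8, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

# Three ordinary predictors combined by a weighted soft vote,
# i.e. a weighted average of their predicted probabilities
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression()),
                ("dt", DecisionTreeClassifier()),
                ("nb", GaussianNB())],
    voting="soft",
    weights=[2, 1, 1],
)
ensemble.fit(X, y)
print(ensemble.predict([[2, 2], [7, 6]]))  # expect [0, 1]

The last two bullets describe stacking, where another model learns how to combine the base predictors (scikit-learn offers StackingClassifier for this).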
Happy Learning!!!