"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

November 30, 2018

Day #156 - Reinforcement Learning

"Rewards for right moves, Starve for wrong moves"

Key Summary
  • Intelligence Systems Stack
  • Agents to Effectors
  • Raw Data - Features - Gain Knowledge - Reason - Short term and Long Term Actions
  • Sensory Data - Create Representations
  • Raw Sensory Data - Feature Learning (Higher Order Representations) - Extract Actionable usable Knowledge
  • Supervised learning - Memorizers
  • Reinforcement learning - brute force reasoning
  • Reinforcement learning components (Goal - State - Actions - Reward)
Step 1 - Reinforcement Learning Stack


Step 2 - Data Sources

Step 3 - Feature Extraction


Step 4 - Representations


Step 5 - Reasoning

Step 6 - Actions


Types of Deep Learning

Reinforcement Learning Components

Learning States Logic

Markov Decision Process
  • State - Action - Reward - State
  • Policy - Behavior function
  • Value Function - How good is a state / action
  • Model - Agents representation of Environment
  • Stochastic System (having a random probability distribution or pattern that may be analysed statistically but may not be predicted precisely)
  • Reward structure changes the next step strategy
  • Encourage Exploration with positive reward
  • Goal is to Optimize reward
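A tiny tabular Q-learning sketch of the State - Action - Reward - State loop above (the 5-state setup and all names are made up for illustration; a real agent needs an environment to supply rewards):

import numpy as np

# Hypothetical toy setup: 5 states, 2 actions
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def choose_action(state):
    # Epsilon-greedy: explore with small probability, otherwise act greedily
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    # Move Q(s, a) towards the bootstrapped target: reward + discounted best next value
    target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])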
Summary
Intelligence - Ability to accomplish complex goals
Understanding - Ability to turn complex information into simple, useful information

DQN - Deep Q Learning
  • Q function approximated by a Neural Network
  • Deep Mind uses DQN
  • Greedily pick the best action
Policy Gradients
  • DQN - Q Learning - Off Policy
  • Policy Gradient - Directly optimizing policy space
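A rough sketch of the policy-gradient idea, optimizing directly in policy space (PyTorch-style; policy_net, states, actions and returns are assumed inputs, with returns holding discounted rewards):

import torch
from torch.distributions import Categorical

def reinforce_loss(policy_net, states, actions, returns):
    # policy_net (hypothetical) maps state tensors to action logits
    dist = Categorical(logits=policy_net(states))
    log_probs = dist.log_prob(actions)
    # Ascending the expected return = descending on -log_prob * return
    return -(log_probs * returns).mean()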
DeepStack
  • To beat poker players
"Deep Learning for Perception tasks but not for forming actions"

Happy Mastering DL!!!

Deep Learning - Concept Summary

  • Deep Learning - "Learn directly from data without manual feature engineering"
  • Sequence Models - "Preserve history along with current state for next state prediction"
  • Reinforcement Learning - "Rewards for right moves, Starve for wrong moves"
  • Deep Generative Models - "Fake it Until It can't figure out it is Fake"
Happy Mastering DL!!!

November 28, 2018

Day #155 - Deep Generative Models

"Fake it Until It can't figureout it is Fake"
  • Train neural networks on training examples to generate new samples from the training data distribution 
  • Generative Models for Outlier Detection
  • Neural Machine Translations
  • Generative networks support Reinforcement Learning for Robotics
Kinds of Generative Models
  • Autoregressive models - Deep NADE, PixelRNN, WaveNet, Video Pixel Network
  • Latent Variable Models - Variational Autoencoders, Generative Adversarial Networks
Latent Variable Models
  • Latent variables that represent variations in data
  • They drive variation in the data (smile appearance, illumination)
  • Find variables that give variations in data
Variational Autoencoder
  • Latent Variable Models
  • The model discovers independent latent variables causing variations in the data
  • Assume some distribution over the data and maximize the likelihood
  • Posterior of Z given X
  • Includes Encoder + Decoder + Regularization of Posterior to look like prior
  • GAN - Generator (produces data to fool the discriminator), Discriminator (distinguishes true data from fake data produced by the generator)
  • CGAN, Least Squares GAN
  • Cycle Consistent Adversarial Networks
What makes GAN Special ?
  • The image manifold is a complicated, non-linear structure
  • We sample from it directly
  • Unlike max likelihood models (which assign a density to every sample), GANs need no explicit density
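A compact sketch of the VAE objective described above: a reconstruction term plus a KL regularizer that pulls the posterior of Z given X towards the prior (PyTorch; mu and logvar are assumed encoder outputs):

import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # Sample z = mu + sigma * eps so gradients can flow through the encoder
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction: how well the decoder explains the data
    recon = F.binary_cross_entropy(recon_x, x, reduction='sum')
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl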

Happy Mastering DL!!!

YouTube - Please Improve Your Recommendations

With so much social media and new data, there is a lot of duplication in YouTube recommendations. The recommendations are not contextual or time-relevant, and place no emphasis on emotional wellness.

  • Data duplication - news from one channel followed by the same news reported by other outlets. The data is the same, so why would we need to revisit the same news again?
  • Repeats of news on similar situations (an aircraft accident is followed by a list of other historical accident stories). News with negative emotions and noise could be throttled, or tips for air safety promoted instead
  • Promote a heterogeneous mix - we listen to songs / tech talks / motivation videos / news and trends. Recommendations should depend on day of week / time of day / place; they need to be more contextual than just the type of videos I watched recently
  • Impact on children - I have observed my niece spending tons of time on toys, cars etc. Learning has to be both online and offline. Time passes without any alert. Rather than keeping them glued, it is better to reduce screen time and encourage content that promotes creativity, questioning and thinking. Children don't know what's best for them, and parents often feel that handing over a smartphone calms them down
  • Emotional wellness - there has to be a human touch in every aspect of life. We are all tracked by our Android phones; they know our patterns and actions. They could understand the high and low points of our lives better, instead of just making us download and watch more bits and bytes: recommendations based on lifestyle and commute, recommended start times, proactive alerts based on previous activity
After two years, I read this article today. Key lessons listed below:
  • The measure of ‘successful’ recommendations is watch time, i.e. providing compelling recommendations
  • Great for a company trying to sell ads, but a waste of time for the consumer
  • It is in YouTube’s interest to keep us watching for as long as possible
Happy Thinking!!!

November 27, 2018

Pentaho Kettle SQL JDBC Connectivity Issues

1. Download Kettle from link
2. Download JDBC Driver from - link, Extract it to local folder (Install JDK)
3. Copy SQLJDBC driver to - E:\pdi-ce-8.1.0.0-365\data-integration\lib
4. Test DB Connectivity in Kettle tasks
5. Sample Table Input

Very different from SSIS :) :)

Happy Learning!!!

November 24, 2018

Day #154 - CNN - Class Notes

Key Summary
  • Vision traces back 540 million years of evolution
  • Human vision has been 'trained' for 540 million years
  • A hierarchy of layers in our visual system is involved in processing
What Computers see
  • Images are numbers
  • Pixels represented by a 2D array of numbers
  • RGB (3D Array)
  • Computer vision Tasks
  • Regression (Output takes a continuous value)
  • Classification (Single Class label)
  • Detect presence of features in particular image
Manual Feature Extraction
  • Domain Knowledge
  • Define features
  • Detect features and classify
Image Challenges
  • Occlusion
  • Viewpoint variation
  • Scale variation
  • Deformation
  • Background Clutter
  • Intra Class variation
  • Illumination Conditions
Neural Networks
  • Learn directly from Image data
  • Low Level (Edge / Dark Spots)
  • Mid Level (eyes, Ears, Nose)
  • High Level (Facial Structures)
Fully Connected Neural Network
  • Multiple Hidden Layers
  • Input 2D Image (Vector of pixel values)
  • All spatial information will be lost
  • Connect neuron in hidden layer to all neurons in input layer
  • Slide patch window across the image, this considers spatial structure
  • Apply set of weights to extract local features
  • Multiple filters and multiple set of weights
  • This patch-based operation is known as convolution
Feature Extraction and Convolution
  • Convolution preserves spatial relationship between pixels
  • Elementwise multiplication between patch and filters
  • Different filters for Sharpening, Edge
  • Use multiple filters to extract different features
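A bare-bones NumPy sketch of the convolution described above: slide a patch window across the image and sum the elementwise products (no padding or stride, purely illustrative):

import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise multiplication between the patch and the filter, then sum
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])  # classic sharpening filter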

CNNs for Classification
  • Convolution - Apply filter with learned weights to generate feature maps
  • Non-Linearity - Often Relu (Image data highly non-linear)
  • Pooling - Downsampling for each feature map
  • Train model to learn weights
  • Each Neuron sees patch of inputs
  • Apply matrix of weights for elementwise multiplication
  • depth = number of filters
  • Relu - Pixel by pixel operation that replaces all negative values by zero (Non-Linear operation)
  • Pooling - Reduce dimensionality preserve spatial invariance (Downsampling operations)
  • Layer operations to learn hierarchy of features
  • Feature Learning Pipeline + Performing Classification
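A minimal Keras sketch of the Convolution - ReLU - Pooling - Classification pipeline above (layer sizes and the 28x28 input are illustrative, not from the lecture):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),  # learned filters -> feature maps
    layers.MaxPooling2D((2, 2)),   # downsample, preserve spatial invariance
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation='softmax')  # classification head
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')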
Imagenet CNN
  • 14 million Images
  • 21,841 categories
  • Deeper Network vs How deep we can go
Architecture for Applications
  • New architecture beyond Feature Learning
  • Semantic Segmentation (Fully Convolutional Network) - Downsampling and Upsampling operations, Driving Scene Segmentation, Encoder-Decoder
  • Object Detection - Region Proposals / Classify them, Really long time to compute
  • Image Captioning - Generate Semantic Content - Remove Fully Connected layer and replace them with RNN
  • CNN feature Layer + RNN (Trained to predict words that describe the image)
  • CAM (Class Activation Map)

Happy Mastering DL!!!

November 23, 2018

Day #153 - Sequence Modelling of Neural Networks

  • Sequence modelling in google translations
  • Self parking car - Sequence modelling
Challenges
  • Sequence modelling - predict the next word
  • ML Models are not designed for sequences
  • FFNs specify the size of the input up front (fixed)
  • Sequences are variable length inputs
  • Use all information available in sequence and also fixed length vector
  • BoW (bag of words): each slot represents a word and the number is its count of occurrences; vector size stays the same (see the sketch after this list)
  • Sequential information lost in bow
  • Preserve sequence but also maintain length
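A toy illustration of the bag-of-words point above: each slot is a word, each value its occurrence count, the vector length stays fixed, and word order is lost (vocabulary is made up):

from collections import Counter

vocab = ["the", "food", "was", "good", "not"]  # hypothetical fixed vocabulary

def bow(sentence):
    counts = Counter(sentence.lower().split())
    return [counts[w] for w in vocab]

print(bow("the food was good"))       # [1, 1, 1, 1, 0]
print(bow("the food was not good"))   # [1, 1, 1, 1, 1]
print(bow("the good was not food"))   # [1, 1, 1, 1, 1] -- same vector, order lost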
To Model Sequences
  • Deal with variable length sequences
  • Maintain Sequence Order
  • Keep track of long term dependencies
  • Share parameters across the sequence
RNN (Recurrent Neural Network)
  • Architected same as NN
  • Each Hidden unit is using slightly different function
  • HU - Function of input from its own previous output (Cell State)
  • HU - Input + Previous Cell State = New input at timestamp
  • Parameter sharing is taken care of
  • Sn - contains information from all past timestamps
  • Solves long term dependencies
Train RNN
  • Similar to NN
  • Backpropagation through time (GD - Take derivative of loss with respect to each parameter, Shift parameters in opposite direction to minimise loss)
  • Loss at each time step, Total loss = sum of loss at every time step
  • Backpropagation through time
  • Vanishing gradient problem - as timesteps increase, the chain of gradient products grows longer and longer
Methods to Address Vanishing Gradients in RNNs
  • Activation functions (RELU, tanh, Sigmoid)
  • Initializing weights to something like identity matrix (prevent shrinking product)
  • Add more complex cells (Gated Cell)
  • RNN vs LSTM, GRU
  • Long Short Term Memory (Keep Memory Unchanged for many time steps)
LSTMs Overview
  • 3 Step process
  • Step 1 - Forget irrelevant parts of the previous state (Forget Gate)
  • Step 2 - Selectively update cell state values (separate from what is output)
  • Step 3 - Output certain parts of the cell state
  • The 3 steps are implemented using gates
  • Gates are implemented using sigmoid functions
  • Update happens through additive function
  • Final Cell State Summarizes all information from the sequence
  • Music generation using RNN
  • Machine Translation (Two RNN side by side Encoder / Decoder)
  • Final cell state is passed, Decoder figures out and produces in different language
  • With Attention in Machine Translation we take weighted sum of all previous cell states
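A small Keras sketch of a sequence model built from the gated cells above (an LSTM-based classifier; vocabulary size and dimensions are illustrative):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Embedding(input_dim=10000, output_dim=64),  # token ids -> dense vectors
    layers.LSTM(128),   # gated cell: forget, selectively update, output
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')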

Happy Mastering DL!!!

November 22, 2018

Day #152 - MIT 6.S191: Introduction to Deep Learning - Class 1

Getting back into Another DL Course along with Code Examples.

Summary of Class Session
  • Classic ML algos work on manually extracted features
  • DL learns features directly from data rather than having them engineered by humans
Timeline of DL Concepts
1952 - Stochastic Gradient Descent
1958 - Perceptron - Learnable Weights
1986 - Backpropagation - Multi-Layer Perceptron
1995 - Deep Convolutional Neural Networks
  • Perceptron - Single Neuron in Neural Network, Forward propagation of Information in Neural Network, Non-Linear Activation Function
  • Activation Function - Sigmoid - Input real number transform output between 0 and 1, Produce Probability output ( > .5, < .5)
  • "The purpose of Activation functions is to introduce non-linearities in the network"
  • Linear functions produce linear decisions no matter of network size
  • Non-Linearities allow us to approximate arbitrary complex functions
  • Dot Product, Bias, Non-Linearity
  • Inputs - Hidden Layers (States) - Outputs
  • Fully Connected - Every node in one layer connected to every node in the next layer
  • Objective Function (Cost Function, Empirical Loss) - Measures total loss over the entire dataset
  • Cross Entropy Loss can be used with models that output a probability  between 0 and 1
  • Mean Squared error loss can be used with regression models that output continuous real numbers
  • Loss Optimization - Minimise Loss over entire training set
  • Loss landscapes are non-convex; finding the true global minimum is difficult
  • Stable learning rates will converge smoothly to global minima
  • Adaptive Learning Rates (Momentum, Adagrad, Adadelta, Adam, RMSProp)
  • Mini-batches lead to faster learning
  • Generalize well on unseen data
  • Regularization - Way to discourage models becoming too complex (Dropouts - On every iteration randomly drop some proportion of hidden neurons, Discourages memorization)
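A one-neuron NumPy sketch of the forward propagation above: dot product, bias, then a non-linear activation (weights here are arbitrary):

import numpy as np

def sigmoid(z):
    # Squashes any real number into (0, 1) - a probability-like output
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    # Dot product + bias, then the non-linearity that lets networks model non-linear functions
    return sigmoid(np.dot(w, x) + b)

x = np.array([1.0, 2.0])
print(perceptron(x, w=np.array([0.5, -0.25]), b=0.1))  # sigmoid(0.1) ~ 0.525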

Happy Mastering DL!!!

November 19, 2018

Day #151 - Back to Basics - Geoff Hinton Papers

Paper 1 - Learning Representations by back propagating errors (1986)

Key Summary
  • The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between actual output and desired output
  • Ability to create new distinguishing features
  • The aim is to find the set of weights that ensures that for each input vector the output vector produced by the network is the same as the desired output vector
  • The drawback in learning procedure is that the error surface may contain local minima so that gradient descent is not guaranteed to find a global minimum
Paper 2 - Deep learning (2015)

Key Summary

Deep Learning
  • Machine Learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with user interests and select relevant results of search
  • Multiple processing layers to learn representations of data with multiple levels of abstraction
  • Recurrent Networks for sequential data such as text and speech
  • Deep Learning methods are representation learning methods with multiple levels of representation, obtained by composing non-linear modules that each transform the representation into a slightly more abstract one
  • The layers are learned from data by general purpose learning procedure
  • The conventional option is hand design good feature extractors which require a considerable amount of engineering skill and domain expertise. Key advantage of deep learning is learn automatically using general purpose learning procedure
  • A deep learning architecture is a multilayer stack of simple modules, all of which may compute simple non-linear input-output mappings
  • The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule of derivatives
Convolutional Neural Networks
  • Composed of Convolutional layers and pooling layers
  • Units in convolutional layer organized into feature maps
  • Filtering operation performed by feature map is a discrete convolution
  • Pooling computes maximum of local patches
  • Two or three stages of convolution, non-linearity and pooling are stacked up, followed by more convolutional and fully connected layer
Recurrent Neural Networks
  • RNN process an input sequence one element at a time, maintaining their hidden units as state vector (history of past sequences)
  • Good at predicting next word in a sequence
Paper #3 - Dropout: A Simple Way to Prevent Neural Networks from Overfitting
Key Summary 
  • Randomly drop units from neural network during training
  • Dropping out units hidden and visible in a neural network
  • Temporarily remove from network along with incoming and outgoing connections
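A minimal NumPy sketch of (inverted) dropout at training time, randomly removing units as the paper describes (p here is the keep probability):

import numpy as np

def dropout(activations, p=0.5, training=True):
    if not training:
        return activations  # the full network is used at test time
    # Temporarily drop each unit with probability 1-p; divide by p to keep expectations unchanged
    mask = (np.random.rand(*activations.shape) < p) / p
    return activations * mask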

Key Summary
  • Long Short Term Memory - RNN Architecture
  • RNNs are deep in time, since their hidden state is a function of all previous hidden states
  • Make use of previous context
  • Deep bidirectional LSTM RNNs for speech recognition
LSTM Components
  • Input gate
  • Forget gate
  • Output Gate
  • Cell Activation Vectors
Bidirectional RNN has
  • Forward Hidden Sequence
  • Backward Hidden Sequence
CTC - Connectionist Temporal Classification
  • Uses a Softmax layer to define a separate output distribution
  • CTC uses the forward-backward algorithm to sum over all possible alignments and determine the normalised probability
  • RNNs trained with CTC are bidirectional

Brain creates internal representations to learn without any explicit instructions
  • ANNs are built from idealized model neurons
  • Behavior of ANN depends on weights, activation functions
  • Backpropagation algorithm to train the neural network
Backpropagation Challenges
  • Requires labeled training data
  • Forward Pass - Signal = Activity = y
  • Backward Pass - Signal = dE/dy
  • Learning alters the shape of search space and provides good evolutionary path
  • Learning organisms evolve much faster
Key Summary
  • Interaction between learning and evolution was proposed by Baldwin
  • Learning alters search space in which evolution operates
  • Inspired by Theory of natural evolution
  • Motivated by Darwinian Theory
Unimodal vs Multimodal
  • A landscape is unimodal if it has single minimum
  • Multimodal if it has several minima with equal function values
More Papers - Link

Happy Learning!!!!

November 18, 2018

Day #150- Gabor filter

What is Gabor filter ?
  • Linear filter used for texture analysis
  • A Gabor filter allows a certain band of frequencies and rejects the others
What are its Significance ?
  • Edges and texture changes captured
  • Filters are convolved with signal and Gabor space is obtained
  • 2D Gabor filter is Gaussian Kernel modulated by a sinusoidal plane wave in spatial domain
Example Code ?
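A short OpenCV sketch of applying a Gabor filter (parameter values and file names are illustrative, not tuned):

import cv2

img = cv2.imread('texture.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# Gaussian kernel modulated by a sinusoidal plane wave
kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=5.0, theta=0,
                            lambd=10.0, gamma=0.5, psi=0)
filtered = cv2.filter2D(img, -1, kernel)
cv2.imwrite('gabor_response.jpg', filtered)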


This also can be applied for feature extraction from images.

Ref - Link1 , Link2

Happy Learning!!!

November 14, 2018

Day #149 - Thoughts on Multi Object Classification for Retail Store

We cannot classify the millions of objects in a retail store with a single model. We need a mix of different approaches to detect, extract, classify and identify.
  • Yolo for bounding boxes and object boundaries
  • Model to Detect Humans in Picture
  • High-level category classification (Bags, Dresses, Groceries)
  • Models for Individual product level Identification (Nike, Puma, American Tourister Bags)
Data preparation depends on lighting and environment factors; we need to leverage the existing surveillance / video setup for dataset preparation. Continually evaluate and re-label false positives. Also, data augmentation is critical to improving algorithm accuracy.

Next Level Challenges are
  • Object Tracking between frames
  • Object Occlusion
  • Counting and Tracking of Items
The Data Sources / Factors for Billing Items Counting are
  • Timeframe of transaction
  • Distinct Objects in the timeframe
  • Duplicate Objects in a single frame
  • Overall we need distinct object types and values, plus a unique object count
Data Issues While Training / Testing
  • Class Imbalance
  • Projection of camera and angle between training and test images
  • Discarding frames with multiple products as (Others)
  • Worked on re-training the dataset a dozen times to get 80+% accuracy using a Random Forest model
  • Ensemble techniques to arrive at multiple predictions and considering voting majority
Improving Model Accuracy
  • Ensemble Models
  • Voting based classifiers
  • Use Adaboost / XGBoost
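A scikit-learn sketch of the voting-ensemble idea above (estimators and data are placeholders, not a tuned setup):

from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

ensemble = VotingClassifier(
    estimators=[('rf', RandomForestClassifier()),
                ('ada', AdaBoostClassifier()),
                ('lr', LogisticRegression(max_iter=1000))],
    voting='soft')  # average predicted probabilities; 'hard' takes the majority vote
# ensemble.fit(X_train, y_train); ensemble.predict(X_test)  -- with your own data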
More Techniques
  • Leverage Yolo
  • Try Both Contour Detection Techniques
  • Try with white background (improves contrast)
Setting up a Model for Retail Environment
  • Automate Data collection
  • Duplicate Yolo with Retail Objects
  • First of its kind to come up with a Retail Model
  • Keep objects within a boxed structure / white background
  • Generic to customer / POS checkout
  • A YOLO TensorFlow implementation is already available:
  • https://github.com/johnwlambert/YoloTensorFlow229
#LearningContinues

November 12, 2018

AI for Internet Policing

Given the massive data growth and social media, there is a lot of data out there. The internet is largely an unmanaged platform, with both good and bad data available to all ages and groups.

Online censorship / monitoring / hate speech / dark web / restricted content has always been a point of discussion vs privacy concerns.

Privacy - There is no such thing as 100% privacy on the internet. Your social media, browsing patterns and buying patterns are all stored somewhere in bits and bytes. Our deep desires, searches and keywords are ingrained in the web.

Web Addiction - A lot of young people's lives revolve around the web for games / social media / WhatsApp / Facebook / Instagram. Educational content is available from many MOOC courses, yet the hours spent on non-productive things far outweigh those spent productively.

Free Data / Low-cost Smartphones - Almost every household and every member has a smartphone. With tons of apps, videos, Dubsmash and social media, a lot of time is spent purely on the internet. Our social circle is limited to our mobile phones.

Data Consumption vs Productivity - Consider the volume of data consumed and time spent versus the positive impact on the person: duplication of news, likes of events / actions, discussions / debates. How much does it actually help emotional wellness or create positive impact? There is no central monitor to alert / recommend / supervise our actions. We are responsible for our lives.

AI could potentially monitor / recommend / alert
  • Recommendations for Children
  • Recommendation based on Gender
  • Recommendation based on Criminal Background
  • Censorship based on browsing history (Alert proactively)
  • Excess usage / Depression / Suicide Tendency Detection
  • Emotional Wellness Monitor
  • Abuse Detection and Prevention
There is always a debate on privacy vs censorship. Another question is how we prepare the kids who will be the future generation. At the very least, minimal censorship is needed for the younger generation.

What we achieve depends on what we do today. AI could be effectively applied to internet censorship and monitoring.

More Read - How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud

This needs a lot of #opensource #crowdsource data, central monitoring, #datalabelling, #NLP, #Video Analytics to arrive at use cases / monitoring. 

#Mythoughts

AI in Education


Use cases for AI in Education Sector
  • Interactive Sentiment Analysis of Class Room Discussion based on Voice
  • Attention Analysis
  • Drowsiness Detection
  • Face based Attendance 
  • Loitering Detection
  • Arms / Knife / Banned items detection / Alert
  • Monitor Chats / Conversations for Depression
  • Distraction Alertness
  • Intrusion Detection / Monitoring
  • Detect Crowds / Fights
  • Health Analytics for Games /Fitness
  • Teaching Method vs Performance, Video Analytics to identify insights
  • Emotion Analytics (Happiness / Sadness / Anger )
Happy Learning!!!

Day #148 - One Pager Summary - Neural Networks

One-page summary for my reference, similar to a cheat sheet, compiled with key points from several sources.


Ref  - Link1 , Link2

Happy Learning!!!

November 07, 2018

Day # 147 - Part II - Deep Learning techniques for Computer Vision applied to embedded systems

A very interesting Final Year Paper - Deep Learning techniques for Computer Vision applied to embedded systems

Part Two Series.

Creating a custom Object Detector Machines

Steps Involved
  • Dataset Preparation (Download images using - Fatkun Batch Download Images)
  • Label Images by Hand (Painful process) - RectLabel Tool for manual labelling
  • Convert into .tfrecords - Custom TensorFlow code to prepare .tfrecords (see the sketch after this list)
  • Create labels with .pbtxt format
  • Create bounding boxes
  • Set TF Object Detection API
  • Create Pipeline for Training - Configure model, train_config, train_input_reader, eval_config, eval_input_reader
  • Perform Training
  • Monitor Performance
  • Export Graph
  • Compile for Vision Bonnet
  • Deploy and Test
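A minimal sketch of the .tfrecords step (TF2-style API; the Object Detection API expects extra fields such as bounding boxes, so the feature keys and file names here are purely illustrative):

import tensorflow as tf

def make_example(image_bytes, label):
    # One labelled image serialized as a tf.train.Example
    feature = {
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
        'image/label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.io.TFRecordWriter('train.tfrecords') as writer:
    with open('sample.jpg', 'rb') as f:  # hypothetical image file
        writer.write(make_example(f.read(), label=1).SerializeToString())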
This is the most exhaustive step-by-step documentation of the process I have come across, neatly laid out.

Happy Learning!!!

November 06, 2018

Day # 146 - Part I - Deep Learning techniques for Computer Vision applied to embedded systems

A very interesting Final Year Paper - Deep Learning techniques for Computer Vision applied to embedded systems

Key points I loved in this paper, Re-posted from the paper. Very Good ML training and learning paper. Excellent Work.


Machine Learning Problems
  • Classification - Train from a labelled dataset, Classify new incoming data to the class it belongs to. SVM, Decision Trees, Neural Networks, K Nearest Neighbors. Works on Discrete values
  • Clustering - Grouping data that share similar characteristics. Data not labelled. Maximize distance between clusters, minimize distance between points within a cluster. K-Means, Hierarchical Clustering, DBScan
  • Regression - Considers continuous variable as output. Map input function to continuous output variable
  • PCA - Principal Component Analysis. Exploit Matrix Decomposition, Eigen Values to retain principal Eigen Vectors, Reduce dimension retaining critical components
  • Artificial Neural Network - Feedforward because output goes to next layer. Fully Connected - Each neuron propagates the result of computation to next neuron in following layer. Feed Forward + Fully Connected = Multi Layer Perceptron
Key Layers of Neural Network Design
  • Activation Functions to use in Each Layer
  • Loss Function to minimise during training
  • Backpropagation Algorithm to find the right weights (CNN)
  • Backpropagation computes the gradients; Stochastic Gradient Descent uses them, scaled by the learning rate, to update the weights
Computer Vision Applications
  • Image Classification - Assigning class / label based on pretrained classes. 
  • Image Classification and Localisation - Finding most relevant object in given image and bounding box of the relevant object in given image
  • Object Detection - Extract Relevant Object and their location
  • Instance Segmentation - Creates Overlap of detected objects/contours from extracted image. 
Deep Architecture
R-CNN - Region CNN
  • First step is identify regions
  • Second Step use CNN for identification
  • Not suitable for real time applications
  • Fast R-CNN, Improvement of R-CNN
Yolo
  • Single Neural Network Applied to entire image
  • You Only Look Once
  • Bounding box created with probabilities containing the object
  • Uses Predefined Grid Cells
SSD
  • Single Shot Multi Box Detector
  • Speeds up processing by Eliminating RPN
  • Feature maps extracted and Convolution filter is applied
For Real-time processing Yolo - 45 FPS, SSD 59 FPS. 

Happy Learning!!!

November 02, 2018

Computer Vision - Learning OpenCV

Outline of exercises to understand basic image manipulations using OpenCV and the available packages
  • Day #1 - Basic Image Manipulations (Flip / Rotate / Blur)
  • Day #2 - Image Sharpening, Edge Detection, Sobel, Laplacian Filters, SIFT
  • Day #3 - Contour Detection, Haar Face Detection, Haar Eye Detection, HOG Based Person Detection
  • Day #4 - OCR Detection from Image, Working with Tesseract
  • Day #5 - PCA on Image - Dimensionality Reduction, Split into Channels RGB, HSV
  • Day #6 - Working with Videos, Converting from Videos to Frames in OpenCV
#OpenCVGuidance
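A starter sketch for the Day #1 manipulations (file names are placeholders):

import cv2

img = cv2.imread('input.jpg')  # hypothetical input image

flipped = cv2.flip(img, 1)                           # 1 = horizontal flip
rotated = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)   # rotate by 90 degrees
blurred = cv2.GaussianBlur(img, (5, 5), 0)           # 5x5 Gaussian blur

cv2.imwrite('flipped.jpg', flipped)
cv2.imwrite('rotated.jpg', rotated)
cv2.imwrite('blurred.jpg', blurred)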

OpenCV Techniques for Feature Engineering
Perform the operations below, then normalize and convert into a 1D array to train an ML model (see the sketch after this list)
  • Edge Detection - Canny Edge, Hough Transform
  • Image Sharpening, Threshold, Dilation, Erosion
  • Filters - Sobel, Laplace, Texture
  • Histogram Equalization
  • Segmentation, Contours, HSV
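A sketch of turning one such operation into a training feature, per the note above: detect edges, normalize, flatten to 1D (image size and Canny thresholds are illustrative):

import cv2

img = cv2.imread('product.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical image
img = cv2.resize(img, (64, 64))   # fixed size so every feature vector has the same length

edges = cv2.Canny(img, 100, 200)          # Canny edge map
features = edges.flatten() / 255.0        # normalize to [0, 1], convert to 1D
# features (length 4096) becomes one row of the ML training matrix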
References
OpenCV Examples
Snake Game in OpenCV
Record Specific Window in OpenCV
Add Image in Live Camera Feed
Object Tracking with Colors
OCR and OpenCV

Happy Learning!!!

Banking Analytics Use Cases

Happy Learning!!!

Analytics for Textile Domain

For Textiles (Analytics)
  • Similar Fabrics Real time 
  • Quality Assessment
  • Seasonality Demand Forecasting
  • Pricing Recommendation
  • Surge based dynamic pricing recommendations
  • Region based recommendation
  • Gender / Trend / Location / Religion Personalized recommendations
  • Video Based Analysis
Sources of Data (Data Pipeline)
  • Images
  • Sensor Data
  • Data collected from Suppliers
  • Social Media
  • Fashion Trends
Dashboards / Reporting (Know how your business works)
  • Current/ Monthly / Seasonality
  • Small / Medium / Clustered Segments
  • Pricing / Quality / Demand KPIs
Happy Learning!!!