"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

November 26, 2019

Amazon Go / Big Basket Smart Machine - Tech Analysis

Amazon Go
  • QR Code for user account linking
  • Pick items, Auto detected
  • Multiple validation points (video object detection, shelf weight-sensor-based confirmation, RFID reads, etc.)
  • Multiple RFID readers to capture item movement across aisles
Cons
  • Massive surveillance 
  • Real-time computation
IMO, the Big Basket Smart Machine is a similar implementation
  • QR Code for user account linking
  • Pick and Buy
  • Mobile App integration 
  • Weight Sensors used to detect Shelf Item Quantity
  • A unique item type in each row; items are not mixed
Since it is a standalone machine, it does not need RFID or video-camera tracking. Weight sensors alone are sufficient, and it is operated by a single person at a time.
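A toy sketch of the weight-sensor logic above - inferring how many units were picked from a row's weight delta (the unit weights and tolerance below are hypothetical, not Big Basket's actual values):

ROW_ITEM_WEIGHT_G = {'row_1': 105.0, 'row_2': 52.5}  #assumed per-row unit weights (grams)
TOLERANCE_G = 5.0  #assumed sensor noise tolerance

def items_picked(row_id, weight_before_g, weight_after_g):
    #infer the number of units removed from the weight drop
    delta = weight_before_g - weight_after_g
    unit = ROW_ITEM_WEIGHT_G[row_id]
    count = round(delta / unit)
    #reject readings that do not align with whole units within tolerance
    if count < 0 or abs(delta - count * unit) > TOLERANCE_G:
        raise ValueError('Ambiguous reading - flag for manual review')
    return count

print(items_picked('row_1', 1050.0, 840.0))  #-> 2 units picked

Because each row holds a single item type, a clean weight delta maps directly to a quantity; mixed rows would need the extra validation signals Amazon Go uses.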

Happy Learning!!!

Day #299 - OpenVino Python Code for Faces - Pedestrian - Attributes

Hope this helps other developers trying to run Intel OpenVINO models in Python.

#https://github.com/wangxiao5791509/Pedestrian-Attribute-Recognition-Paper-List
import cv2
import numpy as np
import sys
import time
import os
FACE_DATA_DIR = '/home/upsquared/Documents/projects/Code/faces'
DATA_DIR = '/home/upsquared/Documents/projects/Code/samples'
RESULTS_DIR = '/home/upsquared/Documents/projects/Code/results'
ATTRIBUTES_RESULTS_DIR = '/home/upsquared/Documents/projects/Code/attribute_results'
def Detect_Faces():
    attr_bin = '/opt/intel/computer_vision_sdk/deployment_tools/intel_models/face-detection-retail-0004/FP32/face-detection-retail-0004.bin'
    attr_xml = '/opt/intel/computer_vision_sdk/deployment_tools/intel_models/face-detection-retail-0004/FP32/face-detection-retail-0004.xml'
    ped_net = cv2.dnn.readNet(attr_bin, attr_xml)
    ped_net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
    print("Models loaded")
    #get all files in the directory and loop through them
    files = os.listdir(FACE_DATA_DIR)
    i = 0
    for file in files:
        filepath = FACE_DATA_DIR + '/' + file
        print(filepath)
        rawframe = cv2.imread(filepath)
        frame = cv2.resize(rawframe, (300, 300))
        #https://docs.openvinotoolkit.org/latest/_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html
        try:
            blob = cv2.dnn.blobFromImage(frame, size=(300, 300), ddepth=cv2.CV_8U)
            ped_net.setInput(blob)
            out = ped_net.forward()
            predictions = []
            #each detection row: [image_id, label, conf, x_min, y_min, x_max, y_max]
            for detection in out.reshape(-1, 7):
                image_id, label, conf, x_min, y_min, x_max, y_max = detection
                print(conf)
                if conf > 0.5:
                    predictions.append(detection)
            print(len(predictions))
            for detection in predictions:
                confidence = float(detection[2])
                xmin = int(detection[3] * frame.shape[1])
                ymin = int(detection[4] * frame.shape[0])
                xmax = int(detection[5] * frame.shape[1])
                ymax = int(detection[6] * frame.shape[0])
                print(xmin, ymin, xmax, ymax)
                cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))
            cv2.imshow("Result", frame)
            cv2.waitKey(0)
            cv2.destroyAllWindows()
            #write the output in the results directory
            result_filepath = RESULTS_DIR + '/' + str(i) + '.jpg'
            cv2.imwrite(result_filepath, frame)
            i = i + 1
        except:
            print('Error')
            print(frame)
def Detect_Pedestrians_Adas():
    attr_bin = '/opt/intel/computer_vision_sdk/deployment_tools/intel_models/pedestrian-detection-adas-0002/FP32/pedestrian-detection-adas-0002.bin'
    attr_xml = '/opt/intel/computer_vision_sdk/deployment_tools/intel_models/pedestrian-detection-adas-0002/FP32/pedestrian-detection-adas-0002.xml'
    ped_net = cv2.dnn.readNet(attr_bin, attr_xml)
    ped_net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
    print("Models loaded")
    #get all files in the directory and loop through them
    files = os.listdir(DATA_DIR)
    i = 0
    for file in files:
        filepath = DATA_DIR + '/' + file
        print(filepath)
        frame = cv2.imread(filepath)
        #https://docs.openvinotoolkit.org/latest/_models_intel_pedestrian_detection_adas_0002_description_pedestrian_detection_adas_0002.html
        blob = cv2.dnn.blobFromImage(frame, size=(672, 384), ddepth=cv2.CV_8U)
        ped_net.setInput(blob)
        out = ped_net.forward()
        predictions = []
        for detection in out.reshape(-1, 7):
            image_id, label, conf, x_min, y_min, x_max, y_max = detection
            if conf > 0.5:
                predictions.append(detection)
        print(len(predictions))
        for detection in predictions:
            confidence = float(detection[2])
            xmin = int(detection[3] * frame.shape[1])
            ymin = int(detection[4] * frame.shape[0])
            xmax = int(detection[5] * frame.shape[1])
            ymax = int(detection[6] * frame.shape[0])
            print(xmin, ymin, xmax, ymax)
            #crop each detected pedestrian and save it
            pedestrian = frame[ymin:ymax, xmin:xmax]
            result_filepath = RESULTS_DIR + '/' + str(i) + '.jpg'
            cv2.imwrite(result_filepath, pedestrian)
            cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))
            i = i + 1
        cv2.imshow("Result", frame)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
def Detect_Pedestrians_Retail():
    attr_bin = '/opt/intel/computer_vision_sdk/deployment_tools/intel_models/person-detection-retail-0013/FP32/person-detection-retail-0013.bin'
    attr_xml = '/opt/intel/computer_vision_sdk/deployment_tools/intel_models/person-detection-retail-0013/FP32/person-detection-retail-0013.xml'
    ped_net = cv2.dnn.readNet(attr_bin, attr_xml)
    ped_net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
    print("Models loaded")
    #get all files in the directory and loop through them
    files = os.listdir(DATA_DIR)
    i = 0
    for file in files:
        filepath = DATA_DIR + '/' + file
        print(filepath)
        frame = cv2.imread(filepath)
        #https://docs.openvinotoolkit.org/latest/_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html
        blob = cv2.dnn.blobFromImage(frame, size=(544, 320), ddepth=cv2.CV_8U)
        ped_net.setInput(blob)
        out = ped_net.forward()
        predictions = []
        for detection in out.reshape(-1, 7):
            image_id, label, conf, x_min, y_min, x_max, y_max = detection
            if conf > 0.5:
                predictions.append(detection)
        print(len(predictions))
        for detection in predictions:
            confidence = float(detection[2])
            xmin = int(detection[3] * frame.shape[1])
            ymin = int(detection[4] * frame.shape[0])
            xmax = int(detection[5] * frame.shape[1])
            ymax = int(detection[6] * frame.shape[0])
            print(xmin, ymin, xmax, ymax)
            #crop each detected pedestrian and save it
            pedestrian = frame[ymin:ymax, xmin:xmax]
            result_filepath = RESULTS_DIR + '/' + str(i) + '.jpg'
            cv2.imwrite(result_filepath, pedestrian)
            cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))
            i = i + 1
        cv2.imshow("Result", frame)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
def Detect_Attributes():
    attr_bin = '/opt/intel/computer_vision_sdk/deployment_tools/intel_models/person-attributes-recognition-crossroad-0200/FP32/person-attributes-recognition-crossroad-0200.bin'
    attr_xml = '/opt/intel/computer_vision_sdk/deployment_tools/intel_models/person-attributes-recognition-crossroad-0200/FP32/person-attributes-recognition-crossroad-0200.xml'
    ped_net = cv2.dnn.readNet(attr_bin, attr_xml)
    ped_net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
    print("Models loaded")
    #get all files in the directory and loop through them
    files = os.listdir(RESULTS_DIR)
    for file in files:
        try:
            filepath = RESULTS_DIR + '/' + file
            print(filepath)
            frame = cv2.imread(filepath, 1)
            #the model expects crops of at least W=80, H=160
            h, w = frame.shape[:2]
            if w >= 80 and h >= 160:
                #https://docs.openvinotoolkit.org/latest/_models_intel_person_attributes_recognition_crossroad_0230_description_person_attributes_recognition_crossroad_0230.html
                blob = cv2.dnn.blobFromImage(frame, size=(80, 160), ddepth=cv2.CV_8U)
                ped_net.setInput(blob)
                print("part 1")
                #output blob "453": eight attribute confidences
                out = ped_net.forward("453")
                predictions = []
                for detection in out.reshape(-1, 8):
                    is_male, has_bag, has_backpack, has_hat, has_longsleeves, has_longpants, has_longhair, has_coat_jacket = detection
                    predictions.append(detection)
                print(predictions)
                for detection in predictions:
                    if detection[0] > 0.5:
                        print('MALE')
                    else:
                        print('FEMALE')
                    if detection[1] > 0.5:
                        print('has_bag')
                    if detection[2] > 0.5:
                        print('has_backpack')
                    if detection[3] > 0.5:
                        print('has_hat')
                    if detection[4] > 0.5:
                        print('has_longsleeves')
                    if detection[5] > 0.5:
                        print('has_longpants')
                    if detection[6] > 0.5:
                        print('has_longhair')
                    if detection[7] > 0.5:
                        print('has_coat_jacket')
                print("part 2")
                #output blob "456": top-color point
                out1 = ped_net.forward("456")
                print(out1)
                predictions = []
                for detection in out1.reshape(-1, 2):
                    point_with_top_colorx, point_with_top_colory = detection
                    predictions.append(detection)
                for detection in predictions:
                    print(detection[0])
                    print(detection[1])
                print("part 3")
                #output blob "459": bottom-color point
                out2 = ped_net.forward("459")
                print(out2)
                predictions = []
                for detection in out2.reshape(-1, 2):
                    point_with_bottom_colorx, point_with_bottom_colory = detection
                    predictions.append(detection)
                for detection in predictions:
                    print(detection[0])
                    print(detection[1])
        except:
            pass
#Detect_Faces()
#Detect_Pedestrians_Adas()
#Detect_Pedestrians_Retail()
Detect_Attributes()
Happy Learning!!!

November 24, 2019

Day #298 - Data Analysis of PNB Defaulters

Data Source - Link

Data Analysis of PNB Defaulters

Chart #1 - Top 20 States By Company Registration State and Defaulted Amount

Chart #2 - Top 20 States By Defaulters Count

Chart #3 - Top 20 Branches with Maximum Defaulters

Chart #4 - Top 20 Branches with Maximum Defaulters Loan Value

Possible Feature Variables
  1. Branch-related approval score - the higher the defaulters, the lower the rating (see the sketch after this list)
  2. Similar-industry match score
  3. State-related scores
  4. Connections / joint ventures in the past with collapsed companies
  5. Rules for threshold limits based on industry / state / branch
  6. Multiple models for ongoing monitoring / performance / social media trends etc.
  7. Build a global model with defaulter lists across banks and identify common patterns / modus operandi
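A minimal sketch of how feature #1 (the branch approval score) could be derived with pandas; the file name and the branch / defaulted_amount columns are assumptions, not the actual dataset schema:

import pandas as pd

df = pd.read_csv('pnb_defaulters.csv')  #hypothetical file name

#aggregate defaulter count and total defaulted amount per branch
branch_stats = df.groupby('branch').agg(
    defaulter_count=('defaulted_amount', 'size'),
    total_defaulted=('defaulted_amount', 'sum'))

#higher defaulted exposure -> lower approval score (scaled to 0..1)
exposure = branch_stats['total_defaulted'] / branch_stats['total_defaulted'].max()
branch_stats['approval_score'] = 1.0 - exposure

print(branch_stats.sort_values('approval_score').head(20))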
Happy Learning!!!

November 18, 2019

Day #297 - Paper Analysis - WIDER Face and Pedestrian Challenge

WIDER Face and Pedestrian Challenge
Tasks - face detection, pedestrian detection, person search
Dataset - WIDER Pedestrian Track - 20,000 images from surveillance cameras and driving vehicles

Face Detection
  • Approach 1 - a single-stage detector with the network structure based on RetinaNet [7] and FAN (Face Attention Network)
  • Approach 2 - a two-stage face detector following the Faster R-CNN [12] and FPN (Feature Pyramid Networks) [13] framework
  • Approach 3 - a two-stage face detection framework combining RetinaNet [7] and RefineDet [15]; the team uses two-stage classification and regression to improve classification accuracy
PEDESTRIAN DETECTION TRACK
  • Approach 1 - the champion's basic detection framework is Cascade R-CNN [16]. Five models are ensembled: ResNet-50 [18], DenseNet-161 [19], SENet-154 [20] and two ResNeXt-101 [21] models.
  • Approach 2 - the second team uses FPN [13] and Faster R-CNN [12] as the basis of their detection framework
  • Approach 3 - the third-place team uses Cascade R-CNN [16] as the detection framework
PERSON SEARCH TRACK
  • Approach 1 - the winning team designs a cascaded model that utilizes both face and body features for person search. (1) The face detector used is MTCNN [26] trained on WIDER FACE [4]. (2) The face recognition backbones include ResNet [18], Inception-ResNet-v2 [27], DenseNet [19], DPN and MobileNet [28]. (3) The Re-ID backbones include ResNet-50, ResNet-101, DenseNet-161 and DenseNet-201.
  • Approach 2 - the solution is decomposed into two stages: the first retrieves faces, and the second retrieves bodies. The retrieval results of the two stages are then combined into the final ranking. (1) Face detection - the detectors used are PCN [29] and MTCNN [26]. (2) Face retrieval - a second-order network [30], [31], [32] (ResNet-34 backbone) trained on VGGFace2 [33] with softmax loss and ring loss [34] is used.
  • Approach 3 - in the first step, the face in the query is used to search for persons whose faces can be detected, via face recognition. These images are then used to search again across all candidate images using person re-identification features to get the final result.
Happy Learning!!!


November 13, 2019

Day #296 - TensorflowLite

TensorFlow Lite models run in Gmail, Google Photos, Google Assistant, and more.

Advantages
  • Low Latency
  • No Data connection required
  • On device sensors
Key Points
  • On Device ML on many platforms
  • Tensorflow model saved in graph format
  • Converted to Lite format
TFLite
  • Model compression
  • Quantization
  • Optimized SIMD Kernels
  • Converter to generate model. Interpreter to run models
Benefits
  • Cross-Platform deployment
  • Inference speed increases
  • Binary size reduction
  • Hardware acceleration roadmap
#Build and save a Keras model
import tensorflow as tf

model = build_your_model()
tf.keras.experimental.export_saved_model(model, saved_model_dir)

#Convert the Keras SavedModel to a TensorFlow Lite model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
#To experiment with the new converter
converter.experimental_new_converter = True
tflite_model = converter.convert()
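The converter/interpreter split noted above means inference is a separate API; a minimal sketch of running the converted model with the TFLite Python interpreter (the dummy input is just for illustration):

import numpy as np
import tensorflow as tf

#load the converted model into the TFLite interpreter
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

#feed a dummy input matching the model's expected shape and dtype
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])
print(result.shape)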

Link1, Link2

Improve Performance of models
  • Reduce precision of weights (16-bit instead of 32-bit) - quantization; see the sketch after this list
  • Pruning - remove connections during training
  • Op kernels - ARM
  • Delegates - GPU (run on specialized hardware)
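A minimal sketch of the quantization point above, using the standard TFLiteConverter flags for post-training float16 quantization (saved_model_dir is assumed to be the directory from the earlier snippet):

import tensorflow as tf

saved_model_dir = 'saved_model'  #assumed path
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
#post-training quantization: reduce weight precision at conversion time
converter.optimizations = [tf.lite.Optimize.DEFAULT]
#store weights as float16 (16-bit instead of 32-bit)
converter.target_spec.supported_types = [tf.float16]
tflite_quant_model = converter.convert()
open('model_quant.tflite', 'wb').write(tflite_quant_model)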
TensorFlow Lite on microcontrollers is an impressive move. Going beyond mobile and cross-platform targets, this is a very impressive step.

Happy Learning!!!

November 11, 2019

Getting new ideas and perspectives

Some meaningful tips for new ideas, solutions, and creative thinking

1. Learn a lot of facts
2. Have deep knowledge of the background material
3. Spend time thinking about the problem
4. A curiosity about fundamental characteristics – what makes things tick (big picture and not details)
5. Strong drive to want to find out the answers
6. Pick your work colleagues based on their ability to stimulate good ideas
7. Narrow the scope of the problem if you are not making progress
8. Moving to a new situation will allow you to change your behavior since there aren’t preconceived expectations of your behavior.
9. Similarity to a known problem (experience helps)
10. Structural analysis (break problem into pieces); solve a microproblem first and then build up – divide & conquer
11. Ask conceptual questions about everyday things
12. Simplify and deep dive, Spend a lot of time reading
13. Work in isolation before participating in a group
14. You got to be a learning machine to improve your thinking
15. Drive Decision based on facts, behavior, intuition, apply lessons learned in the past
https://spinlab.me/2017/11/19/isaac-asimov-asks-how-do-people-get-new-ideas/
https://spinlab.me/2017/07/29/r-w-hamming-on-creativity/
https://spinlab.me/2017/07/30/claude-shannons-1952-lecture-on-creative-thinking/
https://qr.ae/TWgnaR
Another Interesting Read - Idea Generation - https://blog.samaltman.com/idea-generation
Good Environment to discuss ideas
Optimistic people
Think without the constraints
Good feel for the future
"Stay away from people who are world-weary and belittle your ambitions"
"You want to be able to project yourself 20 years into the future, and then think backwards from there. Trust yourself—20 years is a long time; it’s ok if your ideas about it seem pretty radical."
"Finally, a good test for an idea is if you can articulate why most people think it’s a bad idea, but you understand what makes it good"
Technical Debt and Product Success - https://medium.com/@romanpichler/technical-debt-and-product-success-42ec1c5718a7
Happy Learning!!!

November 05, 2019

Day #295 - Age - Emotion - Gender Detection Model

It seems I am aging faster than ever.

A deep learning model for age, gender, and emotion, with a real-time implementation. If it says I am 40, I have just a decade left to code and then transition to something else.

The years have progressed so fast that I already feel aged. I hope to keep coding and building things until my last day. Keep going...

Happy Learning!!!

November 04, 2019

Day #294 - Setting up Home Surveillance System

Finally, after a few months, I was able to set up a Home Surveillance System. Person Detection and Real-time alert.
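The exact stack isn't shown here; a minimal sketch of a person-detection-plus-alert loop, using OpenCV's built-in HOG person detector as a stand-in (the camera source and alert mechanism are assumptions):

import cv2

#built-in HOG-based person detector (a DNN model could be swapped in)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  #assumed camera index or RTSP URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) > 0:
        print('ALERT: person detected')  #stand-in for a real-time alert (SMS/push)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('Surveillance', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()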

Installation


Demo

Happy Learning!!!

November 01, 2019

Day #293 - Date with RASA - Chatbot Learning day :)

Found an interesting workshop with an end-to-end demo of RASA.

Training



Demo


#https://www.youtube.com/watch?v=xu6D_vLP5vY
#https://github.com/JustinaPetr/Weatherbot_Tutorial
Rasa based chatbot
Step 1 - Pre-Requisites
========================
1. Clone Project https://github.com/JustinaPetr/Weatherbot_Tutorial
2. Install Requirements from FULL Code Directory
cd E:\Code_Repo\Weatherbot_Tutorial\Full_Code
pip install -r requirements.txt
3. Download English Spacy model - To parse and get necessary information
python -m spacy download en
4. Install npm with node.js. https://www.npmjs.com/get-npm
https://nodejs.org/dist/v12.13.0/node-v12.13.0-x64.msi
5. In New Terminal
npm i -g rasa-nlu-trainer
Data annotation - rasa-nlu-trainer
Deconstruct messages into intents and entities
Intent - what the message is about
Entity - the location, place, or object in discussion
Example messages are annotated alongside intents and entities
Example intents:
Greeting
Goodbye
Asking
Step 2 - Data Annotation
============================
Step #1 - File data.json
{
  "rasa_nlu_data": {
    "common_examples": [
      {
        "text": "Hello",
        "intent": "greet",
        "entities": []
      },
      {
        "text": "goodbye",
        "intent": "goodbye",
        "entities": []
      }
    ]
  }
}
Step #2 - Launch the trainer in Anaconda console
1. Go to E:\Code_Repo\Weatherbot_Tutorial\Full_Code_Latest>
2. Run the command rasa-nlu-trainer
3. Add custom intents and examples
4. All additional examples are present in the git code in the updated data.json file
Step #3 - Train model
=======================
1. Configuration File
- Provide parameters
- Pipeline - Feature extractors to fetch messages
- Model save path
- Data path for annotated data
{
  "pipeline": "spacy_sklearn",
  "path": "./models/nlu",
  "data": "./data/data.json"
}
config_spacy.json file
2. nlu_model.py - script for model training
#import libraries
from rasa_nlu.converters import load_data
#load configuration files
from rasa_nlu.config import RasaNLUConfig
#load trainer
from rasa_nlu.model import Trainer

def train_nlu(data, config, model_dir):
    training_data = load_data(data)
    trainer = Trainer(RasaNLUConfig(config))
    trainer.train(training_data)
    model_directory = trainer.persist(model_dir, fixed_model_name='weathernlu')

if __name__ == '__main__':
    train_nlu('./data/data.json', 'config_spacy.json', './models/nlu')
#Run this to train the model
#Models are created in the models folder
3. Code to test the model - additional code in nlu_model.py
#import libraries
from rasa_nlu.converters import load_data
#load configuration files
from rasa_nlu.config import RasaNLUConfig
#load trainer
from rasa_nlu.model import Trainer
from rasa_nlu.model import Metadata, Interpreter

def train_nlu(data, config, model_dir):
    training_data = load_data(data)
    trainer = Trainer(RasaNLUConfig(config))
    trainer.train(training_data)
    model_directory = trainer.persist(model_dir, fixed_model_name='weathernlu')

def run_nlu():
    #load the trained model
    interpreter = Interpreter.load('./models/nlu/default/weathernlu', RasaNLUConfig('config_spacy.json'))
    print(interpreter.parse(u"I am planning my holiday to Barcelona, I wonder what the weather is out there"))

if __name__ == '__main__':
    run_nlu()
4. Changes to run with specific package versions (the code runs with these versions)
pip install rasa_core==0.10.3
pip install rasa-nlu==0.11.5
Rerun - nlu_model.py file
Step #4 - Building the conversation
======================================
1. Dialogue management predicts the next action. It is configured in the domain file, a yml file.
2. The key parts are:
slots - placeholders for the context of the conversation
intents
entities
templates
actions
3. All of these details are used for predictions
Observation - slots and entities share the same attributes
templates - text responses for users (multiple alternative answers)
weather_domain.yml
slots:
  location:
    type: text

intents:
  - greet
  - goodbye
  - inform

entities:
  - location

templates:
  utter_greet:
    - 'Hello, how can I help?'
  utter_goodbye:
    - 'ttyl'
  utter_ask_location:
    - 'In what location?'

actions:
  - utter_greet
  - utter_goodbye
  - utter_ask_location
  - actions.ActionWeather
Step #5 - Custom Action creation file
========================================
actions.py
from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals
from rasa_core.actions.action import Action
from rasa_core.events import SlotSet

class ActionWeather(Action):
    def name(self):
        return 'action_weather'

    def run(self, dispatcher, tracker, domain):
        from apixu.client import ApixuClient
        api_key = ""
        #Authentication
        client = ApixuClient(api_key)
        loc = tracker.get_slot('location')
        current = client.getCurrentWeather(q=loc)
        #parse and extract the required details
        country = current['location']['country']
        city = current['location']['name']
        condition = current['current']['condition']['text']
        temperature_c = current['current']['temp_c']
        humidity = current['current']['humidity']
        wind_mph = current['current']['wind_mph']
        response = """It is currently {} in {}: temperature {}C, humidity {} and wind {} mph""".format(condition, city, temperature_c, humidity, wind_mph)
        dispatcher.utter_message(response)
        #custom action returns the slot so it persists for later turns
        return [SlotSet('location', loc)]
#this action is registered under actions in weather_domain.yml
Step #6 - Story formation
==========================
New file stories.md (markdown) in the data folder
stories.md
==========
## story 01
* greet
  - utter_greet
## story 02
* goodbye
  - utter_goodbye
## story 03
* inform
  - utter_ask_location
## story 04
* inform
  - action_weather
Step #7
========
Start an online session
using train_init.py and train_online.py
train_init.py
- trains the dialogue management model
- an agent is used to train the model
- Keras policies are used to train the model
- data file path
- augmentation factor to generate more stories
- save the model with the persist function
pip install rasa-nlu==0.13.1
Modified train_init.py
=======================
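A rough sketch of what train_init.py does, assuming the rasa_core 0.10-era API used in this tutorial (exact arguments may differ by version):

from rasa_core.agent import Agent
from rasa_core.policies.keras_policy import KerasPolicy
from rasa_core.policies.memoization import MemoizationPolicy

#the agent wraps the domain and the dialogue policies
agent = Agent('weather_domain.yml',
              policies=[MemoizationPolicy(), KerasPolicy()])

#augmentation_factor stitches stories together to generate more training data
agent.train('data/stories.md',
            augmentation_factor=50,
            epochs=300,
            batch_size=50,
            validation_split=0.2)

#save the trained dialogue model
agent.persist('models/dialogue')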
Step #8
========
train_online.py
- import libraries
- parser to parse and extract features
- load the model
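A rough sketch of train_online.py under the same version assumptions - the agent loads the trained NLU interpreter and learns interactively from console conversations:

from rasa_core.agent import Agent
from rasa_core.channels.console import ConsoleInputChannel
from rasa_core.interpreter import RasaNLUInterpreter
from rasa_core.policies.keras_policy import KerasPolicy
from rasa_core.policies.memoization import MemoizationPolicy

#parse user messages with the previously trained NLU model
interpreter = RasaNLUInterpreter('./models/nlu/default/weathernlu')
agent = Agent('weather_domain.yml',
              policies=[MemoizationPolicy(), KerasPolicy()],
              interpreter=interpreter)

#online training: correct the bot's predictions turn by turn in the console
agent.train_online('data/stories.md',
                   input_channel=ConsoleInputChannel(),
                   epochs=300)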
Retrain Model
python -m rasa_core.train -s data/stories.md -d weather_domain.yml -o models/dialogue --epochs 300
Step #9
=======
Run online Training
python -m rasa_nlu.train -c nlu_model_config.yml --fixed_model_name current --data ./data/nlu.md --path models/ --project nlu
Step #10
=========
dialogue_management_model.py
Final code to demo
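A minimal sketch of that final script, again assuming the tutorial-era API (the response format returned by handle_message varies across rasa_core versions):

from rasa_core.agent import Agent
from rasa_core.interpreter import RasaNLUInterpreter

#load the NLU model and the trained dialogue model
interpreter = RasaNLUInterpreter('./models/nlu/default/weathernlu')
agent = Agent.load('models/dialogue', interpreter=interpreter)

print("Your bot is ready to talk! Type a message, or 'stop' to quit")
while True:
    message = input()
    if message == 'stop':
        break
    for response in agent.handle_message(message):
        print(response)  #older versions return strings; newer ones return dicts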
Summary
========
- Install requirements from the Full_Code directory only
- Run the train block in nlu_model.py
- Run train_init.py
- Run train_online.py (actual conversations with the chatbot)
- Action_Listen (wait for input)
- Experimented with the greet - ask - respond - quit workflow
- Run the demo, dialogue_management_model.py
Next Reads
https://towardsdatascience.com/create-chatbot-using-rasa-part-1-67f68e89ddad
https://medium.com/analytics-vidhya/learn-how-to-build-and-deploy-a-chatbot-in-minutes-using-rasa-5787fe9cce19
https://forum.rasa.com/t/what-is-the-recommended-setup-for-production-deployemnt/1882

Happy Learning!!!