"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

October 30, 2019

Day #292 - GStreamer on Windows 10

Note - Do "complete" instead of "typical" for both cases for it to work

This link was useful to experiment with and follow.

Step 1. Download installer1 from link

Step 2. Download installer2 from link

Install both of them on Windows, then go to the folder F:\gstreamer\1.0\x86_64\bin
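As a quick sanity check that the install works, GStreamer's built-in test pattern source can be used before touching the webcam:

gst-launch-1.0.exe videotestsrc ! videoconvert ! autovideosink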

Step #3 - Command
gst-launch-1.0.exe -v ksvideosrc device-index=0 ! video/x-raw,format=YUY2,width=320,height=240,framerate=30/1,pixel-aspect-ratio=1/1 ! videoconvert ! autovideosink

Stream Output

Next post - streaming from Raspberry Pi to Windows

Happy Learning!!!

October 29, 2019

Day #291 - Working with chatterbot - Windows 10

#pip install --ignore-installed PyYAML
#pip install chatterbot
#pip install chatterbot-corpus
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer
from chatterbot.trainers import ListTrainer

welcomebot = ChatBot("Charlie")
#The list trainer takes a list of statements that represent a conversation
initialconversation = [
    "Hello",
    "Hi there!",
    "How are you doing?",
    "I'm doing great.",
    "That is good to hear",
    "Thank you.",
    "You're welcome."
]
welcomebottrainer = ListTrainer(welcomebot)
welcomebottrainer.train(initialconversation)
#The bot selects the closest matching response by searching for the known statement that best matches the input
response = welcomebot.get_response("Hi there")
print(response)
response = welcomebot.get_response("Hello")
print(response)

coursebot = ChatBot("Charlie")
coursebottrainer = ListTrainer(coursebot)
querycourses = [
    "Share courses",
    "We offer courses in AI, ML, Devops, Big Data",
    "Great",
    "Share me AI Details",
    "It has machine learning basics to advanced models",
    "What about big data",
    "hadoop, streaming tools and kafka",
    "What about Devops",
    "Docker, Kubernetes"
]
coursebottrainer.train(querycourses)
response = coursebot.get_response("Share courses")
print(response)
response = coursebot.get_response("Share me AI Details")
print(response)

paymentbot = ChatBot("Charlie")
paymentbottrainer = ListTrainer(paymentbot)
feescourses = [
    "Devops Fees",
    "Online 15K, Offline 10K",
    "AI Fees",
    "Online 25K, Offline 10K",
    "Big Data Fees",
    "Online 35K, Offline 10K",
]
paymentbottrainer.train(feescourses)
response = paymentbot.get_response("Devops Fees")
print(response)
response = paymentbot.get_response("Big Data Fees")
print(response)

Happy Learning!!!

Day #290 - Yolo from OpenCV DNN - Windows 10

OpenCV ships examples that invoke deep networks using the readNet module.

Key Methods (a minimal usage sketch follows this list)
  • cv.dnn.readNet - Load the network
  • cv.dnn.NMSBoxes - Non Max Suppression
  • cv.dnn.blobFromImage - Input for Deep Network
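Before the full sample, a minimal sketch of how these three calls fit together (the weight/config/image paths are placeholders, and the parsing assumes YOLO-style output rows):

import cv2 as cv
import numpy as np

# Load the network (placeholder weight/config paths)
net = cv.dnn.readNet('yolov2.weights', 'yolo.cfg')

img = cv.imread('frame.jpg')
h, w = img.shape[:2]

# Build the network input: resize to 416x416 and scale pixels by 1/255
blob = cv.dnn.blobFromImage(img, scalefactor=0.00392, size=(416, 416), swapRB=True, crop=False)
net.setInput(blob)
outs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences = [], []
for out in outs:
    for det in out:
        scores = det[5:]
        conf = float(scores[np.argmax(scores)])
        if conf > 0.5:
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)

# Non-max suppression drops overlapping duplicate boxes
keep = cv.dnn.NMSBoxes(boxes, confidences, score_threshold=0.5, nms_threshold=0.4)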
Steps

Package Install - pip install opencv-python
Step #1 - git clone https://github.com/opencv/opencv
Step #2 - Go to the folder opencv/samples/dnn in the cloned repo
Link https://docs.opencv.org/master/da/d9d/tutorial_dnn_yolo.html
Step #3 - Steps to Run
example_dnn_object_detection --config=[PATH-TO-DARKNET]/cfg/yolo.cfg --model=[PATH-TO-DARKNET]/yolo.weights --classes=object_detection_classes_pascal_voc.txt --width=416 --height=416 --scale=0.00392 --input=[PATH-TO-IMAGE-OR-VIDEO-FILE] --rgb
Step #4 - cd E:\Code_Repo\opencv\samples\dnn
Step #5 - For Image
python object_detection.py --config="E:\\Code_Repo\\darkflow\\cfg\\yolo.cfg" --model="E:\\Code_Repo\\darkflow\\yolov2.weights" --classes="E:\\Code_Repo\\darkflow\\cfg\\coco.names" --width=416 --height=416 --scale=0.00392 --input="E:\Car_Detection\Data1\\frame214000.jpg" --rgb
Step #6 - For Video
python object_detection.py --config="E:\\Code_Repo\\darkflow\\cfg\\yolo.cfg" --model="E:\\Code_Repo\\darkflow\\yolov2.weights" --classes="E:\\Code_Repo\\darkflow\\cfg\\coco.names" --width=416 --height=416 --scale=0.00392 --input="E:\Car_Detection\\ch10_20190304115220.mp4" --rgb
Another Good Ref - https://github.com/rdeepc/ExploreOpencvDnn
Results




Happy Learning!!!

October 25, 2019

Day #289 - Example code for Class Creation, Data Persistence, Email and Phone number Validation

Example code for Class Creation, Data Persistence, Email and Phone number validators

#pip install phonenumbers
#pip install validate_email
import phonenumbers
from phonenumbers import carrier
from phonenumbers.phonenumberutil import number_type
from validate_email import validate_email
import pickle

userdata = r'E:\Code_Repo\messaging\phonename.dictionary'
useremail = r'E:\Code_Repo\messaging\phoneemail.dictionary'
phonename = {}
phoneemail = {}

class Person:
    def __init__(self, name, phonenumber, emailaddress):
        #Constructor
        self.name = name
        self.phonenumber = phonenumber
        self.emailaddress = emailaddress

    def validate_phonenumber(self):
        #Phone number validation (the number must include a country code, e.g. +91)
        status = carrier._is_mobile(number_type(phonenumbers.parse(self.phonenumber)))
        return status

    def validate_email_addr(self):
        #Email validation
        is_valid = validate_email(self.emailaddress)
        return is_valid

def Save_data():
    guestuser1 = Person('Raj', '+91-7406947660', 'siva@siva.com')
    guestuser2 = Person('Raja', '+91-7406947760', 'siva1@siva.com')
    print(guestuser1.validate_email_addr())
    print(guestuser1.validate_phonenumber())
    #Save data
    phonename[guestuser1.phonenumber] = guestuser1.name
    phoneemail[guestuser1.emailaddress] = guestuser1.emailaddress
    phonename[guestuser2.phonenumber] = guestuser2.name
    phoneemail[guestuser2.emailaddress] = guestuser2.emailaddress
    #Persist data
    with open(userdata, 'wb') as config_dictionary_file:
        pickle.dump(phonename, config_dictionary_file)
    with open(useremail, 'wb') as config_dictionary_file:
        pickle.dump(phoneemail, config_dictionary_file)

def Read_data():
    #Read data back
    with open(userdata, 'rb') as config_dictionary_file:
        userdatavalues = pickle.load(config_dictionary_file)
    for key, datavalues in userdatavalues.items():
        print(key)
        print(datavalues)
    with open(useremail, 'rb') as config_dictionary_file:
        useremailvalues = pickle.load(config_dictionary_file)
    for key, datavalues in useremailvalues.items():
        print(key)
        print(datavalues)

Save_data()
Read_data()
Happy Learning!!!

October 24, 2019

Day #288 - Messaging using pyzmq

pip install pyzmq

Example - python usage
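A minimal request/reply pair as a sketch of basic pyzmq usage (port 5555 is an arbitrary choice):

# server.py - replies to every request
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")

while True:
    message = socket.recv_string()
    print("Received:", message)
    socket.send_string("ack: " + message)

# client.py - sends a request and waits for the reply
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")
socket.send_string("hello")
print(socket.recv_string())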

Output


Real world use case - imagezmq: Transporting OpenCV images

Happy Learning!!!

October 19, 2019

Day #287 - Dlib Custom Detector

Learnings
  • Aspect ratio needs to be maintained
  • Trained for only one type of object (Vim dishwasher)
  • Trained with just a few images - 30 for training and 5 for testing
  • Environment - Windows 10; after all the setup and steps, it takes ~45 minutes end to end for labeling, training, and testing
Data Set
  • Custom Shelf and Vim dishwash Detection
Image Pre-requisites
  • I resized the data to 512 x 512 to be consistent (a quick resize sketch follows this list)
  • Place all images in the train and test directories respectively before the next steps
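A minimal sketch of that resize step (directory paths are placeholders):

import os
import cv2

src_dir = r'E:\data\raw'      # placeholder input folder
dst_dir = r'E:\data\resized'  # placeholder output folder
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    img = cv2.imread(os.path.join(src_dir, name))
    if img is None:
        continue  # skip files that are not images
    # Note: a plain resize distorts the aspect ratio; pad first if it must be preserved
    cv2.imwrite(os.path.join(dst_dir, name), cv2.resize(img, (512, 512)))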
Label Training Data
  • Img Lab Installation - Refer previous post
  • Use Shift key to select rectangle
  • Use Alt D to delete region
  • Each tool has its own commands
Step 1 - Specify XML and image path
=====================================
E:\Code_Repo\dlib\tools\imglab\build\Release\imglab.exe -c E:\Code_Repo\dlib_obj_count\ShelfData\train\train.xml E:\Code_Repo\dlib_obj_count\ShelfData\train
Step 2
=======
cd E:\Code_Repo\dlib_obj_count
Step 3
=======
imglab.exe E:\Code_Repo\dlib_obj_count\ShelfData\train\train.xml



Label Testing Data

Step 1
========
E:\Code_Repo\dlib\tools\imglab\build\Release\imglab.exe -c E:\Code_Repo\dlib_obj_count\ShelfData\test\test.xml E:\Code_Repo\dlib_obj_count\ShelfData\test
Step 2
======
cd E:\Code_Repo\dlib_obj_count
Step 3
======
imglab.exe E:\Code_Repo\dlib_obj_count\ShelfData\test\test.xml

Training Code


#Base code - https://gist.github.com/atotto/c1ccbfa44ee70a476816f6389834945e
#Minor changes for my requirements
import os
import sys
import dlib
options = dlib.simple_object_detector_training_options()
options.add_left_right_image_flips = False
options.C = 5
options.num_threads = 2
options.be_verbose = True
training_xml_path = r'E:\Code_Repo\dlib_obj_count\ShelfData\train\train.xml'
testing_xml_path = r'E:\Code_Repo\dlib_obj_count\ShelfData\test\test.xml'
dlib.train_simple_object_detector(training_xml_path, r'E:\Code_Repo\dlib_obj_count\ShelfData\detector.svm', options)
print("")
print("Training accuracy")
print(dlib.test_simple_object_detector(training_xml_path, r'E:\Code_Repo\dlib_obj_count\ShelfData\detector.svm'))
print("Testing accuracy")
print(dlib.test_simple_object_detector(testing_xml_path, r'E:\Code_Repo\dlib_obj_count\ShelfData\detector.svm'))


Test Code

import dlib
import cv2

detector = dlib.simple_object_detector(r'E:\Code_Repo\dlib_obj_count\ShelfData\detector.svm')
filepath = r'E:\Code_Repo\dlib_obj_count\ShelfData\test\25.png'
img = cv2.imread(filepath, 1)
dets = detector(img)
# Draw a red box around every detection
for d in dets:
    cv2.rectangle(img, (d.left(), d.top()), (d.right(), d.bottom()), (0, 0, 255), 2)
# Display the resulting frame
cv2.imshow("frame", img)
cv2.waitKey(0)
# When everything is done, close the windows
cv2.destroyAllWindows()


Result

Day #286 - Working with imglab Annotation Tool

Working with imglab annotation tool.


Step 1
======
git clone https://github.com/davisking/dlib
Step 2
=======
cd dlib/tools/imglab
Step 3
=======
mkdir build
cd build
Step 4
=======
cmake ..
Step 5
======
cmake --build . --config Release
Step 6
======
cd E:\Code_Repo\dlib\tools\imglab\build
Step 7
======
E:\Code_Repo\dlib\tools\imglab\build\Release\imglab.exe
Happy Learning!!!

October 17, 2019

Learning Moments

For the last three or four days, I was breaking my head over a segmentation task. There were a ton of tutorials; everywhere I picked up code, it ended up not working. Finally, I managed to segment it. When we sit and learn alone, there will definitely be moments of long failure. Do whatever you do with a bit of curiosity and interest. Learning is an ongoing habit. We are not used to proper learning with focus, attention, curiosity, passion, and experimentation.

Happy Learning!!!

Day #285 - Experimenting with Unet Segmentation

U-Net

  • Symmetric U-Shape - Convolutions + Poolings
  • Up-Convolutions - Upsampled layers
  • Encoder / Decoder
  • Contraction / Expansion
  • Skip Connections to learn pixel information

There are a ton of tutorials out there, but it takes time to find what works for us :) in our environment. I was experimenting with U-Net based segmentation over the past few days. I will share my learnings on what worked for me.
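To make the encoder/decoder and skip-connection ideas above concrete, here is a minimal Keras sketch of one contraction step and its matching expansion step (filter counts are arbitrary placeholders):

from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate
from keras.models import Model

inputs = Input((256, 256, 1))
# Contraction: convolution + pooling
c1 = Conv2D(16, 3, activation='relu', padding='same')(inputs)
p1 = MaxPooling2D()(c1)
# Bottleneck
c2 = Conv2D(32, 3, activation='relu', padding='same')(p1)
# Expansion: upsample, then a skip connection concatenates the encoder features
u1 = UpSampling2D()(c2)
u1 = concatenate([u1, c1])   # skip connection carries pixel-level detail
c3 = Conv2D(16, 3, activation='relu', padding='same')(u1)
# Per-pixel sigmoid gives the segmentation mask
outputs = Conv2D(1, 1, activation='sigmoid')(c3)
model = Model(inputs, outputs)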

Step 1 - The initial image is

I am interested in segmenting the parts (products)

Step 2 - Resize the image to 256 x 256



Step 3 - The next step is to binarize the image

This is the source image. The target image is


Step 4 - Tooling - I used Paint 3D with a white brush to paint the required parts for my needs
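If the contrast allows, plain thresholding can produce the binary target instead of painting it by hand - a sketch with placeholder file names:

import cv2

img = cv2.imread('shelf.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder file name
img = cv2.resize(img, (256, 256))
# Pixels above the threshold become white (255), the rest black (0)
_, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
cv2.imwrite('shelf_mask.png', mask)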

Step 5 - Follow the steps above and create the train and label pairs (source and segmented images)

Step 6 - Train the model. I took the repo and customized it - link

Step 7 - The predictions for the test image are




Next Demo


 Happy Learning!!!


October 15, 2019

Day #284 - OpenCV Error in Windows Server 2012


  • Turn windows features on or off
  • Skip the roles screen and directly go to Feature screen
  • Select "Desktop Experience" under "User Interfaces and Infrastructure"

This answer was useful - link

Happy Learning!!!

October 11, 2019

Day #283 - Clustering to group similar Images


For large retail datasets, clustering becomes essential before object detection so that each cluster can be focused on and taken forward. Today's post is about clustering images into similar groups:

  • Generate Feature Data based on VGG / Resnet
  • Cluster them using Kmeans
  • Result output to their respective cluster directory

#Base Code - https://medium.com/@franky07724_57962/using-keras-pre-trained-models-for-feature-extraction-in-image-clustering-a142c6cdf5b1
#Modified for our custom need
from keras.preprocessing import image
from keras.applications.vgg16 import VGG16
from keras.applications.resnet50 import ResNet50
import numpy as np
import os
from sklearn.cluster import KMeans
#Note: the ResNet50 preprocess_input is reused for the VGG16 path as well, as in the base code
from keras.applications.resnet50 import preprocess_input, decode_predictions
import shutil

datadir = r'E:\Code_Repo\Images'
output_dir = r'E:\Code_Repo\results'

def createFolder(directory):
    try:
        if not os.path.exists(directory):
            os.makedirs(directory)
    except OSError:
        print('Error: Creating directory. ' + directory)

def VGG_Cluster(numberofclusters):
    #Extract VGG16 features for every image in datadir
    feature_list = []
    model = VGG16(weights='imagenet', include_top=False)
    files = os.listdir(datadir)
    for file in files:
        img_path = datadir + '\\' + file
        img = image.load_img(img_path, target_size=(224, 224))
        img_data = image.img_to_array(img)
        img_data = np.expand_dims(img_data, axis=0)
        img_data = preprocess_input(img_data)
        feature = model.predict(img_data)
        feature_list.append(np.array(feature).flatten())
    feature_list_np = np.array(feature_list)
    #Cluster the feature vectors with KMeans
    kmeans = KMeans(n_clusters=numberofclusters, random_state=0).fit(feature_list_np)
    labelresult = kmeans.labels_
    print(kmeans.labels_)
    print(kmeans.cluster_centers_)
    print('VGG Results')
    #Create a directory per cluster
    for i in range(numberofclusters):
        createFolder(output_dir + '\\' + str(i) + '\\')
    #Copy each image into its cluster's directory
    for i in range(len(files)):
        img_path = datadir + '\\' + files[i]
        print(files[i])
        shutil.copy(img_path, output_dir + '\\' + str(labelresult[i]) + '\\')
        print(labelresult[i])

def Resnet_Cluster(numberofclusters):
    #Same pipeline with ResNet50 as the feature extractor
    feature_list = []
    model = ResNet50(weights='imagenet', include_top=False)
    files = os.listdir(datadir)
    for file in files:
        img_path = datadir + '\\' + file
        img = image.load_img(img_path, target_size=(224, 224))
        img_data = image.img_to_array(img)
        img_data = np.expand_dims(img_data, axis=0)
        img_data = preprocess_input(img_data)
        feature = model.predict(img_data)
        feature_list.append(np.array(feature).flatten())
    feature_list_np = np.array(feature_list)
    kmeans = KMeans(n_clusters=numberofclusters, random_state=0).fit(feature_list_np)
    labelresult = kmeans.labels_
    print(kmeans.labels_)
    print(kmeans.cluster_centers_)
    print('Resnet Results')
    #Create a directory per cluster
    for i in range(numberofclusters):
        createFolder(output_dir + '\\' + str(i) + '\\')
    #Copy each image into its cluster's directory
    for i in range(len(files)):
        img_path = datadir + '\\' + files[i]
        print(files[i])
        shutil.copy(img_path, output_dir + '\\' + str(labelresult[i]) + '\\')
        print(labelresult[i])

#VGG_Cluster(3)
Resnet_Cluster(3)

Input - Mixed Set of Images
Output 
Cluster 1
Cluster 2

Cluster 3
More Reads - Example (in R)

Happy Learning!!!

October 10, 2019

Day #282 - Retail Product Detection / Retail Object Detection


Paper #1 - Automatic Detection of Out-Of-Shelf Products in the Retail Sector Supply Chain

Rule-based information system
  • OOS Contribution factors - Measurement of product availability, Measurement of shelf availability
  • Approach - Radio-Frequency Identification based
  • Rule - “IF (a product is fast-moving) AND (has low sales volatility) AND (POS sales = 0 for today) THEN the product is OOS”
  • In other words, if a fast-moving product's sales count is zero, there is a problem
  • Detection approach - Historical data -> Patterns -> Rules -> Apply to current data
Paper #2 - Retail Shelf Analytics Through Image Processing and Deep Learning
Analysis
  • Tasks - Automatic product checkout using segmentation, Object detection of products on store shelves
  • Approach - Shelf Image -> Detector for regions (Class, BBox, Mask) -> Crop Each Region -> Object -> Feature Extractor -> KNN Classifier

Paper #3 - A deep learning pipeline for product recognition on store shelves
Analysis
  • Shelf Image -> Region proposals -> Crop -> Reference Images -> Refinement -> Detection

Paper #4 - Planogram Compliance Checking Based on Detection of Recurring Patterns
Analysis
  • Shelf Image -> Region Partition -> Recurring Pattern Detection -> Compliance Checking

High Level Recommendations (Apply Combination of techniques)
  • Shelf Image -> Region Partition -> Region proposals -> Detect Recurring Pattern, Reference Images for refinement -> Prediction
Project Analysis 
The Shelf Detector System For Retail Stores Using Object Detection

pip install -r requirements.txt
python train_obj_detector.py testNutella1

Code Details - https://github.com/bobquest33/dlib_obj_count/blob/master/nutella.pdf
Tool Used - https://imglab.in/

Interesting Read
Retail Product Recognition on Supermarket Shelves

Paper #5 - Rethinking Object Detection in Retail Stores
Key Notes
  • Simultaneous object localization and counting, abbreviated as Locount
  • Algorithms to localize groups of objects of interest with the number of instances
  • Most of the state-of-the-art object detectors use non-maximal suppression (NMS) to post-process object proposals to produce final detections
New Approach
  • Cascaded localization and counting network (CLCNet)
  • Localize groups of objects of interest with the numbers of instances
Dataset
  • Grozi-120 dataset
  • Freiburg Groceries dataset
  • GameStop dataset
  • Retail-121 dataset
  • Sku110k dataset
  • TGFS dataset
Cascade R-CNN [1] proposes a multi-stage object detection architecture, which is formed by a sequence of detectors trained with increasing IoU thresholds

Locount Dataset
  • 140 common commodities, including 9 big subclasses
  • Cascaded localization and counting network (CLCNet)
    • count-regression strategy for counting
    • count-classification strategy for counting
  • Locount to localize groups of objects with the instance numbers, which is more practical in retail scenarios



Happy Learning!!!

October 09, 2019

Day #281 - Yolo based Object Counting and Duplicate Removal

Today's learning is YOLO-based object counting, with duplicate removal using the intersection-over-union metric.

import cv2
import sys
import os
#Windows 10 Setup - darkflow must be on the path before the import below
sys.path.append(r"E:\Code_Repo\darkflow\darkflow")
from darkflow.net.build import TFNet

options = {
    'model': 'E:\\Code_Repo\\darkflow\\cfg\\yolo.cfg',
    'load': 'E:\\Code_Repo\\darkflow\\yolov2.weights',
    'threshold': 0.14,
    'gpu': 1.0
}
tfnet = TFNet(options)

#https://github.com/tejaslodaya/car-detection-yolo/blob/master/app_utils.py
def iou(box1, box2):
    """Implement the intersection over union (IoU) between box1 and box2
    Arguments:
    box1 -- first box, list object with coordinates (x1, y1, x2, y2)
    box2 -- second box, list object with coordinates (x1, y1, x2, y2)
    """
    # Intersection rectangle of box1 and box2; clamp to zero so that
    # non-overlapping boxes do not produce a negative area
    xi1 = max(box1[0], box2[0])
    yi1 = max(box1[1], box2[1])
    xi2 = min(box1[2], box2[2])
    yi2 = min(box1[3], box2[3])
    inter_area = max(yi2 - yi1, 0) * max(xi2 - xi1, 0)
    # Union area: Union(A,B) = A + B - Inter(A,B)
    box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
    box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union_area = box1_area + box2_area - inter_area
    return inter_area / union_area

def Object_Count_Yolo(imagefilepath):
    boxes = []
    boxlabels = []
    finalboxes = []
    finalboxlabels = []
    img = cv2.imread(imagefilepath, cv2.IMREAD_COLOR)
    result = tfnet.return_predict(img)
    for data in result:
        coordinates = [data['topleft']['x'], data['topleft']['y'], data['bottomright']['x'], data['bottomright']['y']]
        boxes.append(coordinates)
        boxlabels.append(data['label'])
    # Keep a box only if no later box overlaps it heavily (IoU > 0.7)
    for k in range(0, len(boxes)):
        selectflag = 0
        for m in range(k + 1, len(boxes)):
            iouvalue = iou(boxes[k], boxes[m])
            if iouvalue > .7:
                selectflag = 1
        if selectflag == 0:
            finalboxes.append(boxes[k])
            finalboxlabels.append(boxlabels[k])
    return len(finalboxes)

def diffObjectCount(SourceImagepath, DestinationImagePath):
    #Compute the object count for each image and report the difference
    SourceObjectCount = Object_Count_Yolo(SourceImagepath)
    DestinationObjectCount = Object_Count_Yolo(DestinationImagePath)
    print(SourceObjectCount)
    print(DestinationObjectCount)
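A hypothetical invocation (image paths are placeholders):

diffObjectCount(r'E:\images\shelf_before.jpg', r'E:\images\shelf_after.jpg')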
Happy Learning!!!

October 08, 2019

Day #280 - Human detection and Tracking

Project #1 - Human Detection and Tracking 

Overview
  • Detecting a human and its face in a given video and storing Local Binary Pattern Histograms
  • Recognize them in any other videos
  • Local Binary Pattern Histogram - a type of visual descriptor; each pixel is compared with its neighbours in clockwise order (see the sketch after this list)
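A minimal sketch of LBPH usage with OpenCV (requires opencv-contrib-python; file names and labels are placeholders):

import cv2
import numpy as np

# The LBPH face recognizer lives in the contrib module (cv2.face)
recognizer = cv2.face.LBPHFaceRecognizer_create()

# Train on grayscale face crops with integer labels (placeholder files)
faces = [cv2.imread('person1_face.png', cv2.IMREAD_GRAYSCALE),
         cv2.imread('person2_face.png', cv2.IMREAD_GRAYSCALE)]
labels = np.array([0, 1])
recognizer.train(faces, labels)

# Predict the label of an unseen face crop
label, confidence = recognizer.predict(cv2.imread('unknown_face.png', cv2.IMREAD_GRAYSCALE))
print(label, confidence)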
Execution Steps
Clone the project
Step 1 - python create_face_model.py -i data
Step 2 - python main.py -v video

Project #2 - Person-Detection-and-Tracking  (Pending Execution)

Overview
  • Person detection in real time is done with the help of the Single Shot MultiBox Detector
  • Single Shot MultiBox Detector
  • Tracking - a Kalman Filter is fed with the velocity, position, and direction of the person, which helps it predict the future location
Single Shot MultiBox Detector
  • The core of SSD is predicting category scores and box offsets for a fixed set of default bounding boxes using small convolutional filters applied to feature maps
  • The key difference between training SSD and training a typical detector that uses region proposals, is that ground truth information needs to be assigned to specific outputs in the fixed set of detector outputs
Execution Steps
Clone the project https://github.com/ambakick/Person-Detection-and-Tracking

Execute - camera.py in Spyder

Project #3 - Tracker Types Demo Project (Pending Execution)

Overview
  • Track Multiple faces
  • Download and Experiment
Run Below Demos
demo - track multiple faces.py
Multiple_Trackers.py
face_eye.py
distance_to_camera.py

Datasets - Link

Object Motion Detection and Tracking for Video Surveillance
Measuring size and distance with OpenCV
Calculate X, Y, Z Real World Coordinates from Image Coordinates using OpenCV

Happy Learning!!!

October 07, 2019

Learning and Growth

Every job / role / growth is more about understanding the problem, the perspective, and the business and technical dimensions. AI has a lot of tools, languages, architectures, and research insights. It is an ongoing effort to evaluate all possible solutions, productize within certain boundaries, and keep learning to expand into other architectures. Be open to learning; titles and roles are a cascade effect of your consistent efforts.

Keep Going!!!


October 03, 2019

Day #279 - Multi-Object Tracking

Project #1 - vehicle-speed-check

Clone Repository - Link

On Anaconda prompt,
cd vehicle-speed-check
pip install -r requirements.txt
python speed_check.py

Comments - A very good project to get started with. The logic of speed computation with respect to frames per second and pixel movement can be reused in other use cases. It uses the dlib correlation tracker, and the tracking logic can be reused in other similar implementations.
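The core of that speed logic, as a sketch (pixels_per_metre is a scene-specific calibration constant; the value here is only illustrative):

def estimate_speed_kmph(pixel_distance, fps, pixels_per_metre=8.8):
    # Convert pixel displacement per frame into metres per second, then km/h
    metres_per_second = (pixel_distance / pixels_per_metre) * fps
    return metres_per_second * 3.6

# e.g. an object moving 4 pixels between frames at 30 fps
print(estimate_speed_kmph(4, 30))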

Project #2 - Simple Example code (ROI Based)

#pip uninstall opencv-python
#pip uninstall opencv-contrib-python
#pip install opencv-contrib-python
#Learnt from https://www.learnopencv.com/multitracker-multiple-object-tracking-using-opencv-c-python/
#Rewrote based on my requirements
import cv2
import sys

# Read video
video = cv2.VideoCapture(r"E:\Code_Repo\vehicle-speed-check\cars.mp4")
if not video.isOpened():
    print("Could not open video")
    sys.exit()

# Create three KCF trackers
trackers = [cv2.TrackerKCF_create() for _ in range(3)]

# Read and downscale the first frame
ok, frame1 = video.read()
if not ok:
    print('Cannot read video file')
    sys.exit()
height, width, layers = frame1.shape
new_h = int(height / 2)
new_w = int(width / 2)
frame = cv2.resize(frame1, (new_w, new_h))

# Select three regions to track
bboxdata = []
for i in range(3):
    bbox = cv2.selectROI(frame, False)
    bboxdata.append(bbox)
    print(bbox)
print(bboxdata)

# Initialize each tracker with the first frame and its bounding box
ok1 = trackers[0].init(frame, bboxdata[0])
ok2 = trackers[1].init(frame, bboxdata[1])
ok3 = trackers[2].init(frame, bboxdata[2])

flag = 0
count = 0
while True:
    # Read a new frame
    ok1, frame1 = video.read()
    if not ok1:
        break
    height, width, layers = frame1.shape
    frame = cv2.resize(frame1, (int(width / 2), int(height / 2)))
    i = 0
    for tracker in trackers:
        result, bbox = tracker.update(frame)
        i = i + 1
        if result:
            # Tracking success - draw the box
            p1 = (int(bbox[0]), int(bbox[1]))
            p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
            cv2.rectangle(frame, p1, p2, (255, 0, 0), 2, 1)
        else:
            print('Tracking failure for Object ', i)
            # Remove the failed tracker (note: removing while iterating can skip the next tracker)
            trackers.remove(tracker)
            # For the first new object that enters, select a fresh ROI and track it
            if flag == 0:
                bbox = cv2.selectROI(frame, False)
                tracker = cv2.TrackerKCF_create()
                tracker.init(frame, bbox)
                trackers.append(tracker)
                flag = 1
    cv2.imshow("Tracking", frame)
    # Exit if ESC pressed
    k = cv2.waitKey(1) & 0xff
    if k == 27:
        break
cv2.destroyAllWindows()
print(trackers)


Project #3 - Another interesting project from Adrian's blog

Cloned the project and executed the demo. This code does not work on Windows 10, though. Someone has fixed the code; the working version is at link

python multi_object_tracking_fast.py --prototxt E:\Code_Repo\multiobject-tracking-dlib\mobilenet_ssd\MobileNetSSD_deploy.prototxt  --model E:\Code_Repo\multiobject-tracking-dlib\mobilenet_ssd\MobileNetSSD_deploy.caffemodel --video E:\Code_Repo\multiobject-tracking-dlib\race.mp4 --output E:\Code_Repo\multiobject-tracking-dlib\race_output_fast.avi

October 02, 2019

Day #278 - Object Tracking - TensorFlow Object Counting API

I came across this project. Fantastic work!! The detection part needs to be fine-tuned for the Indian scenario; the tracking performs decently. You can spot a few false positives: trucks on the other side of the lane are not detected, and Indian trucks are not well recognized. This can be handled by a custom detection model. Overall, the tracking and counting approach can be reused in multiple scenarios.

Clone the project - object_counting_api

vehicle_counting.py - Executed this for some of my highway videos.

is_color_recognition_enabled = 1 # set it to 1 for enabling the color prediction for the detected objects
roi = 600 # roi line position
deviation = 2 # the constant that represents the object counting area
Objects passing through the line are counted and the counter is incremented. I made minor changes to roi; the sketch below shows the idea behind these parameters.
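A sketch of the line-crossing check implied by roi and deviation (not the project's exact code; cy is the y-centre of a detected box):

# Count an object when its box centre falls within `deviation` pixels of the roi line
def crosses_roi_line(cy, roi=600, deviation=2):
    return abs(cy - roi) <= deviation

count = 0
for cy in [350, 599, 601, 700]:   # hypothetical box centres across frames
    if crosses_roi_line(cy):
        count += 1
print(count)   # -> 2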

Output of the same



Happy Learning!!!

Analytics in Elections

If analytics is used to target people, neutralize opinions, and create digital impressions, it leads to the creation of bias. This is not an ethical use of AI.

Analytics is needed to create affordable health care, forecast the economy, and provide good insights to develop people, the economy, and jobs. AI should be used in the right sense. AI for politics will benefit in the short term, but it is a curse in the long term.

When truth is neutralized with biased facts, the resulting power in the hands of the wrong leaders will be a curse for future generations.

Keep Thinking!!!