"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

December 15, 2022

Edge Devices Notes

Paper - DeepEdgeBench: Benchmarking Deep Neural Networks on Edge Devices

  • Nvidia’s Jetson Nano
  • Asus Tinker Edge R
  • Raspberry Pi 4
  • Google Coral Dev Board

Nvidia Jetson Nano:

  • Nvidia deploys its Graphics Processing Unit (GPU) modules to the edge for accelerated AI performance
  • The Jetson Nano’s GPU is based on the Maxwell microarchitecture (GM20B) and comes with one streaming multiprocessor (SM) with 128 CUDA cores
  • Compared to the other target edge devices in this work, the Jetson Nano stands out with a fully utilizable GPU

Google Coral Dev Board

  • The Google Coral Dev Board is one of the offerings which features the “edge” version of the TPU (tensor processing unit)
  • To make use of the dedicated unit, models in a supported format can be converted to run with the PyCoral and TensorFlow Lite frameworks

DeepEdgeBench Testing Framework

Raspberry Pi 4:

  • The Raspberry Pi 4 comes with Gigabit Ethernet, along with onboard wireless networking and Bluetooth.
  • The 4th generation is available in multiple variants with different RAM sizes (2, 4 and 8 GB)

TensorFlow Lite: TensorFlow Lite was developed to run machine learning models on microcontrollers and other Internet of Things (IoT) devices with only a few kilobytes of memory

TensorRT: NVIDIA TensorRT [36] is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs)

TensorFlow: TensorFlow [24] is an open-source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them.
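The dataflow-graph idea above can be illustrated with a tiny stand-in evaluator (plain Python, no TensorFlow required; all class and function names here are invented for illustration): nodes hold operations, and the incoming edges carry the values ("tensors") flowing between them.

```python
# Minimal dataflow-graph sketch: nodes are math operations, edges are
# the values flowing between them. This only mimics the idea behind
# TensorFlow's graph model; it is not TensorFlow's actual API.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # callable implementing the operation
        self.inputs = inputs  # upstream nodes (the incoming edges)

    def eval(self):
        # Evaluate upstream nodes first, then apply this node's op.
        return self.op(*(n.eval() for n in self.inputs))

def const(value):
    # A source node with no inputs that just emits a value.
    return Node(lambda: value)

# Build the graph for (2 + 3) * 4, then run it.
a, b, c = const(2.0), const(3.0), const(4.0)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)
print(mul.eval())  # → 20.0
```

Evaluating the final node walks the graph backwards, which is exactly how a deferred-execution graph separates "describe the computation" from "run the computation".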


Test Scenarios

  • Average time spent on inference of a single image, over 5,000 images
  • Total time spent on inference of 5,000 images
  • Power during idle state (LAN on and off)
  • Average power during inference of 1,000 x 5 images
  • Total power consumption during inference of 1,000 x 5 images
  • Accuracy for each platform-device combination

Extend the FFmpeg Framework to Analyze Media Content

ML with Intel OpenVINO Toolkit — Super-Resolution

Keep Exploring!!
