The TensorFlow website explains post-training quantization in some detail, but says little about combining it with transfer learning. The sample on the Coral website uses TensorFlow 1.x and requires running the transfer learning inside a Docker container.
In this blog post, I am going to demonstrate how to perform post-training quantization using TensorFlow 2.0 for MobileNet V1 and V2. All the steps can be performed in a Colab notebook (thus making use of a free GPU from Google, thank you Google!). The steps are almost the same for both versions; only the base model changes. The TFLite model is then compiled into an Edge TPU TFLite model, which can be used for real-time inferencing.
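As a minimal sketch of the quantization and conversion step, assuming a trained Keras model `model` and an array `calibration_images` of preprocessed training images (both hypothetical names; the transfer learning step that produces such a model is sketched in the next section). Note that the exact converter flags shifted a little across early TF 2.x releases:

```python
import numpy as np
import tensorflow as tf

# `model` is assumed to be a trained tf.keras model and
# `calibration_images` a float32 array of ~100 preprocessed training
# images of shape (N, 224, 224, 3) -- both hypothetical names here.
def representative_dataset():
    for image in calibration_images:
        # The converter expects a list of input tensors per sample.
        yield [np.expand_dims(image, axis=0)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# The Edge TPU only runs fully integer-quantized ops.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_model)
```

The quantized file is then compiled for the Edge TPU with the Coral compiler, e.g. `edgetpu_compiler model_quant.tflite`, which writes `model_quant_edgetpu.tflite` along with a log showing how many ops were mapped onto the TPU.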
For both models, I am using the flower dataset to perform the transfer learning. Readers can use this as a base for other classification tasks. In a future blog post, I may try more advanced models such as Inception or ResNet. A lot depends on the Edge TPU compiler, since every layer in the model must be one the Edge TPU supports for the compiler to map it onto the TPU.
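As a rough sketch of how this transfer learning step might look in Keras (the dataset URL is the standard TensorFlow flower_photos archive; the batch size, single dense head, and other choices here are illustrative, not necessarily the exact ones used in the notebook):

```python
import tensorflow as tf

IMG_SIZE = 224
BATCH_SIZE = 32

# Download and extract the flower dataset (5 classes: daisy,
# dandelion, roses, sunflowers, tulips).
data_dir = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/'
    'example_images/flower_photos.tgz',
    untar=True)

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input,
    validation_split=0.2)
train_gen = datagen.flow_from_directory(
    data_dir, target_size=(IMG_SIZE, IMG_SIZE),
    batch_size=BATCH_SIZE, subset='training')

# Frozen ImageNet base; swap in tf.keras.applications.MobileNet
# (and its matching preprocess_input) for the V1 experiment.
base = tf.keras.applications.MobileNetV2(
    input_shape=(IMG_SIZE, IMG_SIZE, 3),
    include_top=False, weights='imagenet')
base.trainable = False

# Only the new classification head is trained.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax'),  # 5 flower classes
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_gen, epochs=20)
```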
Some observations:
The final Edge TPU TFLite model is smaller for MobileNet V2: about 3.6 MB for V1 versus about 2 MB for V2.
The training accuracy is higher for V2 than for V1: after 20 epochs, about 93% for V1 versus about 97% for V2.
V2 looks like a definite improvement over V1.
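Finally, a minimal sketch of the real-time inferencing mentioned above, run on a device with the Edge TPU runtime installed; it assumes the `tflite_runtime` package and the compiled `model_quant_edgetpu.tflite` from the conversion step:

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the Edge TPU-compiled model with the Edge TPU delegate
# (libedgetpu.so.1 is the standard name on Linux).
interpreter = tflite.Interpreter(
    model_path='model_quant_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A single uint8 image matching the model's 224x224x3 input
# (a random image here, just to exercise the pipeline).
image = np.random.randint(0, 256, size=input_details[0]['shape'],
                          dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], image)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]['index'])
print('Predicted class:', scores.argmax())
```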