The TensorFlow website explains post-training quantization in some detail, but says little about combining it with transfer learning, and the sample on the Coral website uses TensorFlow 1.x and requires running the transfer learning inside a Docker container. In this blog post, I am going to demonstrate how to perform transfer learning followed by post-training quantization using TensorFlow 2.0 for MobileNet V1 and V2. All the steps can be performed in a Colab notebook (thus making use of a free GPU from Google, thank you Google!!!). The steps are almost identical for both versions; only the base model changes. The tflite model is then compiled into an Edge TPU tflite model that can be used for realtime inferencing. For both models, I use the flower dataset to perform the transfer learning, and readers can use this as a base for other classification tasks. In a future blog post, I may try more advanced models such as Inception, ResNet, etc. A lot depends on the Edge TPU compiler.
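The overall flow looks roughly like the sketch below: freeze a pre-trained MobileNet backbone, attach a small classification head for the 5 flower classes, train it, and then run the TensorFlow Lite converter with full-integer post-training quantization (which the Edge TPU requires). This is a minimal sketch, not the exact notebook code: the layer sizes, the random stand-in for the representative dataset, and `weights=None` (used here so the snippet runs offline; the post uses ImageNet weights) are all illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Base model: MobileNetV2 with the top classifier removed.
# (For MobileNet V1, swap in tf.keras.applications.MobileNet;
#  in the actual notebook, use weights="imagenet".)
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False  # freeze the convolutional backbone

# New classification head for the 5 flower classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # flower dataset

# Post-training full-integer quantization, as required by the Edge TPU.
# In practice the representative dataset should yield a few hundred
# real training images; random data here is only a placeholder.
def representative_data():
    for _ in range(8):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()  # bytes of the quantized flatbuffer
```

The resulting `.tflite` file is then passed to the `edgetpu_compiler` tool to produce the Edge TPU model.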
I have also created a Colab notebook that performs transfer learning using MobileNet V1 and then converts the model from h5 to tflite, and then to kmodel. The kmodel file can be downloaded to the Maix Go board (running MaixPy) for realtime image classification. The YouTube video of the flower classification can be found here.
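The h5 to tflite step can be sketched as below. The tiny stand-in model and the file names are illustrative assumptions (substitute the trained MobileNet V1 classifier and your own paths); the tflite to kmodel step is done outside TensorFlow with nncase's `ncc` compiler, whose exact flags vary by version.

```python
import tensorflow as tf

# Stand-in for the trained MobileNet V1 flower classifier
# (a tiny model so the example runs quickly).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.save("flowers.h5")  # the h5 file produced by training

# h5 -> tflite
restored = tf.keras.models.load_model("flowers.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(restored)
tflite_model = converter.convert()
with open("flowers.tflite", "wb") as f:
    f.write(tflite_model)

# tflite -> kmodel is then done with nncase's ncc tool, e.g.
# (flags are version-dependent, check the nncase docs):
#   ncc compile flowers.tflite flowers.kmodel -i tflite -t k210
```

The kmodel file is what gets copied onto the Maix Go for on-device inference.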