{"id":17724,"date":"2021-09-13T06:00:00","date_gmt":"2021-09-13T00:30:00","guid":{"rendered":"https:\/\/debuggercafe.com\/?p=17724"},"modified":"2024-09-15T20:25:49","modified_gmt":"2024-09-15T14:55:49","slug":"image-classification-using-tensorflow-pretrained-models","status":"publish","type":"post","link":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/","title":{"rendered":"Image Classification using TensorFlow Pretrained Models"},"content":{"rendered":"\n<p>In this tutorial, you will be learning how to carry out <strong><em>image classification using pretrained models in TensorFlow.<\/em><\/strong><\/p>\n\n\n\n<div class=\"wp-block-button is-style-outline center\"><a data-sumome-listbuilder-id=\"2b425a93-e14e-4cb3-a989-ab5cc118eaf5\" class=\"wp-block-button__link has-black-color has-luminous-vivid-orange-background-color has-text-color has-background\"><b>Download the Source Code for this Tutorial<\/b><\/a><\/div>\n\n\n\n<p>This is the seventh post in the series, <strong><em>Getting Started with TensorFlow.<\/em><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong><a href=\"https:\/\/debuggercafe.com\/introduction-to-tensors-in-tensorflow\/\" target=\"_blank\" rel=\"noreferrer noopener\">Introduction to Tensors in TensorFlow<\/a><\/strong>.<\/li>\n\n\n\n<li><a href=\"https:\/\/debuggercafe.com\/basics-of-tensorflow-gradienttape\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Basics of TensorFlow GradientTape<\/strong><\/a>.<\/li>\n\n\n\n<li><strong><a href=\"https:\/\/debuggercafe.com\/linear-regression-using-tensorflow-gradienttape\/\" target=\"_blank\" rel=\"noreferrer noopener\">Linear Regression using TensorFlow GradientTape<\/a><\/strong>.<\/li>\n\n\n\n<li><a href=\"https:\/\/debuggercafe.com\/training-your-first-neural-network-in-tensorflow\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Training Your First Neural Network in TensorFlow<\/strong><\/a>.<\/li>\n\n\n\n<li><strong><a 
href=\"https:\/\/debuggercafe.com\/convolutional-neural-network-in-tensorflow\/\" target=\"_blank\" rel=\"noreferrer noopener\">Convolutional Neural Network in TensorFlow<\/a><\/strong>.<\/li>\n\n\n\n<li><strong><a href=\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-on-custom-dataset\/\" target=\"_blank\" rel=\"noreferrer noopener\">Image Classification using TensorFlow on Custom Dataset<\/a><\/strong>.<\/li>\n\n\n\n<li>Image Classification using TensorFlow Pretrained Models.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>If you are new to TensorFlow, going through the previous posts will surely help you. In fact, starting from the <strong><a href=\"https:\/\/debuggercafe.com\/introduction-to-tensors-in-tensorflow\/\" target=\"_blank\" rel=\"noreferrer noopener\">first post<\/a><\/strong> is an even better idea. I am sure that you will be able to learn a lot. You will learn about tensors in TensorFlow, how to represent tensors with different ranks, and even carrying out tensor operations on the GPU.<\/p>\n\n\n\n<p><strong><em>In this post, we will cover the following topics.<\/em><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><em>We will start with a brief introduction to the models that we will use for image classification.<\/em>\n<ul class=\"wp-block-list\">\n<li><em>ResNet.<\/em><\/li>\n\n\n\n<li><em>VGG.<\/em><\/li>\n\n\n\n<li><em>MobileNet.<\/em><\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><em>Then we will start the coding part of the tutorial where we will use all three of these models for image classification.<\/em><\/li>\n\n\n\n<li><em>Finally, we will end with what we learn in this post and how to take this learning one step further.<\/em><\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>I hope that you are excited to follow along with the tutorial. Let&#8217;s jump into it then, without any more delay.<\/p>\n\n\n\n<p>First, we will discuss ResNets, VGG nets, and MobileNet architectures in short. 
Then we will move over to the coding part of the tutorial.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">VGG<\/h2>\n\n\n\n<p>The VGG models were introduced in the paper <strong><a href=\"https:\/\/arxiv.org\/pdf\/1409.1556v6.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Very Deep Convolutional Networks for Large-Scale Image Recognition<\/a><\/strong>&nbsp;by Karen Simonyan and Andrew Zisserman.<\/p>\n\n\n\n<p>At the time, the VGG models were state-of-the-art for image classification tasks. The models performed really well in the ImageNet 2014 challenge securing first and second places in localization and classification tasks respectively.<\/p>\n\n\n\n<p>This shows that VGG models were not only suitable for image classification but also for object detection.<\/p>\n\n\n\n<p>In the paper, the authors introduced not one but 6 different models.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/different-vgg-models..png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"732\" height=\"737\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/different-vgg-models..png\" alt=\"Different VGG models.\" class=\"wp-image-17785\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/different-vgg-models..png 732w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/different-vgg-models.-298x300.png 298w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/different-vgg-models.-150x150.png 150w\" sizes=\"auto, (max-width: 732px) 100vw, 732px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 1. Different VGG model architectures (<a href=\"https:\/\/arxiv.org\/pdf\/1409.1556v6.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Source<\/a>).<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>We can see there are 6 models, starting from VGG11 to VGG19. 
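Before comparing the configurations, it helps to see the common building pattern in code. The following is a minimal sketch of a single VGG-style stage in Keras; the filter count is illustrative, and every VGG configuration is essentially several such stages followed by the fully connected layers:

```python
import tensorflow as tf

# One VGG-style stage: a few 3x3 convolutions followed by 2x2 max-pooling.
# filters=64 matches the first VGG stage; deeper stages double the count.
def vgg_stage(x, filters, num_convs=2):
    for _ in range(num_convs):
        x = tf.keras.layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return tf.keras.layers.MaxPooling2D(2)(x)

inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = vgg_stage(inputs, filters=64)
model = tf.keras.Model(inputs, outputs)
print(model.output_shape)  # (None, 112, 112, 64)
```

Stacking such stages while halving the spatial size and doubling the filters is the entire recipe behind VGG11 through VGG19.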
The number in the name indicates the number of weight layers present in that particular model.<\/p>\n\n\n\n<p>As we can see, all the model architectures are pretty simple. The models are simple stacks of 3&#215;3 convolutional layers, max-pooling layers, and fully connected layers, followed by a final softmax output.<\/p>\n\n\n\n<p>The only difference among the models is in the convolutional layers. All the models have three fully connected layers, with the final one having 1000 units. This 1000 corresponds to the 1000 classes in the ImageNet classification dataset. Needless to say, we can easily modify the final fully connected layer according to our own requirements.<\/p>\n\n\n\n<p>Please consider the following tutorials if you want to learn the practical implementation of VGG models using PyTorch.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-debuggercafe wp-block-embed-debuggercafe\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"lTm7TfOgmD\"><a href=\"https:\/\/debuggercafe.com\/implementing-vgg11-from-scratch-using-pytorch\/\">Implementing VGG11 from Scratch using PyTorch<\/a><\/blockquote><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; clip: rect(1px, 1px, 1px, 1px);\" title=\"&#8220;Implementing VGG11 from Scratch using PyTorch&#8221; &#8212; DebuggerCafe\" src=\"https:\/\/debuggercafe.com\/implementing-vgg11-from-scratch-using-pytorch\/embed\/#?secret=7nFalUQVk4#?secret=lTm7TfOgmD\" data-secret=\"lTm7TfOgmD\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-debuggercafe wp-block-embed-debuggercafe\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"zI28woJ3ll\"><a 
href=\"https:\/\/debuggercafe.com\/training-vgg11-from-scratch-using-pytorch\/\">Training VGG11 from Scratch using PyTorch<\/a><\/blockquote><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; clip: rect(1px, 1px, 1px, 1px);\" title=\"&#8220;Training VGG11 from Scratch using PyTorch&#8221; &#8212; DebuggerCafe\" src=\"https:\/\/debuggercafe.com\/training-vgg11-from-scratch-using-pytorch\/embed\/#?secret=FLvMbTq2Wr#?secret=zI28woJ3ll\" data-secret=\"zI28woJ3ll\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-debuggercafe wp-block-embed-debuggercafe\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"6zsa1oViK5\"><a href=\"https:\/\/debuggercafe.com\/implementing-vgg-neural-networks-in-a-generalized-manner-using-pytorch\/\">Implementing VGG Neural Networks in a Generalized Manner using PyTorch<\/a><\/blockquote><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; clip: rect(1px, 1px, 1px, 1px);\" title=\"&#8220;Implementing VGG Neural Networks in a Generalized Manner using PyTorch&#8221; &#8212; DebuggerCafe\" src=\"https:\/\/debuggercafe.com\/implementing-vgg-neural-networks-in-a-generalized-manner-using-pytorch\/embed\/#?secret=kcjrxWkLCL#?secret=6zsa1oViK5\" data-secret=\"6zsa1oViK5\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">ResNet<\/h2>\n\n\n\n<p>Residual Neural Networks or ResNets first came into the picture through the paper&nbsp;<strong><a href=\"https:\/\/arxiv.org\/pdf\/1512.03385v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Deep Residual Learning for Image 
Recognition<\/a><\/strong> by Kaiming He et al.<\/p>\n\n\n\n<p>ResNets helped mitigate some of the pressing problems encountered when training deep neural networks, such as:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><em>The saturation of accuracy after training for a few epochs.<\/em><\/li>\n\n\n\n<li><em>The problem of vanishing gradients.<\/em><\/li>\n\n\n\n<li><em>Increase in training error after training for certain epochs and reaching a certain accuracy.<\/em><\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>Along with the above, the Residual Neural Networks also helped solve a few other problems.<\/p>\n\n\n\n<p><strong><em>But how did the authors achieve this?<\/em><\/strong> Instead of just stacking one neural network layer after the other, they chose a different path. The authors used:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><em>Skip connections.<\/em><\/li>\n\n\n\n<li><em>Identity mapping.<\/em><\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>They combined skip connections and identity mapping to come up with a residual learning block.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/residual-learning-block.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"363\" height=\"195\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/residual-learning-block.png\" alt=\"Residual learning block.\" class=\"wp-image-17788\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/residual-learning-block.png 363w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/residual-learning-block-300x161.png 300w\" sizes=\"auto, (max-width: 363px) 100vw, 363px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 2. 
Residual learning block of the ResNet model (<a href=\"https:\/\/arxiv.org\/pdf\/1512.03385v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Source<\/a>).<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>Residual Neural Networks apply identity mapping between layers to create shortcut connections. The blocks which consist of these connections are known as residual learning blocks.<\/p>\n\n\n\n<p>In fact, the authors came up with five different ResNet architectures, namely:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>ResNet18.<\/li>\n\n\n\n<li>ResNet34.<\/li>\n\n\n\n<li>ResNet50.<\/li>\n\n\n\n<li>ResNet101.<\/li>\n\n\n\n<li>ResNet152.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/resnet-model-architectures.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1170\" height=\"511\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/resnet-model-architectures.png\" alt=\"Different ResNet model architectures.\" class=\"wp-image-17790\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/resnet-model-architectures.png 1170w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/resnet-model-architectures-300x131.png 300w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/resnet-model-architectures-768x335.png 768w\" sizes=\"auto, (max-width: 1170px) 100vw, 1170px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 3. 
Different ResNet model architectures (<a href=\"https:\/\/arxiv.org\/pdf\/1512.03385v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Source<\/a>).<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>Also, the ResNet architectures were able to beat the previous simple neural networks like VGG networks in terms of accuracy when trained on the ImageNet dataset.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/reset101-vs-vgg16-pascal-voc-results.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"644\" height=\"196\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/reset101-vs-vgg16-pascal-voc-results.png\" alt=\"ResNet101 and VGG16 object detection results.\" class=\"wp-image-17792\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/reset101-vs-vgg16-pascal-voc-results.png 644w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/reset101-vs-vgg16-pascal-voc-results-300x91.png 300w\" sizes=\"auto, (max-width: 644px) 100vw, 644px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 4. ResNet101 and VGG16 object detection results (<a href=\"https:\/\/arxiv.org\/pdf\/1512.03385v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Source<\/a>).<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>Along with classification tasks, the Residual Neural Network models also have good generalization performance on object detection tasks. They are a very good choice of backbone for state-of-the-art object detection models like Faster RCNN. 
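The residual learning block in Figure 2 translates almost directly into Keras code. The following is a minimal sketch assuming the simple two-layer block; the layer sizes are illustrative, and the actual ResNet blocks also use batch normalization:

```python
import tensorflow as tf

# A minimal residual learning block: output = ReLU(F(x) + x)
def residual_block(x, filters):
    shortcut = x  # identity mapping: the input skips over the weight layers
    # two stacked 3x3 convolutions form the residual function F(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding='same')(y)
    # the skip connection adds the input back onto F(x)
    y = tf.keras.layers.Add()([y, shortcut])
    return tf.keras.layers.ReLU()(y)

inputs = tf.keras.Input(shape=(56, 56, 64))
outputs = residual_block(inputs, filters=64)
model = tf.keras.Model(inputs, outputs)
print(model.output_shape)  # (None, 56, 56, 64)
```

Because the block only needs to learn the residual F(x) rather than the full mapping, gradients can also flow directly through the shortcut, which is what eases the training of very deep networks.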
<\/p>\n\n\n\n<p>If you want to learn about ResNet in detail and know more about the paper, then consider going through the following post.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-debuggercafe wp-block-embed-debuggercafe\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"nS3ze80q6f\"><a href=\"https:\/\/debuggercafe.com\/residual-neural-networks-resnets-paper-explanation\/\">Residual Neural Networks &#8211; ResNets: Paper Explanation<\/a><\/blockquote><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; clip: rect(1px, 1px, 1px, 1px);\" title=\"&#8220;Residual Neural Networks &#8211; ResNets: Paper Explanation&#8221; &#8212; DebuggerCafe\" src=\"https:\/\/debuggercafe.com\/residual-neural-networks-resnets-paper-explanation\/embed\/#?secret=SgbHXtJi8a#?secret=nS3ze80q6f\" data-secret=\"nS3ze80q6f\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">MobileNets<\/h2>\n\n\n\n<p>Neural network models like VGG nets and ResNets are quite large and deep. They run well on devices with good compute resources, and GPUs are quite capable of that.<\/p>\n\n\n\n<p>But we also need models that are less compute-intensive and can run on mobile and embedded devices.<\/p>\n\n\n\n<p>MobileNets, introduced in the paper <strong><a href=\"https:\/\/arxiv.org\/pdf\/1704.04861.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications<\/a><\/strong> by Howard et al., are just perfect for that. 
<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/application-of-mobilenets.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1254\" height=\"499\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/application-of-mobilenets.png\" alt=\"Applications of MobileNets.\" class=\"wp-image-17796\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/application-of-mobilenets.png 1254w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/application-of-mobilenets-300x119.png 300w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/application-of-mobilenets-768x306.png 768w\" sizes=\"auto, (max-width: 1254px) 100vw, 1254px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 5. Applications of MobileNets (<a href=\"https:\/\/arxiv.org\/pdf\/1704.04861.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Source<\/a>).<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>MobileNets use depthwise separable convolution that leads to lightweight yet deep neural networks.<\/p>\n\n\n\n<p>Although many new lightweight neural networks have come up since the first publication of MobileNets. Still, we can say that they are some of the first models catering to deep learning in computer vision for mobile devices which led to further improvement.<\/p>\n\n\n\n<p>MobileNets are quite effective for a wide range of vision applications that benefit from deep learning and neural networks. 
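To get a feel for why the depthwise separable factorization is so much lighter, we can compare parameter counts in Keras. The channel sizes below are illustrative, not taken from the paper:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(112, 112, 32))

# Standard 3x3 convolution: 3*3*32*64 weights (+ 64 biases)
standard = tf.keras.Model(
    inputs, tf.keras.layers.Conv2D(64, 3, padding='same')(inputs))

# Depthwise separable version: one 3x3 filter per input channel,
# then a 1x1 pointwise convolution to mix the channels
x = tf.keras.layers.DepthwiseConv2D(3, padding='same')(inputs)
x = tf.keras.layers.Conv2D(64, 1)(x)
separable = tf.keras.Model(inputs, x)

print(standard.count_params())   # 18496
print(separable.count_params())  # 2432
```

For this small example, the factorized version needs roughly 8x fewer parameters for the same input and output shapes, and the savings grow with the number of channels.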
The applications range from object detection and image classification to face applications and even large-scale geo-localization.<\/p>\n\n\n\n<p>The two techniques that make MobileNets so efficient are depthwise convolution and pointwise convolution.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/depthwisw-and-poitwise-convolution-in-mobilenets.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"618\" height=\"777\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/depthwisw-and-poitwise-convolution-in-mobilenets.png\" alt=\"Depthwise and pointwise convolution in MobileNets.\" class=\"wp-image-17798\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/depthwisw-and-poitwise-convolution-in-mobilenets.png 618w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/depthwisw-and-poitwise-convolution-in-mobilenets-239x300.png 239w\" sizes=\"auto, (max-width: 618px) 100vw, 618px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 6. Depthwise and pointwise convolution in MobileNets (<a href=\"https:\/\/arxiv.org\/pdf\/1704.04861.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Source<\/a>).<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>We will not dive into the details of the paper or the approach here, as doing them proper justice would require a separate post. Still, the paper is quite interesting, and if you want to jump into the details, you should surely give it a read.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">TensorFlow Pretrained Models for Image Classification<\/h2>\n\n\n\n<p>Till now, we have discussed VGG nets, ResNets, and MobileNets in brief. 
And we know that we will be using these models for image classification.<\/p>\n\n\n\n<p>Fortunately, TensorFlow already provides versions of these models which have been pretrained on the <strong><a href=\"https:\/\/www.image-net.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">ImageNet<\/a><\/strong> dataset. This means that we can run a forward pass through any of these models by providing an image, and there is a very high chance that the model will be able to predict the class of the image from its wide range of 1000 classes.<\/p>\n\n\n\n<p>TensorFlow has a host of other pretrained models as well. You can visit the <strong><a href=\"https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/keras\/applications\" target=\"_blank\" rel=\"noreferrer noopener\">official docs<\/a><\/strong> to get the entire list. <\/p>\n\n\n\n<p>In this post, however, we will focus on VGG16, ResNet50, and MobileNetV2.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Directory Structure<\/h2>\n\n\n\n<p>We will use the following directory structure in this tutorial.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\u251c\u2500\u2500 image_classification.py\n\u251c\u2500\u2500 input\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 image_1.jpg\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 image_2.jpg\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 image_3.jpg\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 image_4.jpg\n\u251c\u2500\u2500 outputs\n\u2502\u00a0\u00a0 ...<\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>We have one Python script. This will contain the code to classify images using the three TensorFlow pretrained models.<\/li>\n\n\n\n<li>Then we have the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">input<\/code> folder which contains the images we will use for classification. 
This contains four images on which we will run the image classification models.<\/li>\n\n\n\n<li>Finally, we have the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">outputs<\/code> folder that will contain the classified image outputs after passing through the neural network models.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>Now that we are done with all the preliminary stuff, let&#8217;s jump into the coding part of the tutorial.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Image Classification using TensorFlow Pretrained Models<\/h2>\n\n\n\n<p>All the code that we will write will go into the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">image_classification.py<\/code> Python script.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Required Imports<\/h3>\n\n\n\n<p>Let&#8217;s start by importing all the libraries and modules that we will need along the way.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"image_classification.py\" data-enlighter-group=\"cls_1\">import tensorflow as tf\nimport matplotlib.pyplot as plt\nimport glob\nimport numpy as np\nimport argparse<\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>We will use <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">matplotlib<\/code> to create the final subplot of the images with the predicted labels as the titles.<\/li>\n\n\n\n<li>And <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">argparse<\/code> will help us create and parse command line 
arguments.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Create the Argument Parser<\/h3>\n\n\n\n<p>While executing the Python script, we will provide the model name that we want to use for image classification. For that, we need to construct the argument parser.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"6\" data-enlighter-title=\"image_classification.py\" data-enlighter-group=\"cls_2\">parser = argparse.ArgumentParser()\nparser.add_argument('-m', '--model', default='resnet50', \n    choices=['resnet50', 'vgg16', 'mobilenet'])\nargs = vars(parser.parse_args())<\/pre>\n\n\n\n<p>We can provide any one of the three choices, that is:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">resnet50<\/code>.<\/li>\n\n\n\n<li><code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">vgg16<\/code>.<\/li>\n\n\n\n<li><code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">mobilenet<\/code>.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>And accordingly, the appropriate pretrained model will be used.<\/p>\n\n\n\n<p>Now, let&#8217;s create a dictionary that loads the respective model according to the model name that we pass to the command line argument while executing the script.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"10\" data-enlighter-title=\"image_classification.py\" data-enlighter-group=\"cls_3\"># model dictionary to create the mapping from model name...\n# ... 
to actual TensorFlow model instance\nmodels_dict = {\n    'resnet50': tf.keras.applications.resnet50.ResNet50(weights='imagenet'),\n    'vgg16': tf.keras.applications.vgg16.VGG16(weights='imagenet'),\n    'mobilenet': tf.keras.applications.mobilenet_v2.MobileNetV2(weights='imagenet'),\n}\n\nprint(f\"Using {args['model']} model...\")\n# get all the image paths\nimage_paths = glob.glob('input\/*')\nprint(f\"Found {len(image_paths)} images...\")<\/pre>\n\n\n\n<p>In the above code block, we have a <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">models_dict<\/code> dictionary. The keys are the model names that we pass through the command line argument. And the values are the corresponding pretrained model instances that we create using <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">tf.keras.applications<\/code>. <\/p>\n\n\n\n<p>One thing to note in the above code block is the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">weights<\/code> argument that we are passing while loading the model. This means that we are loading the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">imagenet<\/code> weights, as each model has been pretrained on the ImageNet dataset. 
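One thing to be aware of: as written, the dictionary builds all three models (and downloads all three sets of weights) up front, even though we use only one of them. If that matters, a variation is to map the names to the model classes instead and instantiate only the chosen one. This is just a sketch, not the script used in this post:

```python
import tensorflow as tf

# Map model names to constructors, not instances, so that only the
# selected model is actually built.
model_builders = {
    'resnet50': tf.keras.applications.resnet50.ResNet50,
    'vgg16': tf.keras.applications.vgg16.VGG16,
    'mobilenet': tf.keras.applications.mobilenet_v2.MobileNetV2,
}

chosen = 'mobilenet'  # in the script this would come from args['model']
# In the real script you would pass weights='imagenet' here; weights=None
# keeps this sketch runnable without downloading the pretrained weights.
model = model_builders[chosen](weights=None, input_shape=(224, 224, 3))
print(model.name)
```

With this layout, switching the command line flag only ever loads the single model you asked for.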
If we provide <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">weights=None<\/code>, then the weights will be initialized randomly and the model will not be able to classify the image correctly.<\/p>\n\n\n\n<p>After the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">models_dict<\/code> dictionary:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>We are printing the pretrained model that we are using for image classification on <strong>line 18<\/strong>.<\/li>\n\n\n\n<li>Using <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">glob<\/code> to store all the image paths in the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">image_paths<\/code> list on <strong>line 20<\/strong>. And printing the total number of images that we have.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Classify the Images using TensorFlow Pretrained Models<\/h3>\n\n\n\n<p>Next, we will read the images and pass them through the model to get the predictions. This follows a few simple steps.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Loop through each of the image paths.<\/li>\n\n\n\n<li>Load and resize the image to appropriate dimensions.<\/li>\n\n\n\n<li>Further preprocess the image using TensorFlow utilities.<\/li>\n\n\n\n<li>Load the pretrained model according to the model name passed through the command line argument.<\/li>\n\n\n\n<li>Forward pass the image through the pretrained model to get the initial predictions.<\/li>\n\n\n\n<li>Process the predictions using TensorFlow&#8217;s ImageNet utilities to get the final predictions.<\/li>\n\n\n\n<li>Create a subplot of all the images with the predicted label as the title.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>The above steps will become much clearer once we write the code. 
Let&#8217;s write the code and then get to the explanation part.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"26\" data-enlighter-title=\"image_classification.py\" data-enlighter-group=\"cls_4\">for i, image_path in enumerate(image_paths):\n    print(f\"Processing and classifying on {image_path.split('\/')[-1]}\")\n    # read image using matplotlib to keep an original RGB copy\n    orig_image = plt.imread(image_path)\n    # read and resize the image\n    image = tf.keras.preprocessing.image.load_img(image_path, \n        target_size=(224, 224))\n    # add batch dimension\n    image = np.expand_dims(image, axis=0)\n    # preprocess the image using TensorFlow utils\n    image = tf.keras.applications.imagenet_utils.preprocess_input(image)\n\n    # load the model\n    model = models_dict[args['model']]\n    # forward pass through the model to get the predictions\n    predictions = model.predict(image)\n    processed_preds = tf.keras.applications.imagenet_utils.decode_predictions(\n        preds=predictions\n    )\n    # print(f\"Original predictions: {predictions}\")\n    print(f\"Processed predictions: {processed_preds}\")\n    print('-'*50)\n    \n    # create subplot of all images\n    plt.subplot(2, 2, i+1)\n    plt.imshow(orig_image)\n    plt.title(f\"{processed_preds[0][0][1]}, {processed_preds[0][0][2] *100:.3f}\")\n    plt.axis('off')\n\nplt.savefig(f\"outputs\/{args['model']}_output.png\")\nplt.show()\nplt.close()<\/pre>\n\n\n\n<h4 class=\"wp-block-heading\">Code Explanation<\/h4>\n\n\n\n<p>Let&#8217;s go through the code.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>On <strong>line 29<\/strong>, we are reading the image using Matplotlib and keeping an original copy which we will use at the end for visualization.<\/li>\n\n\n\n<li><strong>Line 31<\/strong> uses TensorFlow&#8217;s <code 
data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">load_img<\/code> function to read and resize the image appropriately. Generally, pretrained models require the images to be resized to a specific shape. This mostly corresponds to the size to which the ImageNet images were resized to when the models were trained on the ImageNet dataset. All the models that we use here require images of size 224&#215;224. There are few other models like Inception networks which need images of size 299&#215;299.<\/li>\n\n\n\n<li>Then we use NumPy to add an extra batch dimension to the image. This makes the image shape as 1x224x224x3.<\/li>\n\n\n\n<li>On <strong>line 36<\/strong>, we use the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">imagenet_utils<\/code> to preproess the image properly one final time.<\/li>\n\n\n\n<li>Then we load the model and carry out the prediction on <strong>line 41<\/strong>. This gives us an intial set of 1000 <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">predictions<\/code>. We have to filter out the predictions to get the most appropriate one.<\/li>\n\n\n\n<li>We then use the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">decode_predictions<\/code> function from the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">imagenet_utils<\/code> which returns the top 5 predictions according to the confidence score by default.<\/li>\n\n\n\n<li>Starting from <strong>line 50<\/strong>, we create a subplot of the image and add the predicted label as the title. We also save the image by appending the model name to the image file path. This will help us later distinguish which models classified which images.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>That&#8217;s all we need for the coding part.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Executing the Python Script<\/h3>\n\n\n\n<p>Let&#8217;s now execute the Python script with a different model flag each time. 
Open your terminal in the current working directory and let&#8217;s start with the VGG16 model.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">python image_classification.py --model vgg16<\/pre>\n\n\n\n<p>The following figure shows the predictions for each of the images.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/vgg16_output.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"480\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/vgg16_output.png\" alt=\"Image Classification using TensorFlow\" class=\"wp-image-17801\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/vgg16_output.png 640w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/vgg16_output-300x225.png 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 7. Predictions of the VGG16 TensorFlow pretrained model.<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>The VGG16 model predicted the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">gondola<\/code> and <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">tiger<\/code> with pretty high confidence. It also predicted the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">soccer_ball<\/code> correctly but with lower confidence. But for the car, it predicted it as a <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">grille<\/code>. 
The most plausible reason for this is the front metal grille that is so prominent in the car image.<\/p>\n\n\n\n<p>Let&#8217;s check with the ResNet50 model now.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">python image_classification.py --model resnet50<\/pre>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/resnet50_output.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"480\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/resnet50_output.png\" alt=\"Predictions of the ResNet50 TensorFlow pretrained model.\" class=\"wp-image-17803\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/resnet50_output.png 640w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/resnet50_output-300x225.png 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 8. Predictions of the ResNet50 TensorFlow pretrained model.<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>The ResNet50 model is predicting the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">gondola<\/code> and <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">tiger<\/code> with even higher confidence. Interestingly, it is also predicting the car as <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">grille<\/code> with high confidence. But this time, it is detecting the ball as a <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">rugby_ball<\/code> instead of a <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">soccer_ball<\/code>. 
<\/p>\n\n\n\n<p>Finally, the MobileNetv2 model.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">python image_classification.py --model mobilenet<\/pre>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/mobilenet_output.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"480\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/mobilenet_output.png\" alt=\"Predictions of the MobileNetV2 TensorFlow pretrained model.\" class=\"wp-image-17805\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/mobilenet_output.png 640w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/mobilenet_output-300x225.png 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 9. Predictions of the MobileNetV2 TensorFlow pretrained model.<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>That&#8217;s odd! The MobileNetv2 model is predicting everything incorrectly. And that too with pretty low confidence. We know that MobileNets are smaller models that trade some accuracy for faster predictions. But not predicting even a tiger correctly is a bit strange. There are a few larger and newer versions of MobileNet like MobileNetV3 Large. Maybe you can give that model a try and see if the predictions are better.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">A Few Takeaways<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>We saw how different models predicted the same images with different confidence scores. 
Architectures like ResNets generally perform better than simpler VGG models.<\/li>\n\n\n\n<li>Also, smaller models like MobileNets may not be well suited for complex images.<\/li>\n\n\n\n<li>You can try a few other models like VGG19 and some Inception models as well. If you do, let others know in the comment section.<\/li>\n\n\n\n<li>You may also try larger versions of MobileNets like the MobileNetV3 Large model and check whether it predicts any of the images correctly. <\/li>\n\n\n\n<li>Also, try playing around by giving a few new images as inputs to the models. Analyze how they perform and how the image complexity affects the predictions.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Summary and Conclusion<\/h2>\n\n\n\n<p>In this tutorial, you learned about image classification using TensorFlow pretrained models. We used the VGG16, ResNet50, and MobileNetV2 models which were pretrained on the ImageNet dataset. We saw how they performed on different images and how smaller models like MobileNets performed worse than larger models like VGG16 and ResNet50. I hope that you learned something new from this tutorial.<\/p>\n\n\n\n<p>If you have any doubts, thoughts, or suggestions, then please leave them in the comment section. I will surely address them.<\/p>\n\n\n\n<p>You can contact me using the <strong><a aria-label=\"Contact (opens in a new tab)\" href=\"https:\/\/debuggercafe.com\/contact-us\/\" target=\"_blank\" rel=\"noreferrer noopener\">Contact<\/a><\/strong> section. 
You can also find me on <strong><a aria-label=\"LinkedIn (opens in a new tab)\" href=\"https:\/\/www.linkedin.com\/in\/sovit-rath\/\" target=\"_blank\" rel=\"noreferrer noopener\">LinkedIn<\/a><\/strong>, and <strong><a href=\"https:\/\/x.com\/SovitRath5\" target=\"_blank\" rel=\"noreferrer noopener\">X<\/a><\/strong>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we will carry out image classification using TensorFlow pretrained models like VGG16, ResNet50, and MobileNetv2.<\/p>\n","protected":false},"author":1,"featured_media":17811,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[76,113,59,119,67,54],"tags":[77,61,123,145,206,139,168,55,202,148,166],"class_list":["post-17724","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-computer-vision","category-convolutional-neural-networks","category-deep-learning","category-image-classification","category-keras","category-tensorflow","tag-computer-vision","tag-deep-learning","tag-image-classification","tag-mobilenet","tag-mobilenetv2","tag-resnet","tag-resnet50","tag-tensorflow","tag-tf-keras","tag-vgg","tag-vgg16"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.9 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Image Classification using TensorFlow Pretrained Models<\/title>\n<meta name=\"description\" content=\"In this tutorial, we will carry out image classification using TensorFlow pretrained models like VGG16, ResNet50, and MobileNetv2.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Image 
Classification using TensorFlow Pretrained Models\" \/>\n<meta property=\"og:description\" content=\"In this tutorial, we will carry out image classification using TensorFlow pretrained models like VGG16, ResNet50, and MobileNetv2.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/\" \/>\n<meta property=\"og:site_name\" content=\"DebuggerCafe\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/profile.php?id=100013731104496\" \/>\n<meta property=\"article:published_time\" content=\"2021-09-13T00:30:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-09-15T14:55:49+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/Image-Classification-using-TensorFlow-Pretrained-Models-e1629682345227.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"563\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Sovit Ranjan Rath\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@SovitRath5\" \/>\n<meta name=\"twitter:site\" content=\"@SovitRath5\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sovit Ranjan Rath\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"1 minute\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/\"},\"author\":{\"name\":\"Sovit Ranjan Rath\",\"@id\":\"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752\"},\"headline\":\"Image Classification using TensorFlow Pretrained Models\",\"datePublished\":\"2021-09-13T00:30:00+00:00\",\"dateModified\":\"2024-09-15T14:55:49+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/\"},\"wordCount\":2348,\"commentCount\":2,\"image\":{\"@id\":\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/Image-Classification-using-TensorFlow-Pretrained-Models-e1629682345227.png\",\"keywords\":[\"Computer Vision\",\"Deep Learning\",\"Image Classification\",\"MobileNet\",\"MobileNetV2\",\"ResNet\",\"ResNet50\",\"TensorFlow\",\"tf.keras\",\"VGG\",\"VGG16\"],\"articleSection\":[\"Computer Vision\",\"Convolutional Neural Networks\",\"Deep Learning\",\"Image Classification\",\"Keras\",\"TensorFlow\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/\",\"url\":\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/\",\"name\":\"Image Classification using TensorFlow Pretrained 
Models\",\"isPartOf\":{\"@id\":\"https:\/\/debuggercafe.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/Image-Classification-using-TensorFlow-Pretrained-Models-e1629682345227.png\",\"datePublished\":\"2021-09-13T00:30:00+00:00\",\"dateModified\":\"2024-09-15T14:55:49+00:00\",\"author\":{\"@id\":\"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752\"},\"description\":\"In this tutorial, we will carry out image classification using TensorFlow pretrained models like VGG16, ResNet50, and MobileNetv2.\",\"breadcrumb\":{\"@id\":\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#primaryimage\",\"url\":\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/Image-Classification-using-TensorFlow-Pretrained-Models-e1629682345227.png\",\"contentUrl\":\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/Image-Classification-using-TensorFlow-Pretrained-Models-e1629682345227.png\",\"width\":1000,\"height\":563,\"caption\":\"Image Classification using TensorFlow Pretrained 
Models\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/debuggercafe.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Image Classification using TensorFlow Pretrained Models\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/debuggercafe.com\/#website\",\"url\":\"https:\/\/debuggercafe.com\/\",\"name\":\"DebuggerCafe\",\"description\":\"Machine Learning and Deep Learning\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/debuggercafe.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752\",\"name\":\"Sovit Ranjan Rath\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/debuggercafe.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/f71ca13ec56d630e7d8045e8b846396068791aa204936c3d74d721c6dd2b4d3c?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/f71ca13ec56d630e7d8045e8b846396068791aa204936c3d74d721c6dd2b4d3c?s=96&d=mm&r=g\",\"caption\":\"Sovit Ranjan Rath\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Image Classification using TensorFlow Pretrained Models","description":"In this tutorial, we will carry out image classification using TensorFlow pretrained models like VGG16, ResNet50, and MobileNetv2.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/","og_locale":"en_US","og_type":"article","og_title":"Image Classification using TensorFlow Pretrained Models","og_description":"In this tutorial, we will carry out image classification using TensorFlow pretrained models like VGG16, ResNet50, and MobileNetv2.","og_url":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/","og_site_name":"DebuggerCafe","article_publisher":"https:\/\/www.facebook.com\/profile.php?id=100013731104496","article_published_time":"2021-09-13T00:30:00+00:00","article_modified_time":"2024-09-15T14:55:49+00:00","og_image":[{"width":1000,"height":563,"url":"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/Image-Classification-using-TensorFlow-Pretrained-Models-e1629682345227.png","type":"image\/png"}],"author":"Sovit Ranjan Rath","twitter_card":"summary_large_image","twitter_creator":"@SovitRath5","twitter_site":"@SovitRath5","twitter_misc":{"Written by":"Sovit Ranjan Rath","Est. 
reading time":"1 minute"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#article","isPartOf":{"@id":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/"},"author":{"name":"Sovit Ranjan Rath","@id":"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752"},"headline":"Image Classification using TensorFlow Pretrained Models","datePublished":"2021-09-13T00:30:00+00:00","dateModified":"2024-09-15T14:55:49+00:00","mainEntityOfPage":{"@id":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/"},"wordCount":2348,"commentCount":2,"image":{"@id":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#primaryimage"},"thumbnailUrl":"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/Image-Classification-using-TensorFlow-Pretrained-Models-e1629682345227.png","keywords":["Computer Vision","Deep Learning","Image Classification","MobileNet","MobileNetV2","ResNet","ResNet50","TensorFlow","tf.keras","VGG","VGG16"],"articleSection":["Computer Vision","Convolutional Neural Networks","Deep Learning","Image Classification","Keras","TensorFlow"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/","url":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/","name":"Image Classification using TensorFlow Pretrained 
Models","isPartOf":{"@id":"https:\/\/debuggercafe.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#primaryimage"},"image":{"@id":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#primaryimage"},"thumbnailUrl":"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/Image-Classification-using-TensorFlow-Pretrained-Models-e1629682345227.png","datePublished":"2021-09-13T00:30:00+00:00","dateModified":"2024-09-15T14:55:49+00:00","author":{"@id":"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752"},"description":"In this tutorial, we will carry out image classification using TensorFlow pretrained models like VGG16, ResNet50, and MobileNetv2.","breadcrumb":{"@id":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#primaryimage","url":"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/Image-Classification-using-TensorFlow-Pretrained-Models-e1629682345227.png","contentUrl":"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/08\/Image-Classification-using-TensorFlow-Pretrained-Models-e1629682345227.png","width":1000,"height":563,"caption":"Image Classification using TensorFlow Pretrained Models"},{"@type":"BreadcrumbList","@id":"https:\/\/debuggercafe.com\/image-classification-using-tensorflow-pretrained-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/debuggercafe.com\/"},{"@type":"ListItem","position":2,"name":"Image Classification using TensorFlow Pretrained 
Models"}]},{"@type":"WebSite","@id":"https:\/\/debuggercafe.com\/#website","url":"https:\/\/debuggercafe.com\/","name":"DebuggerCafe","description":"Machine Learning and Deep Learning","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/debuggercafe.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752","name":"Sovit Ranjan Rath","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/debuggercafe.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/f71ca13ec56d630e7d8045e8b846396068791aa204936c3d74d721c6dd2b4d3c?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/f71ca13ec56d630e7d8045e8b846396068791aa204936c3d74d721c6dd2b4d3c?s=96&d=mm&r=g","caption":"Sovit Ranjan Rath"}}]}},"_links":{"self":[{"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/posts\/17724","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/comments?post=17724"}],"version-history":[{"count":88,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/posts\/17724\/revisions"}],"predecessor-version":[{"id":38017,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/posts\/17724\/revisions\/38017"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/media\/17811"}],"wp:attachment":[{"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/media?parent=17724"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/debuggercafe.com\/wp
-json\/wp\/v2\/categories?post=17724"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/tags?post=17724"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}