{"id":20828,"date":"2022-01-24T06:00:00","date_gmt":"2022-01-24T00:30:00","guid":{"rendered":"https:\/\/debuggercafe.com\/?p=20828"},"modified":"2024-09-15T20:33:26","modified_gmt":"2024-09-15T15:03:26","slug":"brain-mri-classification-using-pytorch-efficientnetb0","status":"publish","type":"post","link":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/","title":{"rendered":"Brain MRI Classification using PyTorch EfficientNetB0"},"content":{"rendered":"\n<p>In this tutorial, we will use the <strong><em>PyTorch EfficientNetB0 model for brain MRI image classification<\/em><\/strong>.<\/p>\n\n\n\n<div class=\"wp-block-button is-style-outline center\"><a data-sumome-listbuilder-id=\"0e6c200f-6432-4fa0-b2dc-adbe232f3d13\" class=\"wp-block-button__link has-black-color has-luminous-vivid-orange-background-color has-text-color has-background\"><b>Download the Source Code for this Tutorial<\/b><\/a><\/div>\n\n\n\n<p>In the <strong><a href=\"https:\/\/debuggercafe.com\/transfer-learning-using-efficientnet-pytorch\/\" target=\"_blank\" rel=\"noreferrer noopener\">previous tutorial<\/a><\/strong>, we saw how to use the EfficientNetB0 model with the PyTorch deep learning framework for transfer learning. To show the power of transfer learning and fine-tuning, we trained the model on a very small Chess Pieces image dataset. By the end of that tutorial, we were able to conclude that even the smallest of the EfficientNet models performs really well, even when the dataset has only a few hundred images.<\/p>\n\n\n\n<p>To be fair, the Chess Pieces dataset was not very complex; the somewhat mediocre results were mainly due to the small dataset, and the model could perform even better with more augmentations. <em>But what about more complex datasets, like medical imaging datasets?<\/em> That is what we will be testing in this tutorial. Medical image datasets always pose a greater challenge for deep learning models. 
The main reason is that the images are out of domain from general benchmarking datasets like <strong><a href=\"https:\/\/www.image-net.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">ImageNet<\/a><\/strong>. Still, deep learning image classification models have come a long way. In fact, the EfficientNet models are some of the best out there. So, let&#8217;s see how they perform on these images.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/pytorch-efficientnetb0-transfer-learning.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"409\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/pytorch-efficientnetb0-transfer-learning.png\" alt=\"Brain MRI Classification using PyTorch EfficientNetB0\" class=\"wp-image-20923\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/pytorch-efficientnetb0-transfer-learning.png 1000w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/pytorch-efficientnetb0-transfer-learning-300x123.png 300w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/pytorch-efficientnetb0-transfer-learning-768x314.png 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 1. Brain MRI Classification using PyTorch EfficientNetB0.<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p><strong><em>We will cover the following topics in this tutorial.<\/em><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>We will start with the exploration of the <strong><a href=\"https:\/\/www.kaggle.com\/masoudnickparvar\/brain-tumor-mri-dataset?select=Training\" target=\"_blank\" rel=\"noreferrer noopener\">Brain Tumor MRI Dataset<\/a><\/strong>.<\/li>\n\n\n\n<li>Then we will get to know the directory structure for this project.<\/li>\n\n\n\n<li>After that we will move into the coding part. 
Here, we will write the code to train the EfficientNetB0 model on this dataset. We will use transfer learning and fine-tuning.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Brain MRI Dataset<\/h2>\n\n\n\n<p>For the Brain MRI Classification using PyTorch EfficientNetB0, we chose <strong><a href=\"https:\/\/www.kaggle.com\/masoudnickparvar\/brain-tumor-mri-dataset?select=Training\" target=\"_blank\" rel=\"noreferrer noopener\">this<\/a><\/strong> dataset from Kaggle.<\/p>\n\n\n\n<p>The Brain Tumor MRI Dataset is a collection of brain MRI images containing four different classes.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>glioma<\/strong>\u00a0<\/li>\n\n\n\n<li><strong>meningioma<\/strong>\u00a0<\/li>\n\n\n\n<li><strong>no tumor<\/strong><\/li>\n\n\n\n<li><strong>pituitary<\/strong><\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/brain-mri-images-different-classes.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"872\" height=\"799\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/brain-mri-images-different-classes.png\" alt=\"Sample class images from the Brain MRI dataset.\" class=\"wp-image-20925\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/brain-mri-images-different-classes.png 872w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/brain-mri-images-different-classes-300x275.png 300w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/brain-mri-images-different-classes-768x704.png 768w\" sizes=\"auto, (max-width: 872px) 100vw, 872px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 2. 
Sample class images from the Brain MRI dataset.<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>Out of the four classes, <strong>glioma<\/strong>, <strong>meningioma<\/strong>, and <strong>pituitary<\/strong> indicate that there is a tumor present in the MRI image, while <strong>no tumor<\/strong> means that there is no tumor in the brain MRI image.<\/p>\n\n\n\n<p>The dataset from Kaggle contains 5712 training images and 1311 testing images. If you take a look at the structure, all the images are present inside their respective class directories in the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">Training<\/code> and <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">Testing<\/code> folders. But we will change the structure a bit.<\/p>\n\n\n\n<p>Our final dataset structure looks something like this.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\u251c\u2500\u2500 test_images\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 glioma.jpg\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 meningioma.jpg\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 no_tumor.jpg\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 pituitary.jpg\n\u251c\u2500\u2500 training\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 glioma\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 meningioma\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 notumor\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 pituitary\n\u2514\u2500\u2500 validation\n    \u251c\u2500\u2500 glioma\n    \u251c\u2500\u2500 meningioma\n    \u251c\u2500\u2500 notumor\n    \u2514\u2500\u2500 pituitary<\/pre>\n\n\n\n<p>We have renamed the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">Training<\/code> and <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">Testing<\/code> folders as <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">training<\/code> and <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">validation<\/code>. Also, we have taken one image from each class of the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">validation<\/code> folder (four images in total) and put them in the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">test_images<\/code> folder to be used for inference after training our model. The names of these images indicate the class they belong to. The inference images are not part of the training or validation set; they have been removed from those folders. <\/p>\n\n\n\n<p><strong><em>You need not worry about structuring the dataset like this. When downloading the zip file for this tutorial, you will already have the dataset in the above format.<\/em><\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Directory Structure<\/h2>\n\n\n\n<p>The directory structure for the tutorial is pretty straightforward as well.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\u251c\u2500\u2500 input\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 test_images\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 glioma.jpg\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 meningioma.jpg\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 no_tumor.jpg\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 pituitary.jpg\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 training\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 glioma [1321 entries exceeds filelimit, not opening dir]\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 meningioma [1339 entries exceeds filelimit, not opening dir]\n\u2502\u00a0\u00a0 
\u2502\u00a0\u00a0 \u251c\u2500\u2500 notumor [1595 entries exceeds filelimit, not opening dir]\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 pituitary [1457 entries exceeds filelimit, not opening dir]\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 validation\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 glioma [299 entries exceeds filelimit, not opening dir]\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 meningioma [305 entries exceeds filelimit, not opening dir]\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 notumor [404 entries exceeds filelimit, not opening dir]\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 pituitary [299 entries exceeds filelimit, not opening dir]\n\u251c\u2500\u2500 outputs [7 entries exceeds filelimit, not opening dir]\n\u2514\u2500\u2500 src\n    \u251c\u2500\u2500 datasets.py\n    \u251c\u2500\u2500 inference.py\n    \u251c\u2500\u2500 model.py\n    \u251c\u2500\u2500 train.py\n    \u2514\u2500\u2500 utils.py\n<\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Inside the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">input<\/code> directory we have the training and validation datasets along with the test images as described in the previous section.<\/li>\n\n\n\n<li>The <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">outputs<\/code> directory will contain the accuracy and loss graphs for training and validation, the trained model, and the inference image results.<\/li>\n\n\n\n<li>And the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">src<\/code> directory contains all the Python files that we need for this tutorial\/project.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p><strong><em>If you download the zip file for this tutorial, then you will get all the folders and files in place along with the dataset. 
You will just have to follow through with this tutorial and understand the code before executing it.<\/em><\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The PyTorch Version<\/h3>\n\n\n\n<p>If you wish to execute all the code in this tutorial on your local system, then you need PyTorch version &gt;= 1.10, as the EfficientNet pretrained models are only available starting from that version. You can install\/upgrade from <strong><a href=\"https:\/\/pytorch.org\/get-started\/locally\/\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a><\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">PyTorch EfficientNetB0 Model for Brain MRI Image Classification<\/h2>\n\n\n\n<p>Let&#8217;s start with the coding part of the tutorial now. The code in this post will remain very similar to the <strong><a href=\"https:\/\/debuggercafe.com\/transfer-learning-using-efficientnet-pytorch\/\" target=\"_blank\" rel=\"noreferrer noopener\">previous one<\/a><\/strong>. There will be only minor changes in the dataset preparation code and the number of classes in the PyTorch EfficientNetB0 model. For that reason, we will only get into the details where strictly necessary. Also, the code here is very similar to many other image classification projects using PyTorch. Therefore, most of it will be pretty straightforward.<\/p>\n\n\n\n<p><em>Also, note that all the Python code files are present inside the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">src<\/code> directory.<\/em><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Helper Functions<\/h3>\n\n\n\n<p>For the helper functions, we will write the code to save the trained model and the loss and accuracy graphs to disk. 
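<\/p>

<p><em>As a quick aside, here is a minimal, hypothetical sketch (not one of this tutorial&#8217;s source files) showing how a checkpoint saved in the dictionary layout used by the save_model() helper below can later be reloaded to resume training. The tiny nn.Linear model and the temporary file path are stand-ins for illustration only.<\/em><\/p>

```python
# Hypothetical resume-from-checkpoint sketch. The dictionary keys mirror
# this tutorial's save_model() helper; the nn.Linear model and the file
# path are illustrative stand-ins, not part of the tutorial's code.
import os
import tempfile

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(4, 2)
optimizer = optim.Adam(model.parameters(), lr=0.0001)

ckpt_path = os.path.join(tempfile.gettempdir(), 'model.pth')

# Save the states under the same keys that save_model() uses.
torch.save({
    'epoch': 35,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, ckpt_path)

# Later (e.g., in a new process): rebuild the model and optimizer,
# then restore their states and the epoch counter before training on.
checkpoint = torch.load(ckpt_path)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
start_epoch = checkpoint['epoch']
print(f"Resuming from epoch {start_epoch}")
```

<p>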
All these will be saved in the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">outputs<\/code> folder.<\/p>\n\n\n\n<p><strong><em>The code for the helper functions will go into the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">utils.py<\/code> file.<\/em><\/strong><\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"utils.py\" data-enlighter-group=\"utils_1\">import torch\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nmatplotlib.style.use('ggplot')\n\ndef save_model(epochs, model, optimizer, criterion):\n    \"\"\"\n    Function to save the trained model to disk.\n    \"\"\"\n    torch.save({\n                'epoch': epochs,\n                'model_state_dict': model.state_dict(),\n                'optimizer_state_dict': optimizer.state_dict(),\n                'loss': criterion,\n                }, f\"..\/outputs\/model.pth\")\n\ndef save_plots(train_acc, valid_acc, train_loss, valid_loss):\n    \"\"\"\n    Function to save the loss and accuracy plots to disk.\n    \"\"\"\n    # accuracy plots\n    plt.figure(figsize=(10, 7))\n    plt.plot(\n        train_acc, color='green', linestyle='-', \n        label='train accuracy'\n    )\n    plt.plot(\n        valid_acc, color='blue', linestyle='-', \n        label='validation accuracy'\n    )\n    plt.xlabel('Epochs')\n    plt.ylabel('Accuracy')\n    plt.legend()\n    plt.savefig(f\"..\/outputs\/accuracy.png\")\n    \n    # loss plots\n    plt.figure(figsize=(10, 7))\n    plt.plot(\n        
train_loss, color='orange', linestyle='-', \n        label='train loss'\n    )\n    plt.plot(\n        valid_loss, color='red', linestyle='-', \n        label='validation loss'\n    )\n    plt.xlabel('Epochs')\n    plt.ylabel('Loss')\n    plt.legend()\n    plt.savefig(f\"..\/outputs\/loss.png\")\n<\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">save_model()<\/code> function saves the trained model to disk. We save the information about the number of epochs trained for, the optimizer state, and the loss function. This will be helpful if we want to resume training in the future.<\/li>\n\n\n\n<li><code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">save_plots()<\/code> is a simple function that plots the accuracy and loss graphs for training and validation, then saves them to disk.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Preparing the Dataset<\/h3>\n\n\n\n<p>As we already have the training and validation images inside their respective directories, the dataset preparation is quite simple.<\/p>\n\n\n\n<p><strong><em>The code here will go into the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">datasets.py<\/code> file.<\/em><\/strong><\/p>\n\n\n\n<p>Starting with the imports and defining the necessary constants.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"datasets.py\" data-enlighter-group=\"datasets_1\">from torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader\n\n# Required constants.\nTRAIN_DIR = '..\/input\/training'\nVALID_DIR = '..\/input\/validation'\nIMAGE_SIZE = 224 # Image size to resize to when applying transforms.\nBATCH_SIZE = 32 \nNUM_WORKERS = 4 # Number of parallel processes for data 
preparation.<\/pre>\n\n\n\n<p>We have the path to the training and validation images, the image size to resize to when applying the transforms, batch size, and the number of workers for data preprocessing.<\/p>\n\n\n\n<p>Next, we define the functions for the training and the validation transforms.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"10\" data-enlighter-title=\"datasets.py\" data-enlighter-group=\"datasets_2\"># Training transforms\ndef get_train_transform(IMAGE_SIZE):\n    train_transform = transforms.Compose([\n        transforms.Resize((IMAGE_SIZE, IMAGE_SIZE)),\n        transforms.RandomHorizontalFlip(p=0.5),\n        transforms.RandomVerticalFlip(p=0.5),\n        transforms.GaussianBlur(kernel_size=(5, 9), sigma=(0.1, 5)),\n        transforms.RandomAdjustSharpness(sharpness_factor=2, p=0.5),\n        transforms.ToTensor(),\n        transforms.Normalize(\n            mean=[0.485, 0.456, 0.406],\n            std=[0.229, 0.224, 0.225]\n            )\n    ])\n    return train_transform\n\n# Validation transforms\ndef get_valid_transform(IMAGE_SIZE):\n    valid_transform = transforms.Compose([\n        transforms.Resize((IMAGE_SIZE, IMAGE_SIZE)),\n        transforms.ToTensor(),\n        transforms.Normalize(\n            mean=[0.485, 0.456, 0.406],\n            std=[0.229, 0.224, 0.225]\n            )\n    ])\n    return valid_transform<\/pre>\n\n\n\n<p>For the training transforms we apply the following augmentations:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">RandomHorizontalFlip<\/code>: Flipping the image horizontally randomly.<\/li>\n\n\n\n<li><code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">RandomVerticalFlip<\/code>: Randomly flipping the image vertically.<\/li>\n\n\n\n<li><code data-enlighter-language=\"generic\" 
class=\"EnlighterJSRAW\">GaussianBlur<\/code>: Applying Gaussian blurring to the image.<\/li>\n\n\n\n<li><code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">RandomAdjustSharpness<\/code>: Changing the sharpness of the image randomly.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p><strong><em>Note <\/em><\/strong><em>that we do not apply color or contrast augmentation here. The reason is that it may negatively affect how the MRI images appear, and the model might miss out on the original color information that it needs for proper learning. This can, however, be taken up as a future experiment.<\/em><\/p>\n\n\n\n<p>We are applying the ImageNet normalization values for both the training and validation transforms. This is because we will use a pretrained EfficientNetB0 model here.<\/p>\n\n\n\n<p>Finally, we write the code to prepare the datasets and the data loaders.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"37\" data-enlighter-title=\"datasets.py\" data-enlighter-group=\"datasets_3\">def get_datasets():\n    \"\"\"\n    Function to prepare the Datasets.\n\n    Returns the training and validation datasets along \n    with the class names.\n    \"\"\"\n    dataset_train = datasets.ImageFolder(\n        TRAIN_DIR, \n        transform=(get_train_transform(IMAGE_SIZE))\n    )\n    dataset_valid = datasets.ImageFolder(\n        VALID_DIR, \n        transform=(get_valid_transform(IMAGE_SIZE))\n    )\n    return dataset_train, dataset_valid, dataset_train.classes\n\ndef get_data_loaders(dataset_train, dataset_valid):\n    \"\"\"\n    Prepares the training and validation data loaders.\n\n    :param dataset_train: The training dataset.\n    :param dataset_valid: The validation dataset.\n\n    Returns the training and validation data loaders.\n    \"\"\"\n    train_loader = DataLoader(\n        
dataset_train, batch_size=BATCH_SIZE, \n        shuffle=True, num_workers=NUM_WORKERS\n    )\n    valid_loader = DataLoader(\n        dataset_valid, batch_size=BATCH_SIZE, \n        shuffle=False, num_workers=NUM_WORKERS\n    )\n    return train_loader, valid_loader <\/pre>\n\n\n\n<p>The <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">get_datasets()<\/code> function prepares the training and validation datasets and returns them along with the class names.<\/p>\n\n\n\n<p>The <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">get_data_loaders()<\/code> function takes in the datasets as parameters and prepares the training and validation data loaders.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The EfficientNetB0 Model<\/h3>\n\n\n\n<p>We can easily load the pretrained EfficientNetB0 model from <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">torchvision.models<\/code>. And that is what we will do here as well.<\/p>\n\n\n\n<p><strong><em>The code to prepare the model will go into the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">model.py<\/code> file.<\/em><\/strong><\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"model.py\" data-enlighter-group=\"model_1\">import torchvision.models as models\nimport torch.nn as nn\n\ndef build_model(pretrained=True, fine_tune=True, num_classes=10):\n    if pretrained:\n        print('[INFO]: Loading pre-trained weights')\n    else:\n        print('[INFO]: Not loading pre-trained weights')\n    model = models.efficientnet_b0(pretrained=pretrained)\n\n    if fine_tune:\n        print('[INFO]: Fine-tuning all layers...')\n        for params in model.parameters():\n            params.requires_grad = True\n    elif not fine_tune:\n        print('[INFO]: Freezing hidden layers...')\n        for params 
in model.parameters():\n            params.requires_grad = False\n\n    # Change the final classification head.\n    model.classifier[1] = nn.Linear(in_features=1280, out_features=num_classes)\n    return model<\/pre>\n\n\n\n<p>On <strong>line 21<\/strong>, we change the number of output classes to match our dataset. That&#8217;s it. Our PyTorch EfficientNet model for brain MRI image classification is ready. It is worth noting that we load the pretrained weights and fine-tune all the layers. The model was pretrained on the ImageNet dataset, which is very unlikely to contain any brain MRI images. So, by loading the pretrained ImageNet weights, we already start at a good place, and we then slowly update the weights according to our dataset.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Training Script<\/h3>\n\n\n\n<p>This is the final script that we need to write before we start the training. <\/p>\n\n\n\n<p><strong><em>The code for the training script will go into the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">train.py<\/code> file.<\/em><\/strong><\/p>\n\n\n\n<p>Starting with the imports and constructing the argument parser.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"train.py\" data-enlighter-group=\"train_1\">import torch\nimport argparse\nimport torch.nn as nn\nimport torch.optim as optim\nimport time\n\nfrom tqdm.auto import tqdm\n\nfrom model import build_model\nfrom datasets import get_datasets, get_data_loaders\nfrom utils import save_model, save_plots\n\n# Construct the argument parser.\nparser = argparse.ArgumentParser()\nparser.add_argument(\n    '-e', '--epochs', type=int, default=20,\n    help='Number of epochs to train our network for'\n)\nparser.add_argument(\n    '-lr', '--learning-rate', type=float,\n    
dest='learning_rate', default=0.0001,\n    help='Learning rate for training the model'\n)\nargs = vars(parser.parse_args())<\/pre>\n\n\n\n<p>We have two flags for the argument parser:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">--epochs<\/code>: The number of epochs to train for.<\/li>\n\n\n\n<li><code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">--learning-rate<\/code>: The learning rate for the optimizer.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Training and Validation Functions<\/h3>\n\n\n\n<p>First, the training function.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"25\" data-enlighter-title=\"train.py\" data-enlighter-group=\"train_2\"># Training function.\ndef train(model, trainloader, optimizer, criterion):\n    model.train()\n    print('Training')\n    train_running_loss = 0.0\n    train_running_correct = 0\n    counter = 0\n    for i, data in tqdm(enumerate(trainloader), total=len(trainloader)):\n        counter += 1\n        image, labels = data\n        image = image.to(device)\n        labels = labels.to(device)\n        optimizer.zero_grad()\n        # Forward pass.\n        outputs = model(image)\n        # Calculate the loss.\n        loss = criterion(outputs, labels)\n        train_running_loss += loss.item()\n        # Calculate the accuracy.\n        _, preds = torch.max(outputs.data, 1)\n        train_running_correct += (preds == labels).sum().item()\n        # Backpropagation\n        loss.backward()\n        # Update the weights.\n        optimizer.step()\n    \n    # Loss and accuracy for the complete epoch.\n    epoch_loss = train_running_loss \/ counter\n    epoch_acc = 100. 
* (train_running_correct \/ len(trainloader.dataset))\n    return epoch_loss, epoch_acc<\/pre>\n\n\n\n<p>It returns the loss and accuracy for each epoch.<\/p>\n\n\n\n<p>Now, the validation function, which returns the accuracy and loss values as well.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"55\" data-enlighter-title=\"train.py\" data-enlighter-group=\"train_3\"># Validation function.\ndef validate(model, testloader, criterion):\n    model.eval()\n    print('Validation')\n    valid_running_loss = 0.0\n    valid_running_correct = 0\n    counter = 0\n    with torch.no_grad():\n        for i, data in tqdm(enumerate(testloader), total=len(testloader)):\n            counter += 1\n            \n            image, labels = data\n            image = image.to(device)\n            labels = labels.to(device)\n            # Forward pass.\n            outputs = model(image)\n            # Calculate the loss.\n            loss = criterion(outputs, labels)\n            valid_running_loss += loss.item()\n            # Calculate the accuracy.\n            _, preds = torch.max(outputs.data, 1)\n            valid_running_correct += (preds == labels).sum().item()\n        \n    # Loss and accuracy for the complete epoch.\n    epoch_loss = valid_running_loss \/ counter\n    epoch_acc = 100. 
* (valid_running_correct \/ len(testloader.dataset))\n    return epoch_loss, epoch_acc<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">The Main Code Block<\/h3>\n\n\n\n<p>The final part of the training script is writing the main code block.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"82\" data-enlighter-title=\"train.py\" data-enlighter-group=\"train_4\">if __name__ == '__main__':\n    # Load the training and validation datasets.\n    dataset_train, dataset_valid, dataset_classes = get_datasets()\n    print(f\"[INFO]: Number of training images: {len(dataset_train)}\")\n    print(f\"[INFO]: Number of validation images: {len(dataset_valid)}\")\n    print(f\"[INFO]: Class names: {dataset_classes}\\n\")\n    # Load the training and validation data loaders.\n    train_loader, valid_loader = get_data_loaders(dataset_train, dataset_valid)\n\n    # Learning_parameters. 
\n    lr = args['learning_rate']\n    epochs = args['epochs']\n    device = ('cuda' if torch.cuda.is_available() else 'cpu')\n    print(f\"Computation device: {device}\")\n    print(f\"Learning rate: {lr}\")\n    print(f\"Epochs to train for: {epochs}\\n\")\n\n    model = build_model(\n        pretrained=True,\n        fine_tune=True, \n        num_classes=len(dataset_classes)\n    ).to(device)\n    \n    # Total parameters and trainable parameters.\n    total_params = sum(p.numel() for p in model.parameters())\n    print(f\"{total_params:,} total parameters.\")\n    total_trainable_params = sum(\n        p.numel() for p in model.parameters() if p.requires_grad)\n    print(f\"{total_trainable_params:,} training parameters.\")\n\n    # Optimizer.\n    optimizer = optim.Adam(model.parameters(), lr=lr)\n    # Loss function.\n    criterion = nn.CrossEntropyLoss()\n\n    # Lists to keep track of losses and accuracies.\n    train_loss, valid_loss = [], []\n    train_acc, valid_acc = [], []\n    # Start the training.\n    for epoch in range(epochs):\n        print(f\"[INFO]: Epoch {epoch+1} of {epochs}\")\n        train_epoch_loss, train_epoch_acc = train(model, train_loader, \n                                                optimizer, criterion)\n        valid_epoch_loss, valid_epoch_acc = validate(model, valid_loader,  \n                                                    criterion)\n        train_loss.append(train_epoch_loss)\n        valid_loss.append(valid_epoch_loss)\n        train_acc.append(train_epoch_acc)\n        valid_acc.append(valid_epoch_acc)\n        print(f\"Training loss: {train_epoch_loss:.3f}, training acc: {train_epoch_acc:.3f}\")\n        print(f\"Validation loss: {valid_epoch_loss:.3f}, validation acc: {valid_epoch_acc:.3f}\")\n        print('-'*50)\n        time.sleep(5)\n        \n    # Save the trained model weights.\n    save_model(epochs, model, optimizer, criterion)\n    # Save the loss and accuracy plots.\n    save_plots(train_acc, valid_acc, 
train_loss, valid_loss)\n    print('TRAINING COMPLETE')<\/pre>\n\n\n\n<p>The above main code block includes the following things:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>We start with preparing the datasets and data loaders (<strong>lines 84 to 89<\/strong>).<\/li>\n\n\n\n<li>After the learning parameters, we initialize the model on <strong>line 99<\/strong>.<\/li>\n\n\n\n<li>We start the training loop from <strong>line 121<\/strong>. After each epoch, we are printing the loss and accuracy values for both training and validation.<\/li>\n\n\n\n<li>After the training completes, we save the model and the graphs to disk.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>This completes all the code we need for training the PyTorch EfficientNetB0 model on the brain MRI classification dataset.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Execute train.py to Start Training<\/h3>\n\n\n\n<p>Finally, we have reached the training phase in the tutorial. <\/p>\n\n\n\n<p>You may open your terminal\/command line from the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">src<\/code> directory and execute the following command to start the training.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">python train.py --epochs 35<\/pre>\n\n\n\n<p>We are training the model for 35 epochs with the default learning rate of 0.0001.<\/p>\n\n\n\n<p>The following block shows the truncated output.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"raw\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">[INFO]: Number of training images: 5712\n[INFO]: Number of validation images: 1307\n[INFO]: Class names: ['glioma', 'meningioma', 'notumor', 
'pituitary']\n\nComputation device: cuda\nLearning rate: 0.0001\nEpochs to train for: 35\n\n[INFO]: Loading pre-trained weights\n[INFO]: Fine-tuning all layers...\n4,012,672 total parameters.\n4,012,672 training parameters.\n[INFO]: Epoch 1 of 35\nTraining\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 179\/179 [00:12&lt;00:00, 14.79it\/s]\nValidation\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 41\/41 [00:00&lt;00:00, 42.34it\/s]\nTraining loss: 0.498, training acc: 83.718\nValidation loss: 0.222, validation acc: 91.890\n--------------------------------------------------\n...\n[INFO]: Epoch 35 of 35\nTraining\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 179\/179 [00:11&lt;00:00, 
15.39it\/s]\nValidation\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 41\/41 [00:00&lt;00:00, 42.93it\/s]\nTraining loss: 0.011, training acc: 99.667\nValidation loss: 0.009, validation acc: 99.694\n--------------------------------------------------\nTRAINING COMPLETE<\/pre>\n\n\n\n<p>After the last epoch, we have a validation accuracy of more than 99%  and a validation loss of 0.009. This looks good.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/accuracy-1.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"700\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/accuracy-1.png\" alt=\"Accuracy after training PyTorch EfficientNetB0 on Brain MRI classification dataset.\" class=\"wp-image-20928\" style=\"width:840px;height:588px\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/accuracy-1.png 1000w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/accuracy-1-300x210.png 300w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/accuracy-1-768x538.png 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 3. 
Accuracy after training PyTorch EfficientNetB0 on Brain MRI classification dataset.<\/strong><\/figcaption><\/figure>\n<\/div>\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/loss-brain-mri-image-classification.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"700\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/loss-brain-mri-image-classification.png\" alt=\"Loss after training PyTorch EfficientNetB0 on Brain MRI classification dataset.\" class=\"wp-image-20930\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/loss-brain-mri-image-classification.png 1000w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/loss-brain-mri-image-classification-300x210.png 300w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/loss-brain-mri-image-classification-768x538.png 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 4. Loss after training PyTorch EfficientNetB0 on Brain MRI classification dataset.<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>The accuracy and loss graphs also look pretty good. 
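<\/p>\n\n\n\n<p>The plots are created by the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">save_plots<\/code> helper function that we call at the end of training. Its implementation is not shown in this post, so here is a minimal sketch of what it may look like using Matplotlib. Only the function signature matches our training script; the body is an assumption.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import os\n\nimport matplotlib.pyplot as plt\n\ndef save_plots(train_acc, valid_acc, train_loss, valid_loss):\n    # Make sure that the output directory exists.\n    os.makedirs('..\/outputs', exist_ok=True)\n    # Accuracy plot.\n    plt.figure(figsize=(10, 7))\n    plt.plot(train_acc, color='green', label='train accuracy')\n    plt.plot(valid_acc, color='blue', label='validation accuracy')\n    plt.xlabel('Epochs')\n    plt.ylabel('Accuracy')\n    plt.legend()\n    plt.savefig('..\/outputs\/accuracy.png')\n    # Loss plot.\n    plt.figure(figsize=(10, 7))\n    plt.plot(train_loss, color='orange', label='train loss')\n    plt.plot(valid_loss, color='red', label='validation loss')\n    plt.xlabel('Epochs')\n    plt.ylabel('Loss')\n    plt.legend()\n    plt.savefig('..\/outputs\/loss.png')<\/pre>\n\n\n\n<p>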
For both training and validation, there is not much fluctuation in the plots.<\/p>\n\n\n\n<p>We will get to know how well the model has learned once we run the inference on the test images.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Inference Script<\/h3>\n\n\n\n<p>In this section, we will write the inference code that we will use to predict the classes of unseen Brain MRI images using the trained model.<\/p>\n\n\n\n<p><strong><em>The code in this section will go into the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">inference.py<\/code> script.<\/em><\/strong><\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"inference.py\" data-enlighter-group=\"inference_1\">import torch\nimport cv2\nimport numpy as np\nimport glob\nimport os\n\nfrom model import build_model\nfrom torchvision import transforms\n\n# Constants.\nDATA_PATH = '..\/input\/test_images'\nIMAGE_SIZE = 224\nDEVICE = 'cpu'\n\n# Class names.\nclass_names = ['glioma', 'meningioma', 'no_tumor', 'pituitary']<\/pre>\n\n\n\n<p>Above, we first import all the required modules. 
Then we define a few constants from <strong>line 11<\/strong> and a list containing the class names on <strong>line 16<\/strong>.<\/p>\n\n\n\n<p>Next, we initialize the model and load the trained weights.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"17\" data-enlighter-title=\"inference.py\" data-enlighter-group=\"inference_2\"># Load the trained model.\nmodel = build_model(pretrained=False, fine_tune=False, num_classes=4)\ncheckpoint = torch.load('..\/outputs\/model.pth', map_location=DEVICE)\nprint('Loading trained model weights...')\nmodel.load_state_dict(checkpoint['model_state_dict'])<\/pre>\n\n\n\n<p>The final part is iterating over all the test images and running the inference on each of them.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"22\" data-enlighter-title=\"inference.py\" data-enlighter-group=\"inference_3\"># Get all the test image paths.\nall_image_paths = glob.glob(f\"{DATA_PATH}\/*\")\n# Iterate over all the images and do forward pass.\nfor image_path in all_image_paths:\n    # Get the ground truth class name from the image path.\n    gt_class_name = image_path.split(os.path.sep)[-1].split('.')[0]\n    # Read the image and create a copy.\n    image = cv2.imread(image_path)\n    orig_image = image.copy()\n    \n    # Preprocess the image\n    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n    transform = transforms.Compose([\n        transforms.ToPILImage(),\n        transforms.Resize((IMAGE_SIZE, IMAGE_SIZE)),\n        transforms.ToTensor(),\n        transforms.Normalize(\n            mean=[0.485, 0.456, 0.406],\n            std=[0.229, 0.224, 0.225]\n        )\n    ])\n    image = transform(image)\n    image = torch.unsqueeze(image, 0)\n    image = 
image.to(DEVICE)\n    \n    # Forward pass through the model.\n    outputs = model(image)\n    outputs = outputs.detach().numpy()\n    pred_class_name = class_names[np.argmax(outputs[0])]\n    print(f\"GT: {gt_class_name}, Pred: {pred_class_name.lower()}\")\n    # Annotate the image with ground truth.\n    cv2.putText(\n        orig_image, f\"GT: {gt_class_name}\",\n        (10, 25), cv2.FONT_HERSHEY_SIMPLEX,\n        0.8, (0, 255, 0), 2, lineType=cv2.LINE_AA\n    )\n    # Annotate the image with prediction.\n    cv2.putText(\n        orig_image, f\"Pred: {pred_class_name.lower()}\",\n        (10, 55), cv2.FONT_HERSHEY_SIMPLEX,\n        0.8, (100, 100, 225), 2, lineType=cv2.LINE_AA\n    ) \n    cv2.imshow('Result', orig_image)\n    cv2.waitKey(0)\n    cv2.imwrite(f\"..\/outputs\/{gt_class_name}.png\", orig_image)<\/pre>\n\n\n\n<p>We store all the test image paths in the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">all_image_paths<\/code> list. Then we start iterating over these paths from <strong>line 25<\/strong>. <\/p>\n\n\n\n<p>The <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">gt_class_name<\/code> on <strong>line 27<\/strong> stores the ground truth class of the current image. Starting from <strong>line 29<\/strong>, we read the image, create a copy of it, convert it to RGB format, and apply the preprocessing transforms. The forward pass happens on <strong>line 48<\/strong>. On <strong>line 50<\/strong>, <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">pred_class_name<\/code> stores the predicted class of the image. Then we print the ground truth and predicted class name and annotate the original image with the same. 
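<\/p>\n\n\n\n<p>A small note on the forward pass: for inference, it is good practice to put the model into evaluation mode with <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">model.eval()<\/code> so that the batch normalization and dropout layers behave deterministically, and to disable gradient computation with <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">torch.no_grad()<\/code>. We can also apply softmax to the raw outputs to get a confidence score for the prediction. The following standalone snippet (not part of the original script; the tiny linear model is only a stand-in for the trained EfficientNetB0) shows the idea.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import torch\nimport torch.nn as nn\n\nclass_names = ['glioma', 'meningioma', 'no_tumor', 'pituitary']\n# Stand-in model for illustration; in inference.py this is the trained EfficientNetB0.\nmodel = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, len(class_names)))\nimage = torch.randn(1, 3, 224, 224)  # Stands in for the preprocessed image tensor.\n\n# Evaluation mode and no gradient tracking for inference.\nmodel.eval()\nwith torch.no_grad():\n    outputs = model(image)\n# Softmax over the class dimension converts raw logits to probabilities.\nprobs = torch.softmax(outputs, dim=1)\npred_idx = int(torch.argmax(probs, dim=1))\nprint(f\"Pred: {class_names[pred_idx]}, confidence: {probs[0, pred_idx].item():.3f}\")<\/pre>\n\n\n\n<p>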
Finally, we show the result and save it to disk.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Execute inference.py<\/h3>\n\n\n\n<p>Let&#8217;s check how our model performs on the test images.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">python inference.py <\/pre>\n\n\n\n<p>The following output is from the terminal.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"raw\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">[INFO]: Not loading pre-trained weights\n[INFO]: Freezing hidden layers...\nLoading trained model weights...\nGT: glioma, Pred: glioma\nGT: pituitary, Pred: pituitary\nGT: no_tumor, Pred: no_tumor\nGT: meningioma, Pred: meningioma<\/pre>\n\n\n\n<p>And here are the output image results.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/pytorch-efficientnetb0-brain-mri-classification-inference-results.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"799\" height=\"790\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/pytorch-efficientnetb0-brain-mri-classification-inference-results.png\" alt=\"PyTorch EfficientNetB0 Brain MRI classification inference results.\" class=\"wp-image-20932\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/pytorch-efficientnetb0-brain-mri-classification-inference-results.png 799w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/pytorch-efficientnetb0-brain-mri-classification-inference-results-300x297.png 300w, 
https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/pytorch-efficientnetb0-brain-mri-classification-inference-results-768x759.png 768w\" sizes=\"auto, (max-width: 799px) 100vw, 799px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 5. PyTorch EfficientNetB0 Brain MRI classification inference results.<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>The model is able to predict all the image classes correctly. That&#8217;s great. It has learned the features of the images from the dataset really well. Most probably, another model with around 4 million parameters trained from scratch would never have been able to achieve such results. This really shows the power of both transfer learning and the EfficientNetB0 model.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Further Experiments<\/h2>\n\n\n\n<p>There are a few things that we may do to improve the overall project and performance of the model.<\/p>\n\n\n\n<p>You may have observed that all the MRI images have black pixels around them, and the brain MRI is mostly at the center. <\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><a href=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/brain-msi-image-showing-black-pixels.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"526\" height=\"533\" src=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/brain-msi-image-showing-black-pixels.png\" alt=\"Brain MRI image showing black pixels that can be cropped out.\" class=\"wp-image-20934\" style=\"width:526px;height:533px\" srcset=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/brain-msi-image-showing-black-pixels.png 526w, https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/brain-msi-image-showing-black-pixels-296x300.png 296w\" sizes=\"auto, (max-width: 526px) 100vw, 526px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>Figure 6. 
Brain MRI image showing black pixels that can be cropped out.<\/strong><\/figcaption><\/figure>\n<\/div>\n\n\n<p>This means that the model only needs to focus on the MRI region; the surrounding black borders\/pixels carry no useful information. Most probably, we can devise a way to safely crop out the black pixels and train the model again. This is very likely to improve performance. Do let us know in the comment section if you try this experiment.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Summary and Conclusion<\/h2>\n\n\n\n<p>In this tutorial, we carried out Brain MRI Classification using PyTorch EfficientNetB0. We started by exploring the dataset, then trained the EfficientNetB0 model, and finally ran the inference. In the end, we also discussed some possible ways to improve the model&#8217;s performance further. I hope that this tutorial was helpful to you.<\/p>\n\n\n\n<p>If you have any doubts, thoughts, or suggestions, please leave them in the comment section. I will surely address them.<\/p>\n\n\n\n<p>You can contact me using the <strong><a aria-label=\"Contact (opens in a new tab)\" href=\"https:\/\/debuggercafe.com\/contact-us\/\" target=\"_blank\" rel=\"noreferrer noopener\">Contact<\/a><\/strong> section. 
You can also find me on <strong><a aria-label=\"LinkedIn (opens in a new tab)\" href=\"https:\/\/www.linkedin.com\/in\/sovit-rath\/\" target=\"_blank\" rel=\"noreferrer noopener\">LinkedIn<\/a><\/strong>, and <strong><a href=\"https:\/\/x.com\/SovitRath5\" target=\"_blank\" rel=\"noreferrer noopener\">X<\/a><\/strong>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this post, we explore how to carry out transfer learning and fine-tuning an EfficientNetB0 model to classify brain MRI images.<\/p>\n","protected":false},"author":1,"featured_media":20937,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[76,113,59,129,17,57,90],"tags":[229,77,112,61,230,221,222,123,62,91],"class_list":["post-20828","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-computer-vision","category-convolutional-neural-networks","category-deep-learning","category-image-segmentation","category-machine-learning","category-neural-networks","category-pytorch","tag-brain-mri-image-classification","tag-computer-vision","tag-convolutional-neural-networks","tag-deep-learning","tag-deep-learning-in-medical-imaging","tag-efficientnet","tag-efficientnetb0","tag-image-classification","tag-neural-networks","tag-pytorch"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.9 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Brain MRI Classification using PyTorch EfficientNetB0<\/title>\n<meta name=\"description\" content=\"Learn how to do carry out transfer learning using EfficientNetB0 with PyTorch for brain MRI image classification.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta 
property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Brain MRI Classification using PyTorch EfficientNetB0\" \/>\n<meta property=\"og:description\" content=\"Learn how to do carry out transfer learning using EfficientNetB0 with PyTorch for brain MRI image classification.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/\" \/>\n<meta property=\"og:site_name\" content=\"DebuggerCafe\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/profile.php?id=100013731104496\" \/>\n<meta property=\"article:published_time\" content=\"2022-01-24T00:30:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-09-15T15:03:26+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/Brain-MRI-Classification-using-PyTorch-EfficientNetB0-e1640913058183.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"563\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Sovit Ranjan Rath\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@SovitRath5\" \/>\n<meta name=\"twitter:site\" content=\"@SovitRath5\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sovit Ranjan Rath\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"1 minute\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/\"},\"author\":{\"name\":\"Sovit Ranjan Rath\",\"@id\":\"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752\"},\"headline\":\"Brain MRI Classification using PyTorch EfficientNetB0\",\"datePublished\":\"2022-01-24T00:30:00+00:00\",\"dateModified\":\"2024-09-15T15:03:26+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/\"},\"wordCount\":2066,\"commentCount\":0,\"image\":{\"@id\":\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/Brain-MRI-Classification-using-PyTorch-EfficientNetB0-e1640913058183.png\",\"keywords\":[\"Brain MRI Image Classification\",\"Computer Vision\",\"Convolutional Neural Networks\",\"Deep Learning\",\"Deep Learning in Medical Imaging\",\"EfficientNet\",\"EfficientNetB0\",\"Image Classification\",\"Neural Networks\",\"PyTorch\"],\"articleSection\":[\"Computer Vision\",\"Convolutional Neural Networks\",\"Deep Learning\",\"Image Segmentation\",\"Machine Learning\",\"Neural 
Networks\",\"PyTorch\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/\",\"url\":\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/\",\"name\":\"Brain MRI Classification using PyTorch EfficientNetB0\",\"isPartOf\":{\"@id\":\"https:\/\/debuggercafe.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/Brain-MRI-Classification-using-PyTorch-EfficientNetB0-e1640913058183.png\",\"datePublished\":\"2022-01-24T00:30:00+00:00\",\"dateModified\":\"2024-09-15T15:03:26+00:00\",\"author\":{\"@id\":\"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752\"},\"description\":\"Learn how to do carry out transfer learning using EfficientNetB0 with PyTorch for brain MRI image 
classification.\",\"breadcrumb\":{\"@id\":\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#primaryimage\",\"url\":\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/Brain-MRI-Classification-using-PyTorch-EfficientNetB0-e1640913058183.png\",\"contentUrl\":\"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/Brain-MRI-Classification-using-PyTorch-EfficientNetB0-e1640913058183.png\",\"width\":1000,\"height\":563,\"caption\":\"Brain MRI Classification using PyTorch EfficientNetB0\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/debuggercafe.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Brain MRI Classification using PyTorch EfficientNetB0\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/debuggercafe.com\/#website\",\"url\":\"https:\/\/debuggercafe.com\/\",\"name\":\"DebuggerCafe\",\"description\":\"Machine Learning and Deep Learning\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/debuggercafe.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752\",\"name\":\"Sovit Ranjan 
Rath\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/debuggercafe.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/f71ca13ec56d630e7d8045e8b846396068791aa204936c3d74d721c6dd2b4d3c?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/f71ca13ec56d630e7d8045e8b846396068791aa204936c3d74d721c6dd2b4d3c?s=96&d=mm&r=g\",\"caption\":\"Sovit Ranjan Rath\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Brain MRI Classification using PyTorch EfficientNetB0","description":"Learn how to do carry out transfer learning using EfficientNetB0 with PyTorch for brain MRI image classification.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/","og_locale":"en_US","og_type":"article","og_title":"Brain MRI Classification using PyTorch EfficientNetB0","og_description":"Learn how to do carry out transfer learning using EfficientNetB0 with PyTorch for brain MRI image classification.","og_url":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/","og_site_name":"DebuggerCafe","article_publisher":"https:\/\/www.facebook.com\/profile.php?id=100013731104496","article_published_time":"2022-01-24T00:30:00+00:00","article_modified_time":"2024-09-15T15:03:26+00:00","og_image":[{"width":1000,"height":563,"url":"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/Brain-MRI-Classification-using-PyTorch-EfficientNetB0-e1640913058183.png","type":"image\/png"}],"author":"Sovit Ranjan Rath","twitter_card":"summary_large_image","twitter_creator":"@SovitRath5","twitter_site":"@SovitRath5","twitter_misc":{"Written by":"Sovit Ranjan Rath","Est. 
reading time":"1 minute"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#article","isPartOf":{"@id":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/"},"author":{"name":"Sovit Ranjan Rath","@id":"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752"},"headline":"Brain MRI Classification using PyTorch EfficientNetB0","datePublished":"2022-01-24T00:30:00+00:00","dateModified":"2024-09-15T15:03:26+00:00","mainEntityOfPage":{"@id":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/"},"wordCount":2066,"commentCount":0,"image":{"@id":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#primaryimage"},"thumbnailUrl":"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/Brain-MRI-Classification-using-PyTorch-EfficientNetB0-e1640913058183.png","keywords":["Brain MRI Image Classification","Computer Vision","Convolutional Neural Networks","Deep Learning","Deep Learning in Medical Imaging","EfficientNet","EfficientNetB0","Image Classification","Neural Networks","PyTorch"],"articleSection":["Computer Vision","Convolutional Neural Networks","Deep Learning","Image Segmentation","Machine Learning","Neural Networks","PyTorch"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/","url":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/","name":"Brain MRI Classification using PyTorch 
EfficientNetB0","isPartOf":{"@id":"https:\/\/debuggercafe.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#primaryimage"},"image":{"@id":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#primaryimage"},"thumbnailUrl":"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/Brain-MRI-Classification-using-PyTorch-EfficientNetB0-e1640913058183.png","datePublished":"2022-01-24T00:30:00+00:00","dateModified":"2024-09-15T15:03:26+00:00","author":{"@id":"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752"},"description":"Learn how to do carry out transfer learning using EfficientNetB0 with PyTorch for brain MRI image classification.","breadcrumb":{"@id":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#primaryimage","url":"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/Brain-MRI-Classification-using-PyTorch-EfficientNetB0-e1640913058183.png","contentUrl":"https:\/\/debuggercafe.com\/wp-content\/uploads\/2021\/12\/Brain-MRI-Classification-using-PyTorch-EfficientNetB0-e1640913058183.png","width":1000,"height":563,"caption":"Brain MRI Classification using PyTorch EfficientNetB0"},{"@type":"BreadcrumbList","@id":"https:\/\/debuggercafe.com\/brain-mri-classification-using-pytorch-efficientnetb0\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/debuggercafe.com\/"},{"@type":"ListItem","position":2,"name":"Brain MRI Classification using PyTorch 
EfficientNetB0"}]},{"@type":"WebSite","@id":"https:\/\/debuggercafe.com\/#website","url":"https:\/\/debuggercafe.com\/","name":"DebuggerCafe","description":"Machine Learning and Deep Learning","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/debuggercafe.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/debuggercafe.com\/#\/schema\/person\/27719b14d930bd4a88ade40d18b0a752","name":"Sovit Ranjan Rath","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/debuggercafe.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/f71ca13ec56d630e7d8045e8b846396068791aa204936c3d74d721c6dd2b4d3c?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/f71ca13ec56d630e7d8045e8b846396068791aa204936c3d74d721c6dd2b4d3c?s=96&d=mm&r=g","caption":"Sovit Ranjan Rath"}}]}},"_links":{"self":[{"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/posts\/20828","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/comments?post=20828"}],"version-history":[{"count":120,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/posts\/20828\/revisions"}],"predecessor-version":[{"id":38035,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/posts\/20828\/revisions\/38035"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/media\/20937"}],"wp:attachment":[{"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/media?parent=20828"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/debuggercaf
e.com\/wp-json\/wp\/v2\/categories?post=20828"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/debuggercafe.com\/wp-json\/wp\/v2\/tags?post=20828"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}