<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>R on Data Science Guts</title>
    <link>https://datascienceguts.com/categories/r/</link>
    <description>Recent content in R on Data Science Guts</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Wed, 25 Jan 2023 01:00:00 +0000</lastBuildDate>
    
        <atom:link href="https://datascienceguts.com/categories/r/index.xml" rel="self" type="application/rss+xml" />
    
    
    <item>
      <title>Deep learning–based automated measurements of the scrotal circumference of Norwegian Red bulls from 3D images</title>
      <link>https://datascienceguts.com/2023/01/deep-learningbased-automated-measurements-of-the-scrotal-circumference-of-norwegian-red-bulls-from-3d-images/</link>
      <pubDate>Wed, 25 Jan 2023 01:00:00 +0000</pubDate>
      
      <guid>https://datascienceguts.com/2023/01/deep-learningbased-automated-measurements-of-the-scrotal-circumference-of-norwegian-red-bulls-from-3d-images/</guid>
      <description>&lt;p&gt;Working on Computer Vision tasks is always exciting for me. During my career I have worked with many different types of images and solved many different problems related to them, in fields such as biology, medicine, genetics, climatology and many more. Today I would like to tell you about one of the most extraordinary use cases I’ve ever worked on.&lt;/p&gt;
&lt;div id=&#34;the-problem&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;The problem&lt;/h1&gt;
&lt;p&gt;Computer Vision can be applied in many different fields (the sky is the limit), but to be completely honest I would never have guessed that someday I would work on &lt;strong&gt;automatic measurement of the scrotal circumference of Norwegian Red bulls&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Everything started around a year ago, when my former student, now a PhD candidate at &lt;a href=&#34;https://www.linkedin.com/school/inn-norway/&#34;&gt;Inland Norway University of Applied Sciences&lt;/a&gt;, &lt;a href=&#34;https://www.linkedin.com/in/joanna-bremer-95b885166/&#34;&gt;Joanna Bremer&lt;/a&gt;, wrote me an e-mail with a simple question: can we measure the scrotal circumference from 3D images using deep learning? From that moment I was hooked!&lt;/p&gt;
&lt;p&gt;It turns out that automating the measurement of different physiological and behavioral traits is a major trend in agriculture. Scrotal circumference is an essential part of the selection criteria for bulls in breeding programs. Traditionally, the circumference is measured manually with a scrotal tape. Automating this process and implementing it in feeding stations would be a valuable tool for performance testing stations and bovine semen collection centers, and it would improve the safety and welfare of both technicians and animals.&lt;/p&gt;
&lt;p&gt;Before we show you our solution, we should mention the other members of our team. Besides &lt;a href=&#34;https://www.linkedin.com/in/michal-maj116/&#34;&gt;me&lt;/a&gt; and &lt;a href=&#34;https://www.linkedin.com/in/joanna-bremer-95b885166/&#34;&gt;Joanna&lt;/a&gt; there were also &lt;a href=&#34;https://www.linkedin.com/in/kommisrud/&#34;&gt;Elisabeth Kommisrud&lt;/a&gt;, who is Joanna’s mentor and supervisor, and &lt;a href=&#34;https://www.linkedin.com/in/%C3%B8yvind-nordb%C3%B8-02b812230/&#34;&gt;Øyvind Nordbø&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We would also like to thank Hallstein Holen and his team at the &lt;a href=&#34;https://www.linkedin.com/company/geno/&#34;&gt;Geno&lt;/a&gt; performance testing station: Jan Tore Rosingholm, Erik Skogli, Sigmund Høibakken and Stein Marius Brumoen for their time and indispensable help with data collection.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;the-solution&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;The solution&lt;/h1&gt;
&lt;p&gt;To crack this case we used three computer vision algorithms and some fancy mathematics. Our solution can be summarized in these steps:&lt;/p&gt;
&lt;ol style=&#34;list-style-type: decimal&#34;&gt;
&lt;li&gt;Semantic segmentation of the scrotum&lt;/li&gt;
&lt;li&gt;Connected-component labeling (CCL) to remove segmentation artefacts&lt;/li&gt;
&lt;li&gt;Direct Linear Least Squares fitting of an ellipse to the predicted scrotum&lt;/li&gt;
&lt;li&gt;Approximation of the scrotal circumference&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In the beginning we had nothing except images of the bulls taken with a 3D camera, which records how far the object is from the lens. We decided to create segmentation masks for the scrotum and train a U-Net to predict the scrotum’s location.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://ars.els-cdn.com/content/image/1-s2.0-S2772375522000983-gr1.jpg&#34;
     alt=&#34;Segmented scrotum&#34;/&gt;&lt;/p&gt;
&lt;p&gt;For most of the images, the predicted segmentation mask contained one solid object, which was expected and desirable. For the remaining images, however, the predicted masks contained artefacts that formed a second, smaller object. To solve this problem, we used a connected-component labeling (CCL) algorithm (also known as blob extraction or region labeling) to count the solid objects in a predicted segmentation mask. If more than one object was found, only the object with the largest area was kept on the segmentation mask.&lt;/p&gt;
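&lt;p&gt;To give you an idea of how conceptually simple this cleaning step is, here is a minimal R sketch (not our exact production code; it assumes the Bioconductor package &lt;code&gt;EBImage&lt;/code&gt; and uses a toy binary mask):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(EBImage)

# Toy binary mask: one large object plus a small artefact
mask &amp;lt;- matrix(0, 6, 6)
mask[1:4, 1:4] &amp;lt;- 1 # the real object
mask[6, 6] &amp;lt;- 1 # artefact

labels &amp;lt;- bwlabel(mask) # label connected components
sizes &amp;lt;- table(labels[labels &amp;gt; 0]) # area of each component
largest &amp;lt;- as.integer(names(which.max(sizes)))
mask_clean &amp;lt;- (labels == largest) * 1 # only the largest object stays&lt;/code&gt;&lt;/pre&gt;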
&lt;p&gt;&lt;img src=&#34;https://ars.els-cdn.com/content/image/1-s2.0-S2772375522000983-gr3.jpg&#34;
     alt=&#34;Multiple objects as a result of segmentation&#34;/&gt;&lt;/p&gt;
&lt;p&gt;After the segmentation and cleaning phase, the Direct Linear Least Squares algorithm was used to fit an ellipse to the boundary of the segmented mask.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://ars.els-cdn.com/content/image/1-s2.0-S2772375522000983-gr4.jpg&#34;
     alt=&#34;Ellipse fitted to the predicted scrotum&#34;/&gt;&lt;/p&gt;
&lt;p&gt;In the final step we used a Padé approximation, combined with the distance and angle-per-pixel information, to calculate the final scrotal circumference.&lt;/p&gt;
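&lt;p&gt;The idea behind this last step can be sketched in a few lines of R. Note that this is just an illustration (a Ramanujan/Padé-type formula for the circumference of an ellipse with semi-axes &lt;code&gt;a&lt;/code&gt; and &lt;code&gt;b&lt;/code&gt;); in the real pipeline the semi-axes come from the fitted ellipse after converting pixels to centimeters:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;# Approximate circumference of an ellipse with semi-axes a and b
ellipse_circumference &amp;lt;- function(a, b) {
  h &amp;lt;- ((a - b) / (a + b))^2
  pi * (a + b) * (1 + 3 * h / (10 + sqrt(4 - 3 * h)))
}

# For a circle (a == b) the formula reduces to 2 * pi * r
ellipse_circumference(5, 5)
# [1] 31.41593&lt;/code&gt;&lt;/pre&gt;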
&lt;p&gt;If you want to know more about the process, results and validation of our method, check out our &lt;a href=&#34;https://www.sciencedirect.com/science/article/pii/S2772375522000983?dgcid=coauthor#fig0001&#34;&gt;paper&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
</description>
      
            <category>computer vision</category>
      
            <category>u-net</category>
      
            <category>semantic segmentation</category>
      
            <category>deep learning</category>
      
      
            <category>R</category>
      
            <category>Python</category>
      
    </item>
    
    <item>
      <title>Explaining predictions of Convolutional Neural Networks with &#39;sauron&#39; package.</title>
      <link>https://datascienceguts.com/2021/01/explaining-predictions-of-convolutional-neural-networks-with-sauron-package/</link>
      <pubDate>Sun, 10 Jan 2021 01:00:00 +0000</pubDate>
      
      <guid>https://datascienceguts.com/2021/01/explaining-predictions-of-convolutional-neural-networks-with-sauron-package/</guid>
      <description>&lt;p&gt;Explainable Artificial Intelligence, or &lt;strong&gt;XAI&lt;/strong&gt; for short, is a set of tools that helps us understand and interpret complicated &lt;strong&gt;“black box”&lt;/strong&gt; machine and deep learning models and their predictions. In my previous post I showed you a sneak peek of my newest package, &lt;strong&gt;sauron&lt;/strong&gt;, which allows you to explain the decisions of Convolutional Neural Networks. I am really glad to say that the beta version of &lt;strong&gt;sauron&lt;/strong&gt; is finally here!&lt;/p&gt;
&lt;div id=&#34;sauron&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Sauron&lt;/h1&gt;
&lt;p&gt;With &lt;a href=&#34;https://github.com/maju116/sauron&#34;&gt;sauron&lt;/a&gt; you can use Explainable Artificial Intelligence (XAI) methods to understand predictions made by Neural Networks in &lt;code&gt;tensorflow/keras&lt;/code&gt;. For the time being only Convolutional Neural Networks are supported, but this will change over time.&lt;/p&gt;
&lt;p&gt;You can install the latest version of &lt;code&gt;sauron&lt;/code&gt; with &lt;code&gt;remotes&lt;/code&gt;:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;remotes::install_github(&amp;quot;maju116/sauron&amp;quot;)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that in order to install &lt;code&gt;sauron&lt;/code&gt; you need to install the &lt;code&gt;keras&lt;/code&gt; and &lt;code&gt;tensorflow&lt;/code&gt; packages and &lt;code&gt;Tensorflow&lt;/code&gt; version &lt;code&gt;&amp;gt;= 2.0.0&lt;/code&gt; (&lt;code&gt;Tensorflow 1.x&lt;/code&gt; is not supported!).&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;quick-example-how-it-all-works&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Quick example: How does it all work?&lt;/h1&gt;
&lt;p&gt;To generate any explanations you will have to create an object of class &lt;code&gt;CNNexplainer&lt;/code&gt;. To do this you will need two things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;tensorflow/keras model&lt;/li&gt;
&lt;li&gt;image preprocessing function (optional)&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(tidyverse)
library(sauron)

model &amp;lt;- application_xception()
preprocessing_function &amp;lt;- xception_preprocess_input

explainer &amp;lt;- CNNexplainer$new(model = model,
                              preprocessing_function = preprocessing_function,
                              id = &amp;quot;imagenet_xception&amp;quot;)
explainer
# &amp;lt;CNNexplainer&amp;gt;
#   Public:
#     clone: function (deep = FALSE) 
#     explain: function (input_imgs_paths, class_index = NULL, methods = c(&amp;quot;V&amp;quot;, 
#     id: imagenet_xception
#     initialize: function (model, preprocessing_function, id = NULL) 
#     model: function (object, ...) 
#     preprocessing_function: function (x) 
#     show_available_methods: function () 
#   Private:
#     available_methods: tbl_df, tbl, data.frame&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To see available XAI methods for the &lt;code&gt;CNNexplainer&lt;/code&gt; object use:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;explainer$show_available_methods()
# # A tibble: 8 x 2
#   method name                  
#   &amp;lt;chr&amp;gt;  &amp;lt;chr&amp;gt;                 
# 1 V      Vanilla gradient      
# 2 GI     Gradient x Input      
# 3 SG     SmoothGrad            
# 4 SGI    SmoothGrad x Input    
# 5 IG     Integrated Gradients  
# 6 GB     Guided Backpropagation
# 7 OCC    Occlusion Sensitivity 
# 8 GGC    Guided Grad-CAM&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can explain predictions using the &lt;code&gt;explain&lt;/code&gt; method. You will need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;paths to the images for which you want to generate explanations.&lt;/li&gt;
&lt;li&gt;class indexes for which the explanations should be generated (optional; if set to &lt;code&gt;NULL&lt;/code&gt;, the class that maximizes the predicted probability will be found for each image).&lt;/li&gt;
&lt;li&gt;character vector with method names (optional; by default the explainer will use all methods).&lt;/li&gt;
&lt;li&gt;batch size (optional; by default the number of input images).&lt;/li&gt;
&lt;li&gt;additional arguments with settings for a specific method (optional).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As output you will get an object of class &lt;code&gt;CNNexplanations&lt;/code&gt;:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;input_imgs_paths &amp;lt;- list.files(system.file(&amp;quot;extdata&amp;quot;, &amp;quot;images&amp;quot;, package = &amp;quot;sauron&amp;quot;), full.names = TRUE)

explanations &amp;lt;- explainer$explain(input_imgs_paths = input_imgs_paths,
                                  class_index = NULL,
                                  batch_size = 1,
                                  methods = c(&amp;quot;V&amp;quot;, &amp;quot;IG&amp;quot;,  &amp;quot;GB&amp;quot;, &amp;quot;GGC&amp;quot;),
                                  steps = 10, # Number of Integrated Gradients steps
                                  grayscale = FALSE # RGB or Gray gradients
)

explanations
# CNNexplanations object contains explanations for 3 images for 1 model.&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can get the raw explanations and metadata from a &lt;code&gt;CNNexplanations&lt;/code&gt; object using:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;explanations$get_metadata()
# $multimodel_explanations
# [1] FALSE
# 
# $ids
# [1] &amp;quot;imagenet_xception&amp;quot;
# 
# $n_models
# [1] 1
# 
# $target_sizes
# $target_sizes[[1]]
# [1] 299 299   3
# 
# 
# $methods
# [1] &amp;quot;V&amp;quot;   &amp;quot;IG&amp;quot;  &amp;quot;GB&amp;quot;  &amp;quot;GGC&amp;quot;
# 
# $input_imgs_paths
# [1] &amp;quot;/home/maju116/R/x86_64-pc-linux-gnu-library/4.0/sauron/extdata/images/cat_and_dog.jpg&amp;quot;
# [2] &amp;quot;/home/maju116/R/x86_64-pc-linux-gnu-library/4.0/sauron/extdata/images/cat.jpeg&amp;quot;       
# [3] &amp;quot;/home/maju116/R/x86_64-pc-linux-gnu-library/4.0/sauron/extdata/images/zebras.jpg&amp;quot;     
# 
# $n_imgs
# [1] 3

raw_explanations &amp;lt;- explanations$get_explanations()
str(raw_explanations)
# List of 1
#  $ imagenet_xception:List of 5
#   ..$ Input: num [1:3, 1:299, 1:299, 1:3] 147 134 170 147 134 168 144 134 170 144 ...
#   .. ..- attr(*, &amp;quot;dimnames&amp;quot;)=List of 4
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   ..$ V    : int [1:3, 1:299, 1:299, 1:3] 0 0 0 0 0 0 0 0 0 0 ...
#   .. ..- attr(*, &amp;quot;dimnames&amp;quot;)=List of 4
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   ..$ IG   : int [1:3, 1:299, 1:299, 1:3] 0 0 0 0 0 0 0 0 0 0 ...
#   .. ..- attr(*, &amp;quot;dimnames&amp;quot;)=List of 4
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   ..$ GB   : int [1:3, 1:299, 1:299, 1:3] 0 0 2 0 0 111 0 0 28 0 ...
#   .. ..- attr(*, &amp;quot;dimnames&amp;quot;)=List of 4
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   ..$ GGC  : num [1:3, 1:299, 1:299, 1] 7.13e-05 0.00 4.55e-04 7.13e-05 0.00 ...
#   .. ..- attr(*, &amp;quot;dimnames&amp;quot;)=List of 4
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL
#   .. .. ..$ : NULL&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To visualize and save generated explanations use:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;explanations$plot_and_save(combine_plots = TRUE, # Show all explanations side by side on one image?
                           output_path = NULL, # Where to save output(s)
                           plot = TRUE # Should output be plotted?
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2021-01-10-sauron-beta_files/figure-html/unnamed-chunk-6-1.png&#34; width=&#34;1152&#34; /&gt;&lt;/p&gt;
&lt;p&gt;If you want to compare two or more different models you can do it by combining &lt;code&gt;CNNexplainer&lt;/code&gt; objects into &lt;code&gt;CNNexplainers&lt;/code&gt; object:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;model2 &amp;lt;- application_densenet121()
preprocessing_function2 &amp;lt;- densenet_preprocess_input

explainer2 &amp;lt;- CNNexplainer$new(model = model2,
                               preprocessing_function = preprocessing_function2,
                               id = &amp;quot;imagenet_densenet121&amp;quot;)

model3 &amp;lt;- application_densenet201()
preprocessing_function3 &amp;lt;- densenet_preprocess_input

explainer3 &amp;lt;- CNNexplainer$new(model = model3,
                               preprocessing_function = preprocessing_function3,
                               id = &amp;quot;imagenet_densenet201&amp;quot;)

explainers &amp;lt;- CNNexplainers$new(explainer, explainer2, explainer3)

explanations123 &amp;lt;- explainers$explain(input_imgs_paths = input_imgs_paths,
                                      class_index = NULL,
                                      batch_size = 1,
                                      methods = c(&amp;quot;V&amp;quot;, &amp;quot;IG&amp;quot;,  &amp;quot;GB&amp;quot;, &amp;quot;GGC&amp;quot;),
                                      steps = 10,
                                      grayscale = FALSE
)

explanations123$get_metadata()
# $multimodel_explanations
# [1] TRUE
# 
# $ids
# [1] &amp;quot;imagenet_xception&amp;quot;    &amp;quot;imagenet_densenet121&amp;quot; &amp;quot;imagenet_densenet201&amp;quot;
# 
# $n_models
# [1] 3
# 
# $target_sizes
# $target_sizes[[1]]
# [1] 299 299   3
# 
# $target_sizes[[2]]
# [1] 224 224   3
# 
# $target_sizes[[3]]
# [1] 224 224   3
# 
# 
# $methods
# [1] &amp;quot;V&amp;quot;   &amp;quot;IG&amp;quot;  &amp;quot;GB&amp;quot;  &amp;quot;GGC&amp;quot;
# 
# $input_imgs_paths
# [1] &amp;quot;/home/maju116/R/x86_64-pc-linux-gnu-library/4.0/sauron/extdata/images/cat_and_dog.jpg&amp;quot;
# [2] &amp;quot;/home/maju116/R/x86_64-pc-linux-gnu-library/4.0/sauron/extdata/images/cat.jpeg&amp;quot;       
# [3] &amp;quot;/home/maju116/R/x86_64-pc-linux-gnu-library/4.0/sauron/extdata/images/zebras.jpg&amp;quot;     
# 
# $n_imgs
# [1] 3

explanations123$plot_and_save(combine_plots = TRUE,
                              output_path = NULL,
                              plot = TRUE
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2021-01-10-sauron-beta_files/figure-html/unnamed-chunk-7-1.png&#34; width=&#34;1152&#34; /&gt;&lt;/p&gt;
&lt;p&gt;Alternatively, if you have already generated some &lt;code&gt;CNNexplanations&lt;/code&gt; objects (for the same images and using the same methods), you can combine them:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;explanations2 &amp;lt;- explainer2$explain(input_imgs_paths = input_imgs_paths,
                                    class_index = NULL,
                                    batch_size = 1,
                                    methods = c(&amp;quot;V&amp;quot;, &amp;quot;IG&amp;quot;,  &amp;quot;GB&amp;quot;, &amp;quot;GGC&amp;quot;),
                                    steps = 10,
                                    grayscale = FALSE
)

explanations3 &amp;lt;- explainer3$explain(input_imgs_paths = input_imgs_paths,
                                    class_index = NULL,
                                    batch_size = 1,
                                    methods = c(&amp;quot;V&amp;quot;, &amp;quot;IG&amp;quot;,  &amp;quot;GB&amp;quot;, &amp;quot;GGC&amp;quot;),
                                    steps = 10,
                                    grayscale = FALSE
)

explanations$combine(explanations2, explanations3)

explanations$get_metadata()
# $multimodel_explanations
# [1] TRUE
# 
# $ids
# [1] &amp;quot;imagenet_xception&amp;quot;    &amp;quot;imagenet_densenet121&amp;quot; &amp;quot;imagenet_densenet201&amp;quot;
# 
# $n_models
# [1] 3
# 
# $target_sizes
# $target_sizes[[1]]
# [1] 299 299   3
# 
# $target_sizes[[2]]
# [1] 224 224   3
# 
# $target_sizes[[3]]
# [1] 224 224   3
# 
# 
# $methods
# [1] &amp;quot;V&amp;quot;   &amp;quot;IG&amp;quot;  &amp;quot;GB&amp;quot;  &amp;quot;GGC&amp;quot;
# 
# $input_imgs_paths
# [1] &amp;quot;/home/maju116/R/x86_64-pc-linux-gnu-library/4.0/sauron/extdata/images/cat_and_dog.jpg&amp;quot;
# [2] &amp;quot;/home/maju116/R/x86_64-pc-linux-gnu-library/4.0/sauron/extdata/images/cat.jpeg&amp;quot;       
# [3] &amp;quot;/home/maju116/R/x86_64-pc-linux-gnu-library/4.0/sauron/extdata/images/zebras.jpg&amp;quot;     
# 
# $n_imgs
# [1] 3

explanations$plot_and_save(combine_plots = TRUE,
                           output_path = NULL,
                           plot = TRUE
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2021-01-10-sauron-beta_files/figure-html/unnamed-chunk-8-1.png&#34; width=&#34;1152&#34; /&gt;&lt;/p&gt;
&lt;/div&gt;
</description>
      
            <category>computer vision</category>
      
            <category>xai</category>
      
            <category>deep learning</category>
      
            <category>grad-cam</category>
      
            <category>guided backpropagation</category>
      
      
            <category>R</category>
      
    </item>
    
    <item>
      <title>Sneak peek into &#39;sauron&#39; package - XAI for Convolutional Neural Networks.</title>
      <link>https://datascienceguts.com/2020/11/sneak-peek-into-sauron-package-xai-for-convolutional-neural-networks/</link>
      <pubDate>Sun, 08 Nov 2020 01:00:00 +0000</pubDate>
      
      <guid>https://datascienceguts.com/2020/11/sneak-peek-into-sauron-package-xai-for-convolutional-neural-networks/</guid>
      <description>&lt;p&gt;Explainable Artificial Intelligence, or &lt;strong&gt;XAI&lt;/strong&gt; for short, is a set of tools that helps us understand and interpret complicated &lt;strong&gt;“black box”&lt;/strong&gt; machine and deep learning models and their predictions. Today I would like to show you a sneak peek of my newest package called &lt;strong&gt;sauron&lt;/strong&gt;, which allows you to explain decisions of Convolutional Neural Networks.&lt;/p&gt;
&lt;div id=&#34;what-exactly-does-cnn-see&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;What exactly does CNN see?&lt;/h1&gt;
&lt;p&gt;Let’s start with the basics. We’re going to need a model, test images for which we want to generate explanations, and an image preprocessing function (if needed).&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(sauron)

input_imgs_paths &amp;lt;- list.files(system.file(&amp;quot;extdata&amp;quot;, &amp;quot;images&amp;quot;, package = &amp;quot;sauron&amp;quot;), full.names = TRUE)
model &amp;lt;- application_xception()
preprocessing_function &amp;lt;- xception_preprocess_input&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are tons of different methods to explain CNNs, but for now &lt;code&gt;sauron&lt;/code&gt; gives you access to 6 &lt;strong&gt;gradient based&lt;/strong&gt; ones. You can check the full list in &lt;code&gt;sauron_available_methods&lt;/code&gt;:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;sauron_available_methods
# # A tibble: 6 x 2
#   method name                  
#   &amp;lt;chr&amp;gt;  &amp;lt;chr&amp;gt;                 
# 1 V      Vanilla gradient      
# 2 GI     Gradient x Input      
# 3 SG     SmoothGrad            
# 4 SGI    SmoothGrad x Input    
# 5 IG     Integrated Gradients  
# 6 GB     Guided Backpropagation&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The package is still in development, so I won’t talk about the theory behind those methods today. I will leave that for another post (or, more probably, multiple posts :smile: ).&lt;/p&gt;
&lt;p&gt;To generate any set of explanations, simply use the &lt;code&gt;generate_explanations&lt;/code&gt; function. Besides the image paths, model and optional preprocessing function, you have to pass the class indexes for which explanations should be generated (&lt;code&gt;NULL&lt;/code&gt; means selecting the class with the highest probability for each image), some method-specific arguments, and whether you want to generate grayscale or RGB explanation maps.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;explanations &amp;lt;- generate_explanations(
  model,
  input_imgs_paths,
  preprocessing_function,
  class_index = NULL,
  methods = sauron_available_methods$method,
  num_samples = 5, # SmoothGrad samples
  noise_sd = 0.1, # SmoothGrad noise standard deviation
  steps = 10, # Integrated Gradients steps
  grayscale = FALSE)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we can plot our results:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;plot_explanations(explanations, FALSE)
# $Input&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-1.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $V&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-2.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $GI&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-3.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $SG&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-4.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $SGI&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-5.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $IG&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-6.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $GB&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-7.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $Input&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-8.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $V&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-9.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $GI&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-10.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $SG&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-11.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $SGI&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-12.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $IG&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-13.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 
# $GB&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-11-08-sauron-sneak-peek_files/figure-html/plots-14.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;what-next&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;What next?&lt;/h1&gt;
&lt;p&gt;First of all, as I said in the beginning, &lt;code&gt;sauron&lt;/code&gt; is still in development; it should be available at the end of 2020. So if this topic is interesting to you, be sure to visit my &lt;a href=&#34;https://github.com/maju116/&#34;&gt;github&lt;/a&gt; from time to time.&lt;/p&gt;
&lt;p&gt;Second of all, I’m planning to expand &lt;code&gt;sauron&lt;/code&gt;’s capabilities. The first step will be to add methods like &lt;strong&gt;Grad-CAM&lt;/strong&gt;, &lt;strong&gt;Guided Grad-CAM&lt;/strong&gt;, &lt;strong&gt;Occlusion&lt;/strong&gt; and &lt;strong&gt;Layer-wise Relevance Propagation (LRP)&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
</description>
      
            <category>computer vision</category>
      
            <category>xai</category>
      
            <category>deep learning</category>
      
      
            <category>R</category>
      
    </item>
    
    <item>
      <title>Nuclei segmentation in R with Platypus.</title>
      <link>https://datascienceguts.com/2020/10/nuclei-segmentation-in-r-with-platypus/</link>
      <pubDate>Fri, 16 Oct 2020 01:00:00 +0000</pubDate>
      
      <guid>https://datascienceguts.com/2020/10/nuclei-segmentation-in-r-with-platypus/</guid>
      <description>&lt;p&gt;In my &lt;a href=&#34;https://datascienceguts.com/2020/10/platypus-r-package-for-object-detection-and-image-segmentation/&#34;&gt;previous&lt;/a&gt; post I introduced you to my latest project, &lt;a href=&#34;https://github.com/maju116/platypus&#34;&gt;platypus&lt;/a&gt; - an R package for &lt;strong&gt;object detection&lt;/strong&gt; and &lt;strong&gt;image segmentation&lt;/strong&gt;. This time I will go into more detail and show you how to use it on biomedical data.&lt;/p&gt;
&lt;div id=&#34;the-data&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;The data&lt;/h1&gt;
&lt;p&gt;Today we will work on the &lt;a href=&#34;https://www.kaggle.com/c/data-science-bowl-2018/data&#34;&gt;2018 Data Science Bowl&lt;/a&gt; dataset.
You can download the images and masks directly from the url or using the &lt;code&gt;Kaggle API&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;kaggle competitions download -c data-science-bowl-2018&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;After downloading the data, unpack it and move it to your preferred destination. For this example we are only interested in the &lt;code&gt;stage1_train&lt;/code&gt; and &lt;code&gt;stage1_test&lt;/code&gt; subdirectories, so you can delete the other files if you want.&lt;/p&gt;
&lt;p&gt;Before we start, let’s investigate a little bit.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(tidyverse)
library(platypus)
library(abind)
library(here)

# Print current working directory
here()

# [1] &amp;quot;/home/maju116/Desktop/PROJECTS/Moje Projekty/platypus&amp;quot;

# Set directories with the data and models
data_path &amp;lt;- here(&amp;quot;examples/data/data-science-bowl-2018/&amp;quot;)
models_path &amp;lt;- here(&amp;quot;examples/models/&amp;quot;)

# Investigate one instance of data (image + masks)
sample_image_path &amp;lt;- here(&amp;quot;examples/data/data-science-bowl-2018/stage1_train/00071198d059ba7f5914a526d124d28e6d010c92466da21d4a04cd5413362552/&amp;quot;)

list.files(sample_image_path, full.names = TRUE) %&amp;gt;%
  set_names(basename(.)) %&amp;gt;%
  map(~ list.files(.))

# $images
# [1] &amp;quot;00071198d059ba7f5914a526d124d28e6d010c92466da21d4a04cd5413362552.png&amp;quot;
# 
# $masks
#  [1] &amp;quot;07a9bf1d7594af2763c86e93f05d22c4d5181353c6d3ab30a345b908ffe5aadc.png&amp;quot;
#  [2] &amp;quot;0e548d0af63ab451616f082eb56bde13eb71f73dfda92a03fbe88ad42ebb4881.png&amp;quot;
#  [3] &amp;quot;0ea1f9e30124e4aef1407af239ff42fd6f5753c09b4c5cac5d08023c328d7f05.png&amp;quot;
#  [4] &amp;quot;0f5a3252d05ecdf453bdd5e6ad5322c454d8ec2d13ef0f0bf45a6f6db45b5639.png&amp;quot;
#  [5] &amp;quot;2c47735510ef91a11fde42b317829cee5fc04d05a797b90008803d7151951d58.png&amp;quot;
#  [6] &amp;quot;4afa39f2a05f9884a5ff030d678c6142379f99a5baaf4f1ba7835a639cb50751.png&amp;quot;
#  [7] &amp;quot;4bc58dbdefb2777392361d8b2d686b1cc14ca310e009b79763af46e853e6c6ac.png&amp;quot;
#  [8] &amp;quot;4e3b49fb14877b63704881a923365b68c1def111c58f23c66daa49fef4b632bf.png&amp;quot;
#  [9] &amp;quot;5522143fa8723b66b1e0b25331047e6ae6eeec664f7c8abeba687e0de0f9060a.png&amp;quot;
# [10] &amp;quot;58656859fb9c13741eda9bc753c3415b78d1135ee852a194944dee88ab70acf4.png&amp;quot;
# [11] &amp;quot;6442251746caac8fc255e6a22b41282ffcfabebadbd240ee0b604808ff9e3383.png&amp;quot;
# [12] &amp;quot;7ff04129f8b6d9aaf47e062eadce8b3fcff8b4a29ec5ad92bca926ac2b7263d2.png&amp;quot;
# [13] &amp;quot;8bbec3052bcec900455e8c7728d03facb46c880334bcc4fb0d1d066dd6c7c5d2.png&amp;quot;
# [14] &amp;quot;9576fe25f4a510f12eecbabfa2e0237b98d8c2622b9e13b9a960e2afe6da844e.png&amp;quot;
# [15] &amp;quot;95deddb72b845b1a1f81a282c86e666045da98344eaa2763d67e2ab80bc2e5c3.png&amp;quot;
# [16] &amp;quot;a1b0cdb21f341af17d86f23596df4f02a6b9c4e0d59a7f74aaf28b9e408a4e4c.png&amp;quot;
# [17] &amp;quot;aa154c70e0d82669e9e492309bd00536d2b0f6eeec1210014bbafbfc554b377c.png&amp;quot;
# [18] &amp;quot;acba6646e8250aab8865cd652dfaa7c56f643267ea2e774aee97dc2342d879d6.png&amp;quot;
# [19] &amp;quot;ae00049dc36a1e5ffafcdeadb44b18a9cd6dfd459ee302ab041337529bd41cf2.png&amp;quot;
# [20] &amp;quot;af4d6ff17fa7b41de146402e12b3bab1f1fe3c1e6f37da81a54e002168b1e7dd.png&amp;quot;
# [21] &amp;quot;b0cbc2c553f9c4ac2191395236f776143fb3a28fb77b81d3d258a2f45361ca89.png&amp;quot;
# [22] &amp;quot;b6fc3b5403de8f393ca368553566eaf03d5c07148539bc6141a486f1d185f677.png&amp;quot;
# [23] &amp;quot;be98de8a7ba7d5d733b1212ae957f37b5b69d0bf350b9a5a25ba4346c29e49f7.png&amp;quot;
# [24] &amp;quot;cb53899ef711bce04b209829c61958abdb50aa759f3f896eb7ed868021c22fb4.png&amp;quot;
# [25] &amp;quot;d5024b272cb39f9ef2753e2f31344f42dd17c0e2311c4927946bc5008d295d2e.png&amp;quot;
# [26] &amp;quot;f6eee5c69f54807923de1ceb1097fc3aa902a6b20d846f111e806988a4269ed0.png&amp;quot;
# [27] &amp;quot;ffae764df84788e8047c0942f55676c9663209f65da943814c6b3aca78d8e7f7.png&amp;quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, each image has its own directory, which contains two subdirectories:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;images&lt;/strong&gt; - contains the original image that will be the input to the neural network&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;masks&lt;/strong&gt; - contains &lt;strong&gt;one or more&lt;/strong&gt; segmentation masks. A &lt;strong&gt;segmentation mask&lt;/strong&gt; simply tells us which pixel belongs to which class, and this is what we will try to predict.&lt;/li&gt;
&lt;/ul&gt;
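Since the per-nucleus masks will later be merged into a single mask, it may help to see what that merge amounts to. A minimal base-R sketch with toy 2x2 masks (hypothetical data, not the platypus implementation):

```r
# Two toy binary masks, one per nucleus (1 = nucleus pixel, 0 = background)
mask1 <- matrix(c(1, 0,
                  0, 0), nrow = 2, byrow = TRUE)
mask2 <- matrix(c(0, 0,
                  0, 1), nrow = 2, byrow = TRUE)

# Pixel-wise maximum: a pixel is "nuclei" if it belongs to any single mask
merged <- pmax(mask1, mask2)
merged
#      [,1] [,2]
# [1,]    1    0
# [2,]    0    1
```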
&lt;p&gt;For the modeling, besides the &lt;strong&gt;train&lt;/strong&gt; and &lt;strong&gt;test&lt;/strong&gt; sets, we will also need a &lt;strong&gt;validation&lt;/strong&gt; set (no one is forcing you, but it’s good practice!):&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;train_path &amp;lt;- here(&amp;quot;examples/data/data-science-bowl-2018/stage1_train/&amp;quot;)
test_path &amp;lt;- here(&amp;quot;examples/data/data-science-bowl-2018/stage1_test/&amp;quot;)
validation_path &amp;lt;- here(&amp;quot;examples/data/data-science-bowl-2018/stage1_validation/&amp;quot;)

if (!dir.exists(validation_path)) {
  dir.create(validation_path)
  # List train images
  train_samples &amp;lt;- list.files(train_path, full.names = TRUE)
  set.seed(1234)
  # Select 10% for validation
  validation_samples &amp;lt;- sample(train_samples, round(0.1 * length(train_samples)))
  validation_samples %&amp;gt;%
    walk(~ system(paste0(&amp;#39;mv &amp;quot;&amp;#39;, ., &amp;#39;&amp;quot; &amp;quot;&amp;#39;, validation_path, &amp;#39;&amp;quot;&amp;#39;)))
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;div id=&#34;semantic-segmentation-with-u-net&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Semantic segmentation with U-Net&lt;/h1&gt;
&lt;p&gt;Since we now know something about our data, we can move on to the modeling part. We will start by selecting the architecture of the neural network. For semantic segmentation there are a few different choices, like &lt;strong&gt;U-Net&lt;/strong&gt;, &lt;strong&gt;Fast-FCN&lt;/strong&gt;, &lt;strong&gt;DeepLab&lt;/strong&gt; and many more. For the time being, the &lt;a href=&#34;https://github.com/maju116/platypus&#34;&gt;platypus&lt;/a&gt; package gives you access only to the &lt;strong&gt;U-Net&lt;/strong&gt; architecture.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://i2.wp.com/neptune.ai/wp-content/uploads/U-net-architecture.png?ssl=1&#34; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;U-Net&lt;/strong&gt; was originally developed for biomedical image segmentation. As you can see in the picture above, the architecture is very similar to an autoencoder and it looks like the letter &lt;strong&gt;U&lt;/strong&gt;, hence the name. The model is composed of two parts, and each part has some number of &lt;strong&gt;convolutional blocks&lt;/strong&gt; (3 in the image above). The number of blocks will be a hyperparameter in our model.&lt;/p&gt;
&lt;p&gt;To build a &lt;strong&gt;U-Net&lt;/strong&gt; model in &lt;code&gt;platypus&lt;/code&gt; use the &lt;code&gt;u_net&lt;/code&gt; function. You have to specify:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;number of convolutional blocks,&lt;/li&gt;
&lt;li&gt;input image height and width - must be in the form &lt;strong&gt;2^N&lt;/strong&gt;,&lt;/li&gt;
&lt;li&gt;whether the input image will be loaded as grayscale or RGB,&lt;/li&gt;
&lt;li&gt;number of classes - in our case we have only 2 (background and nuclei),&lt;/li&gt;
&lt;li&gt;additional CNN arguments like the number of filters and the dropout rate.&lt;/li&gt;
&lt;/ul&gt;
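The 2^N requirement comes from the pooling layers: each block halves the feature map, so with 4 blocks the input must survive 4 clean halvings. A quick sanity check in base R (plain arithmetic, not platypus code):

```r
net_size <- 256
blocks <- 4
# Spatial size of the feature map after each 2x2 max pooling step
net_size / 2^(0:blocks)
# [1] 256 128  64  32  16
```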
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;blocks &amp;lt;- 4 # Number of U-Net convolutional blocks
n_class &amp;lt;- 2 # Number of classes
net_h &amp;lt;- 256 # Must be in a form of 2^N
net_w &amp;lt;- 256 # Must be in a form of 2^N
grayscale &amp;lt;- FALSE # Will input image be in grayscale or RGB

DCB2018_u_net &amp;lt;- u_net(
  net_h = net_h,
  net_w = net_w,
  grayscale = grayscale,
  blocks = blocks,
  n_class = n_class,
  filters = 16,
  dropout = 0.1,
  batch_normalization = TRUE,
  kernel_initializer = &amp;quot;he_normal&amp;quot;
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After that it’s time to select the &lt;strong&gt;loss&lt;/strong&gt; and additional metrics. Because semantic segmentation is, in essence, classification for each pixel instead of the whole image, you can use &lt;strong&gt;categorical cross-entropy&lt;/strong&gt; as the loss function and &lt;strong&gt;accuracy&lt;/strong&gt; as a metric. Another common choice, available in &lt;code&gt;platypus&lt;/code&gt;, would be the &lt;a href=&#34;https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient&#34;&gt;&lt;strong&gt;dice coefficient/loss&lt;/strong&gt;&lt;/a&gt;. You can think of it as an &lt;strong&gt;F1 score&lt;/strong&gt; for semantic segmentation.&lt;/p&gt;
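To make the dice coefficient concrete, here is a minimal base-R version for flattened binary masks (a hypothetical helper for illustration, not the platypus implementation; the `smooth` term guards against division by zero on empty masks):

```r
dice_coeff <- function(y_true, y_pred, smooth = 1) {
  # Overlap between the two masks
  intersection <- sum(y_true * y_pred)
  (2 * intersection + smooth) / (sum(y_true) + sum(y_pred) + smooth)
}

y_true <- c(1, 1, 0, 0)  # ground-truth mask, flattened
y_pred <- c(1, 0, 0, 0)  # predicted mask, flattened
dice_coeff(y_true, y_pred)
# [1] 0.75
```

The dice loss is then simply `1 - dice_coeff`, so minimizing it maximizes the overlap between the predicted and true masks.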
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;DCB2018_u_net %&amp;gt;%
  compile(
    optimizer = optimizer_adam(lr = 1e-3),
    loss = loss_dice(),
    metrics = metric_dice_coeff()
  )&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next step will be data ingestion. As you remember, we have a separate directory and multiple masks for each image. That’s not a problem for &lt;code&gt;platypus&lt;/code&gt;! You can ingest data using the &lt;code&gt;segmentation_generator&lt;/code&gt; function. The first argument to specify is the directory with all the images and masks. To tell &lt;code&gt;platypus&lt;/code&gt; that it has to load images and masks from separate directories for each data sample, specify the argument &lt;code&gt;mode = &#34;nested_dirs&#34;&lt;/code&gt;. Additionally, you can set the images/masks subdirectory names using the &lt;code&gt;subdirs&lt;/code&gt; argument. &lt;code&gt;platypus&lt;/code&gt; will automatically merge the multiple masks for each image, but we have to tell it how to recognize which pixel belongs to which class. In the segmentation masks each class is represented by a specific RGB value. In our case we have only black (R = 0, G = 0, B = 0) pixels for the background and white (R = 255, G = 255, B = 255) pixels for the nuclei. To tell &lt;code&gt;platypus&lt;/code&gt; how to recognize classes in the segmentation masks, use the &lt;code&gt;colormap&lt;/code&gt; argument.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;binary_colormap

# [[1]]
# [1] 0 0 0
# 
# [[2]]
# [1] 255 255 255

train_DCB2018_generator &amp;lt;- segmentation_generator(
  path = train_path, # directory with images and masks
  mode = &amp;quot;nested_dirs&amp;quot;, # Each image with masks in separate folder
  colormap = binary_colormap,
  only_images = FALSE,
  net_h = net_h,
  net_w = net_w,
  grayscale = FALSE,
  scale = 1 / 255,
  batch_size = 32,
  shuffle = TRUE,
  subdirs = c(&amp;quot;images&amp;quot;, &amp;quot;masks&amp;quot;) # Names of subdirs with images and masks
)

# 603 images with corresponding masks detected!
# Set &amp;#39;steps_per_epoch&amp;#39; to: 19

validation_DCB2018_generator &amp;lt;- segmentation_generator(
  path = validation_path, # directory with images and masks
  mode = &amp;quot;nested_dirs&amp;quot;, # Each image with masks in separate folder
  colormap = binary_colormap,
  only_images = FALSE,
  net_h = net_h,
  net_w = net_w,
  grayscale = FALSE,
  scale = 1 / 255,
  batch_size = 32,
  shuffle = TRUE,
  subdirs = c(&amp;quot;images&amp;quot;, &amp;quot;masks&amp;quot;) # Names of subdirs with images and masks
)

# 67 images with corresponding masks detected!
# Set &amp;#39;steps_per_epoch&amp;#39; to: 3&lt;/code&gt;&lt;/pre&gt;
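The `steps_per_epoch` values suggested by the generators are just the number of batches needed to see every sample once (simple arithmetic, shown here for clarity):

```r
batch_size <- 32
ceiling(603 / batch_size)  # training steps per epoch
# [1] 19
ceiling(67 / batch_size)   # validation steps
# [1] 3
```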
&lt;p&gt;We can now fit the model.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;history &amp;lt;- DCB2018_u_net %&amp;gt;%
  fit_generator(
    train_DCB2018_generator,
    epochs = 20,
    steps_per_epoch = 19,
    validation_data = validation_DCB2018_generator,
    validation_steps = 3,
    callbacks = list(callback_model_checkpoint(
      filepath = file.path(models_path, &amp;quot;DSB2018_w.hdf5&amp;quot;),
      save_best_only = TRUE,
      save_weights_only = TRUE,
      monitor = &amp;quot;dice_coeff&amp;quot;,
      mode = &amp;quot;max&amp;quot;,
      verbose = 1)
    )
  )&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And calculate predictions for the new images. Our model will return a 4-dimensional array (number of images, height, width, number of classes). Each pixel will have N probabilities, where N is the number of classes. To transform the raw predictions into a segmentation map (by selecting the class with the maximum probability for each pixel) you can use the &lt;code&gt;get_masks&lt;/code&gt; function.&lt;/p&gt;
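Conceptually, this is a per-pixel argmax. A toy base-R sketch on a 2x1 "image" with two classes (illustrative only, not the platypus source):

```r
# height x width x classes array of predicted probabilities
probs <- array(c(0.9, 0.2,   # class 1 (background)
                 0.1, 0.8),  # class 2 (nuclei)
               dim = c(2, 1, 2))

# For every pixel, pick the class with the highest probability
pred_class <- apply(probs, c(1, 2), which.max)
pred_class
#      [,1]
# [1,]    1
# [2,]    2
```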
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;test_DCB2018_generator &amp;lt;- segmentation_generator(
  path = test_path,
  mode = &amp;quot;nested_dirs&amp;quot;,
  colormap = binary_colormap,
  only_images = TRUE,
  net_h = net_h,
  net_w = net_w,
  grayscale = FALSE,
  scale = 1 / 255,
  batch_size = 32,
  shuffle = FALSE,
  subdirs = c(&amp;quot;images&amp;quot;, &amp;quot;masks&amp;quot;)
)

# 65 images detected!
# Set &amp;#39;steps_per_epoch&amp;#39; to: 3

test_preds &amp;lt;- predict_generator(DCB2018_u_net, test_DCB2018_generator, 3)
dim(test_preds)

# [1]  65 256 256   2

test_masks &amp;lt;- get_masks(test_preds, binary_colormap)
dim(test_masks[[1]])

# [1] 256 256   3&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To visualize the predicted masks on top of the original images you can use the &lt;code&gt;plot_masks&lt;/code&gt; function.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;test_imgs_paths &amp;lt;- create_images_masks_paths(test_path, &amp;quot;nested_dirs&amp;quot;, FALSE, c(&amp;quot;images&amp;quot;, &amp;quot;masks&amp;quot;), &amp;quot;;&amp;quot;)$images_paths

plot_masks(
  images_paths = test_imgs_paths[1:4],
  masks = test_masks[1:4],
  labels = c(&amp;quot;background&amp;quot;, &amp;quot;nuclei&amp;quot;),
  colormap = binary_colormap
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/images/nuclei_segmentation/nuclei_segmentation1.png&#34; alt=&#34;x&#34; /&gt;
&lt;img src=&#34;https://datascienceguts.com/images/nuclei_segmentation/nuclei_segmentation2.png&#34; alt=&#34;x&#34; /&gt;
&lt;img src=&#34;https://datascienceguts.com/images/nuclei_segmentation/nuclei_segmentation3.png&#34; alt=&#34;x&#34; /&gt;
&lt;img src=&#34;https://datascienceguts.com/images/nuclei_segmentation/nuclei_segmentation4.png&#34; alt=&#34;x&#34; /&gt;&lt;/p&gt;
&lt;/div&gt;
</description>
      
            <category>computer vision</category>
      
            <category>u-net</category>
      
            <category>semantic segmentation</category>
      
            <category>deep learning</category>
      
      
            <category>R</category>
      
    </item>
    
    <item>
      <title>Platypus - R package for object detection and image segmentation.</title>
      <link>https://datascienceguts.com/2020/10/platypus-r-package-for-object-detection-and-image-segmentation/</link>
      <pubDate>Tue, 06 Oct 2020 01:00:00 +0000</pubDate>
      
      <guid>https://datascienceguts.com/2020/10/platypus-r-package-for-object-detection-and-image-segmentation/</guid>
      <description>&lt;div id=&#34;computer-vision-tasks&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Computer Vision tasks&lt;/h1&gt;
&lt;p&gt;&lt;strong&gt;Computer Vision&lt;/strong&gt;, or &lt;strong&gt;CV&lt;/strong&gt; for short, is a field of computer science focused on the development of techniques that help computers understand the content of digital images in a way similar to human understanding. There are a lot of different computer vision tasks, but today I want to focus only on four basic ones.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://miro.medium.com/max/875/0*-cQJQHHUcUBb8V5-.png&#34;
     alt=&#34;Computer Vision tasks&#34;/&gt;&lt;/p&gt;
&lt;p&gt;Basic Computer Vision tasks:&lt;/p&gt;
&lt;ol style=&#34;list-style-type: decimal&#34;&gt;
&lt;li&gt;&lt;strong&gt;Image classification&lt;/strong&gt; - in this task we want to compute the probability (or probabilities) that the input image belongs to a particular &lt;strong&gt;class&lt;/strong&gt;. It can be performed with &lt;strong&gt;Convolutional Neural Networks&lt;/strong&gt; using the &lt;code&gt;keras&lt;/code&gt; package.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Semantic segmentation&lt;/strong&gt; - very similar to image classification, but instead of classifying the whole image, we want to classify &lt;strong&gt;each pixel&lt;/strong&gt; of the image. Note that we are not saying anything about the location of the object.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Object detection&lt;/strong&gt; - we want to classify and locate objects on the input image. Object localization is typically indicated by specifying a tightly cropped &lt;strong&gt;bounding box&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Instance segmentation&lt;/strong&gt; - a combination of semantic segmentation and object detection. As in semantic segmentation, we want to assign each pixel to a class, but we also want to distinguish between different objects of the same class.&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;div id=&#34;platypus&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Platypus&lt;/h1&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/maju116/platypus&#34;&gt;&lt;code&gt;platypus&lt;/code&gt;&lt;/a&gt; is an R package for object detection and semantic segmentation. Currently using &lt;code&gt;platypus&lt;/code&gt; you can perform:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;multi-class semantic segmentation using &lt;strong&gt;U-Net&lt;/strong&gt; architecture&lt;/li&gt;
&lt;li&gt;multi-class object detection using &lt;strong&gt;YOLOv3&lt;/strong&gt; architecture&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can install the latest version of &lt;code&gt;platypus&lt;/code&gt; with the &lt;code&gt;remotes&lt;/code&gt; package:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;remotes::install_github(&amp;quot;maju116/platypus&amp;quot;)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that in order to install &lt;code&gt;platypus&lt;/code&gt; you need the &lt;code&gt;keras&lt;/code&gt; and &lt;code&gt;tensorflow&lt;/code&gt; packages and &lt;code&gt;Tensorflow&lt;/code&gt; version &lt;code&gt;&amp;gt;= 2.0.0&lt;/code&gt; (&lt;code&gt;Tensorflow 1.x&lt;/code&gt; will not be supported!).&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;quick-example-yolov3-bounding-box-prediction-with-pre-trained-coco-weights&#34; class=&#34;section level1&#34;&gt;
&lt;h1&gt;Quick example: YOLOv3 bounding box prediction with pre-trained COCO weights&lt;/h1&gt;
&lt;p&gt;To create &lt;code&gt;YOLOv3&lt;/code&gt; architecture use:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(tidyverse)
library(platypus)
library(abind)

test_yolo &amp;lt;- yolo3(
  net_h = 416, # Input image height. Must be divisible by 32
  net_w = 416, # Input image width. Must be divisible by 32
  grayscale = FALSE, # Should images be loaded as grayscale or RGB
  n_class = 80, # Number of object classes (80 for COCO dataset)
  anchors = coco_anchors # Anchor boxes
)

test_yolo
#&amp;gt; Model
#&amp;gt; Model: &amp;quot;yolo3&amp;quot;
#&amp;gt; ________________________________________________________________________________
#&amp;gt; Layer (type)              Output Shape      Param #  Connected to               
#&amp;gt; ================================================================================
#&amp;gt; input_img (InputLayer)    [(None, 416, 416, 0                                   
#&amp;gt; ________________________________________________________________________________
#&amp;gt; darknet53 (Model)         multiple          40620640 input_img[0][0]            
#&amp;gt; ________________________________________________________________________________
#&amp;gt; yolo3_conv1 (Model)       (None, 13, 13, 51 11024384 darknet53[1][2]            
#&amp;gt; ________________________________________________________________________________
#&amp;gt; yolo3_conv2 (Model)       (None, 26, 26, 25 2957312  yolo3_conv1[1][0]          
#&amp;gt;                                                      darknet53[1][1]            
#&amp;gt; ________________________________________________________________________________
#&amp;gt; yolo3_conv3 (Model)       (None, 52, 52, 12 741376   yolo3_conv2[1][0]          
#&amp;gt;                                                      darknet53[1][0]            
#&amp;gt; ________________________________________________________________________________
#&amp;gt; grid1 (Model)             (None, 13, 13, 3, 4984063  yolo3_conv1[1][0]          
#&amp;gt; ________________________________________________________________________________
#&amp;gt; grid2 (Model)             (None, 26, 26, 3, 1312511  yolo3_conv2[1][0]          
#&amp;gt; ________________________________________________________________________________
#&amp;gt; grid3 (Model)             (None, 52, 52, 3, 361471   yolo3_conv3[1][0]          
#&amp;gt; ================================================================================
#&amp;gt; Total params: 62,001,757
#&amp;gt; Trainable params: 61,949,149
#&amp;gt; Non-trainable params: 52,608
#&amp;gt; ________________________________________________________________________________&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can now load &lt;a href=&#34;https://pjreddie.com/darknet/yolo/&#34;&gt;YOLOv3 Darknet&lt;/a&gt; weights trained on &lt;a href=&#34;https://cocodataset.org/#home&#34;&gt;COCO dataset&lt;/a&gt;. Download pre-trained weights from &lt;a href=&#34;https://pjreddie.com/media/files/yolov3.weights&#34;&gt;here&lt;/a&gt; and run:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;test_yolo %&amp;gt;% load_darknet_weights(&amp;quot;yolov3.weights&amp;quot;)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Calculate predictions for new images:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;test_img_paths &amp;lt;- list.files(system.file(&amp;quot;extdata&amp;quot;, &amp;quot;images&amp;quot;, package = &amp;quot;platypus&amp;quot;), full.names = TRUE, pattern = &amp;quot;coco&amp;quot;)
test_imgs &amp;lt;- test_img_paths %&amp;gt;%
  map(~ {
    image_load(., target_size = c(416, 416), grayscale = FALSE) %&amp;gt;%
      image_to_array() %&amp;gt;%
      `/`(255)
  }) %&amp;gt;%
  abind(along = 4) %&amp;gt;%
  aperm(c(4, 1:3))
test_preds &amp;lt;- test_yolo %&amp;gt;% predict(test_imgs)

str(test_preds)
#&amp;gt; List of 3
#&amp;gt;  $ : num [1:2, 1:13, 1:13, 1:3, 1:85] 0.294 0.478 0.371 1.459 0.421 ...
#&amp;gt;  $ : num [1:2, 1:26, 1:26, 1:3, 1:85] -0.214 1.093 -0.092 2.034 -0.286 ...
#&amp;gt;  $ : num [1:2, 1:52, 1:52, 1:3, 1:85] 0.242 -0.751 0.638 -2.419 -0.282 ...&lt;/code&gt;&lt;/pre&gt;
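The shapes of these three outputs follow directly from the YOLOv3 design: the 416 x 416 input is downsampled by factors of 32, 16 and 8 into prediction grids, each grid cell holds 3 anchor boxes, and each box carries 4 coordinates, 1 objectness score and 80 class scores (plain arithmetic, shown for clarity):

```r
416 / c(32, 16, 8)  # the three grid sizes
# [1] 13 26 52
4 + 1 + 80          # box coordinates + objectness + class scores
# [1] 85
```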
&lt;p&gt;Transform raw predictions into bounding boxes:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;test_boxes &amp;lt;- get_boxes(
  preds = test_preds, # Raw predictions from YOLOv3 model
  anchors = coco_anchors, # Anchor boxes
  labels = coco_labels, # Class labels
  obj_threshold = 0.6, # Object threshold
  nms = TRUE, # Should non-max suppression be applied
  nms_threshold = 0.6, # Non-max suppression threshold
  correct_hw = FALSE # Should height and width of bounding boxes be corrected to image height and width
)

test_boxes
#&amp;gt; [[1]]
#&amp;gt; # A tibble: 8 x 7
#&amp;gt;    xmin  ymin  xmax  ymax p_obj label_id label 
#&amp;gt;   &amp;lt;dbl&amp;gt; &amp;lt;dbl&amp;gt; &amp;lt;dbl&amp;gt; &amp;lt;dbl&amp;gt; &amp;lt;dbl&amp;gt;    &amp;lt;int&amp;gt; &amp;lt;chr&amp;gt; 
#&amp;gt; 1 0.207 0.718 0.236 0.865 0.951        1 person
#&amp;gt; 2 0.812 0.758 0.846 0.868 0.959        1 person
#&amp;gt; 3 0.349 0.702 0.492 0.884 1.00         3 car   
#&amp;gt; 4 0.484 0.543 0.498 0.558 0.837        3 car   
#&amp;gt; 5 0.502 0.543 0.515 0.556 0.821        3 car   
#&amp;gt; 6 0.439 0.604 0.469 0.643 0.842        3 car   
#&amp;gt; 7 0.541 0.554 0.667 0.809 0.999        6 bus   
#&amp;gt; 8 0.534 0.570 0.675 0.819 0.954        7 train 
#&amp;gt; 
#&amp;gt; [[2]]
#&amp;gt; # A tibble: 3 x 7
#&amp;gt;     xmin   ymin  xmax  ymax p_obj label_id label
#&amp;gt;    &amp;lt;dbl&amp;gt;  &amp;lt;dbl&amp;gt; &amp;lt;dbl&amp;gt; &amp;lt;dbl&amp;gt; &amp;lt;dbl&amp;gt;    &amp;lt;int&amp;gt; &amp;lt;chr&amp;gt;
#&amp;gt; 1 0.0236 0.0705 0.454 0.909 1.00        23 zebra
#&amp;gt; 2 0.290  0.206  0.729 0.901 0.997       23 zebra
#&amp;gt; 3 0.486  0.407  0.848 0.928 1.00        23 zebra&lt;/code&gt;&lt;/pre&gt;
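The coordinates above are relative (0-1) values; `correct_hw` rescales them to pixels. A toy conversion of the first detected person box, assuming a 416 x 416 image (illustrative arithmetic, not the platypus internals):

```r
box <- c(xmin = 0.207, ymin = 0.718, xmax = 0.236, ymax = 0.865)
img_w <- 416
img_h <- 416
# Scale x coordinates by image width, y coordinates by image height
round(box * c(img_w, img_h, img_w, img_h))
# xmin ymin xmax ymax
#   86  299   98  360
```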
&lt;p&gt;Plot / save images:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;plot_boxes(
  images_paths = test_img_paths, # Images paths
  boxes = test_boxes, # Bounding boxes
  correct_hw = TRUE, # Should height and width of bounding boxes be corrected to image height and width
  labels = coco_labels # Class labels
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-10-04-platypus_files/figure-html/unnamed-chunk-7-1.png&#34; width=&#34;672&#34; /&gt;&lt;img src=&#34;https://datascienceguts.com/post/2020-10-04-platypus_files/figure-html/unnamed-chunk-7-2.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;p&gt;For more details and examples on custom datasets visit &lt;a href=&#34;https://github.com/maju116/platypus&#34;&gt;platypus&lt;/a&gt; page and check out the &lt;a href=&#34;https://datascienceguts.com/&#34;&gt;latest posts&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
</description>
      
            <category>computer vision</category>
      
            <category>yolo</category>
      
            <category>u-net</category>
      
            <category>object detection</category>
      
            <category>semantic segmentation</category>
      
            <category>deep learning</category>
      
      
            <category>R</category>
      
    </item>
    
  </channel>
</rss>