{"id":5708,"date":"2017-11-15T11:25:20","date_gmt":"2017-11-15T19:25:20","guid":{"rendered":"https:\/\/learnopencv.com\/?p=5708"},"modified":"2023-10-03T01:27:58","modified_gmt":"2023-10-03T08:27:58","slug":"understanding-autoencoders-using-tensorflow-python","status":"publish","type":"post","link":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/","title":{"rendered":"Autoencoders Explored: Understanding and Implementing Denoising Autoencoders with Tensorflow (Python)"},"content":{"rendered":"\n<p>In this article, we will learn about autoencoders in deep learning. We will show a practical implementation of using a Denoising Autoencoder on the <a href=\"https:\/\/en.wikipedia.org\/wiki\/MNIST_database\" target=\"_blank\" rel=\"noreferrer noopener\">MNIST<\/a> handwritten digits dataset as an example. In addition, we are sharing an implementation of the idea in Tensorflow.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. What is An Autoencoder?<\/h2>\n\n\n\n<p>An autoencoder is an unsupervised machine-learning algorithm that takes an image as input and reconstructs it using fewer bits. That may sound like image compression, but the biggest difference between an autoencoder and a general purpose image compression algorithm is that in the case of autoencoders, the compression is achieved by learning on a training set of data. While reasonable compression is achieved when an image is similar to the training set used, autoencoders are poor general-purpose image compressors; JPEG compression will do vastly better.<\/p>\n\n\n\n<p>Autoencoders are similar in spirit to dimensionality reduction techniques like <a href=\"https:\/\/en.wikipedia.org\/wiki\/Principal_component_analysis\">principal component analysis<\/a>. 
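To make the analogy concrete, here is a minimal NumPy sketch (with made-up data; the array names are ours, not from this post) of the linear encode/decode at the heart of PCA: project the data onto its top k principal directions (the "encoding") and map it back (the "reconstruction"). An autoencoder learns a nonlinear version of this same pipeline from training data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 samples with 5 features (made-up data)
X = X - X.mean(axis=0)          # center the data, as PCA requires

# principal directions from the SVD of the data matrix
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 3                           # keep 3 of the 5 dimensions

Z = X @ Vt[:k].T                # "encoder": 5 values -> 3 values
X_hat = Z @ Vt[:k]              # "decoder": 3 values -> 5 values

print(Z.shape, X_hat.shape)     # (100, 3) (100, 5)
# reconstruction error = variance lost in the 2 discarded directions
print(np.mean((X - X_hat) ** 2))
```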
They create a space where the essential parts of the data are preserved while non-essential ( or noisy ) parts are removed.<\/p>\n\n\n\n<p>There are two parts to an autoencoder:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Encoder<\/strong>: This is the part of the network that compresses the input into fewer bits. The space represented by these bits is called the &#8220;latent-space&#8221;, and the point of maximum compression is called the bottleneck. These compressed bits that represent the original input are together called an &#8220;encoding&#8221; of the input.<\/li>\n\n\n\n<li><strong>Decoder<\/strong>: This is the part of the network that reconstructs the input image using the encoding of the image.<\/li>\n<\/ol>\n\n\n\n<p>Let&#8217;s look at an example to understand the concept better.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter\"><a href=\"\/wp-content\/uploads\/2017\/11\/AutoEncoder.png\"><img decoding=\"async\" width=\"299\" height=\"168\" src=\"\/wp-content\/uploads\/2017\/11\/AutoEncoder.png\" alt=\"Autoencoder neural network architecture.\" class=\"wp-image-5779\"\/><\/a><figcaption class=\"wp-element-caption\">Figure 1: 2-layer autoencoder<\/figcaption><\/figure>\n\n\n\n<p>In the above picture, we show a vanilla autoencoder &#8212; a 2-layer autoencoder with one hidden layer. The input and output layers have the same number of neurons. We feed five real values into the autoencoder, which the encoder compresses into three real values at the bottleneck (middle layer). Using these three values, the decoder then tries to reconstruct the five values we fed into the network.<\/p>\n\n\n\n<p>In practice, there are usually many more hidden layers between the input and the output.<\/p>\n\n\n\n<p>There are various kinds of autoencoders, such as the sparse autoencoder, <a href=\"https:\/\/learnopencv.com\/variational-autoencoder-in-tensorflow\/\" target=\"_blank\" rel=\"noopener\" title=\"\">variational autoencoder<\/a>, and denoising autoencoder. In this post, we will learn about the denoising autoencoder.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">2. 
Denoising Autoencoders<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full wp-image-6009 aligncenter\"><img decoding=\"async\" width=\"678\" height=\"247\" src=\"\/wp-content\/uploads\/2017\/11\/denoising-example.png\" alt=\"Denoising autoencoder\" class=\"wp-image-6009\"><figcaption>Figure 2: Denoising autoencoder<\/figcaption><\/figure>\n\n\n\n<p>The idea behind a denoising autoencoder is to learn a representation (latent space) that is robust to noise. We add noise to an image and then feed this noisy image as an input to our network. The encoder part of the autoencoder transforms the image into a different space that preserves the handwritten digits but removes the noise. As we will see later, the original image is a 28 x 28 x 1 image, and the transformed image is 7 x 7 x 32. You can think of the 7 x 7 x 32 image as a 7 x 7 image with 32 color channels.<\/p>\n\n\n\n<p>The decoder part of the network then reconstructs the original image from this 7 x 7 x 32 image, and voil\u00e0, the noise is gone!<\/p>\n\n\n\n<p>How does this magic happen?<\/p>\n\n\n\n<p>During training, we define a loss (cost function) to minimize the difference between the reconstructed image and the original noise-free image. In other words, we learn a 7 x 7 x 32 space that is noise-free.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">3. Implementation of Denoising Autoencoder<\/h2>\n\n\n\n<p>This implementation is inspired by the excellent post <a href=\"https:\/\/blog.keras.io\/building-autoencoders-in-keras.html\" target=\"_blank\" rel=\"noreferrer noopener\">Building Autoencoders in Keras<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3.1 The Network<\/h3>\n\n\n\n<p>The images are matrices of size 28 x 28. We reshape each image to be of size 28 x 28 x 1, convert the reshaped image matrix to an array, rescale it between 0 and 1, and feed this as an input to the network. The encoder transforms the 28 x 28 x 1 image to a 7 x 7 x 32 image. 
You can think of this 7 x 7 x 32 image as a point in a 1568-dimensional space ( because 7 x 7 x 32 = 1568 ). This 1568-dimensional space is called the bottleneck or the latent space. The architecture is graphically shown below.<\/p>\n\n\n\n<div class=\"wp-block-image size-full wp-image-6121\"><figure><img decoding=\"async\" width=\"192\" height=\"434\" src=\"\/wp-content\/uploads\/2017\/11\/encoder-block-noise-2.png\" alt=\"Encoder of the autoencoder network\" class=\"wp-image-6121 aligncenter\"><figcaption class=\"aligncenter\">Figure 3: Architecture of encoder model<\/figcaption><\/figure><\/div>\n\n\n\n<p>The decoder does the exact opposite of the encoder; it transforms this 1568-dimensional vector back to a 28 x 28 x 1 image. We call this output image a &#8220;reconstruction&#8221; of the original image. The structure of the decoder is shown below.<\/p>\n\n\n\n<div class=\"wp-block-image size-full wp-image-6122\"><figure><img decoding=\"async\" width=\"184\" height=\"453\" src=\"\/wp-content\/uploads\/2017\/11\/decoder-noise-diagram-3.png\" alt=\"Decoder of the autoencoder model\" class=\"wp-image-6122 aligncenter\"><figcaption class=\"aligncenter\">Figure 4: Architecture of decoder model<\/figcaption><\/figure><\/div>\n\n\n\n<p>Let&#8217;s dive into the implementation of an autoencoder using TensorFlow.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3.2 Encoder<\/h3>\n\n\n\n<p>The encoder has two convolutional layers and two max-pooling layers. Convolution layer-1 and Convolution layer-2 each have 32 filters of size 3 x 3. The two max-pooling layers each use a 2 x 2 window.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nencoder = Sequential(&#x5B;\n    # convolution\n    Conv2D(\n        filters=32,\n        kernel_size=(3,3),\n        strides=(1,1),\n        padding='SAME',\n        use_bias=True,\n        activation=lrelu,\n        name='conv1'\n    ),\n    # the input size is 28x28x32\n    MaxPooling2D(\n        pool_size=(2,2),\n        strides=(2,2),\n        name='pool1'\n    ),\n    # the input size is 14x14x32\n    Conv2D(\n        filters=32,\n        kernel_size=(3,3),\n        strides=(1,1),\n        padding='SAME',\n        use_bias=True,\n        activation=lrelu,\n        name='conv2'\n    ),\n    # the input size is 14x14x32\n    MaxPooling2D(\n        pool_size=(2,2),\n        strides=(2,2),\n        name='encoding'\n    )\n    # the output size is 7x7x32\n])\n<\/pre><\/div>\n\n\n<div class=\"wp-block-image size-full wp-image-6003\"><figure><img decoding=\"async\" width=\"835\" height=\"112\" src=\"\/wp-content\/uploads\/2017\/11\/encoder-diagram.png\" alt=\"Encoder block diagram\" class=\"wp-image-6003 aligncenter\"><figcaption class=\"aligncenter\">Figure 5: Encoder block diagram<\/figcaption><\/figure><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">3.3 Decoder<\/h3>\n\n\n\n<p>The decoder has two Conv2DTranspose layers, two Convolution layers, and one Sigmoid activation function. Conv2DTranspose is used for upsampling, which is the opposite of the role of a convolution layer. 
The Conv2DTranspose layer upsamples the compressed image by a factor of two each time we use it.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\ndecoder = Sequential(&#x5B;\n    Conv2D(\n        filters=32,\n        kernel_size=(3,3),\n        strides=(1,1),\n        name='conv3',\n        padding='SAME',\n        use_bias=True,\n        activation=lrelu\n    ),\n    # upsampling, the input size is 7x7x32\n    Conv2DTranspose(\n        filters=32,\n        kernel_size=3,\n        padding='same',\n        strides=2,\n        name='upsample1'\n    ),\n    # upsampling, the input size is 14x14x32\n    Conv2DTranspose(\n        filters=32,\n        kernel_size=3,\n        padding='same',\n        strides=2,\n        name='upsample2'\n    ),\n    # the input size is 28x28x32\n    Conv2D(\n        filters=1,\n        kernel_size=(3,3),\n        strides=(1,1),\n        name='logits',\n        padding='SAME',\n        use_bias=True\n    )\n])\n<\/pre><\/div>\n\n\n<figure class=\"wp-block-image aligncenter size-full wp-image-6047\"><img decoding=\"async\" width=\"709\" height=\"86\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2017\/11\/decoder-block.png\" alt=\"Decoder block diagram of the denoising autoencoder.\" class=\"wp-image-6047\"\/><figcaption class=\"wp-element-caption\">Figure 6: Decoder Block Diagram<\/figcaption><\/figure>\n\n\n\n<p>The resultant encoder-decoder model class is represented as:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# model class definition\nclass EncoderDecoderModel(Model):\n    def __init__(self, is_sigmoid=False):\n        super(EncoderDecoderModel, self).__init__()\n        # assign encoder sequence\n        self._encoder = encoder\n        # assign decoder sequence\n        self._decoder = decoder\n        self._is_sigmoid = is_sigmoid\n\n    # forward pass\n    def call(self, 
x):\n        x = self._encoder(x)\n        decoded = self._decoder(x)\n        if self._is_sigmoid:\n            decoded = tf.keras.activations.sigmoid(decoded)\n        return decoded\n<\/pre><\/div>\n\n\n<p>Finally, we calculate the loss of the output using the&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Cross_entropy\" target=\"_blank\" rel=\"noreferrer noopener\">cross-entropy<\/a> loss function and use the&nbsp;<a href=\"https:\/\/machinelearningmastery.com\/adam-optimization-algorithm-for-deep-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\">Adam optimizer<\/a> to minimize it.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3.4 Why do we use a leaky ReLU and not a ReLU as an activation function?<\/h3>\n\n\n\n<p>We want gradients to flow while we backpropagate through the network. In a deep stack of layers, some neurons take on values that drop to zero or become negative. A ReLU activation clips the negative values to zero, so in the backward pass the gradients do not flow through those neurons. Because of this, their weights do not get updated, and the network stops learning for those inputs. So using ReLU is not always a good idea. However, we encourage you to change the activation function to ReLU and see the difference.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# define leaky ReLU function\ndef lrelu(x, alpha=0.1):\n    return tf.math.maximum(alpha*x, x)\n<\/pre><\/div>\n\n\n<p>Therefore, we use a leaky ReLU, which instead of clipping the negative values to zero scales them by a hyperparameter alpha. 
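The same function can be checked with plain NumPy (a small sketch, independent of the TensorFlow version above): negative inputs are scaled by alpha instead of being zeroed out, so a nonzero gradient still flows through them.

```python
import numpy as np

def lrelu(x, alpha=0.1):
    # leaky ReLU: identity for x >= 0, alpha * x for x < 0
    return np.maximum(alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(lrelu(x))            # negatives scaled by alpha: -2.0 -> -0.2, -0.5 -> -0.05
print(np.maximum(x, 0.0))  # a plain ReLU would zero them out instead
```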
This ensures that the network keeps learning even when the activations are negative.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3.5 Load the data<\/h3>\n\n\n\n<p>Once the architecture has been defined, we load the training and validation data.<\/p>\n\n\n\n<p>As shown below, TensorFlow allows us to easily load the MNIST data. The training and testing data are stored in the variables <code>train_imgs<\/code> and <code>test_imgs<\/code> respectively. Since it&#8217;s an unsupervised task, we do not care about the labels.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# load mnist dataset\n(train_imgs, train_labels), (test_imgs, test_labels) = tf.keras.datasets.mnist.load_data()\n\n# rescale image pixel values to the range 0 to 1\ntrain_imgs, test_imgs = train_imgs \/ 255.0, test_imgs \/ 255.0\n<\/pre><\/div>\n\n\n<h3 class=\"wp-block-heading\">3.6 Data Analysis<\/h3>\n\n\n\n<p>Before training a <a href=\"https:\/\/learnopencv.com\/neural-networks-a-30000-feet-view-for-beginners\/\" target=\"_blank\" rel=\"noopener\" title=\"\">neural network<\/a>, it is always a good idea to do a sanity check on the data.<\/p>\n\n\n\n<p>Let\u2019s see what the data looks like. The data consists of handwritten digits ranging from 0 to 9, along with their ground truth labels. It has 60,000 train samples and 10,000 test samples. Each sample is a 28\u00d728 grayscale image. 
Let&#8217;s view the data details:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# check data array shapes:\nprint(\"Size of train images: {}, Number of train images: {}\".format(train_imgs.shape&#x5B;-2:], train_imgs.shape&#x5B;0]))\nprint(\"Size of test images: {}, Number of test images: {}\".format(test_imgs.shape&#x5B;-2:], test_imgs.shape&#x5B;0]))\n<\/pre><\/div>\n\n\n<p>The <strong>output<\/strong> is:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; gutter: false; title: ; notranslate\" title=\"\">\nSize of train images: (28, 28), Number of train images: 60000\nSize of test images: (28, 28), Number of test images: 10000\n<\/pre><\/div>\n\n\n<p>The visualization of train and test image examples:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# plot image example from training images\nplt.imshow(train_imgs&#x5B;1], cmap='Greys')\nplt.show()\n\n# plot image example from test images\nplt.imshow(test_imgs&#x5B;0], cmap='Greys')\nplt.show()\nplt.close()\n<\/pre><\/div>\n\n\n<p><strong>Output:<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure><img decoding=\"async\" width=\"249\" height=\"505\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2020\/06\/mnist_original.png\" alt=\"Train and test images from the MNIST dataset\" class=\"wp-image-17739 aligncenter\"><figcaption class=\"aligncenter\">Figure 7: Train and test MNIST images<\/figcaption><\/figure><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">3.7 Preprocessing the data<\/h3>\n\n\n\n<p>The images are grayscale and the pixel values range from 0 to 255. 
We apply the following preprocessing to the data before feeding it to the network.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Add a new dimension to the train and test images, which will be fed into the network.<\/li>\n<\/ol>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# prepare training reference images: add new dimension\ntrain_imgs_data = train_imgs&#x5B;..., tf.newaxis]\n\n# prepare test reference images: add new dimension\ntest_imgs_data = test_imgs&#x5B;..., tf.newaxis]\n<\/pre><\/div>\n\n\n<ol start=\"2\" class=\"wp-block-list\">\n<li>Add noise to both train and test images, which we then feed into the network. The noise factor is a hyperparameter and can be tuned accordingly.<\/li>\n<\/ol>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# add noise to the images for train and test cases\ndef distort_image(input_imgs, noise_factor=0.5):\n    noisy_imgs = input_imgs + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=input_imgs.shape)\n    noisy_imgs = np.clip(noisy_imgs, 0., 1.)\n    return noisy_imgs\n\n# prepare distorted input data for training\ntrain_noisy_imgs = distort_image(train_imgs_data)\n\n# prepare distorted input data for evaluation\ntest_noisy_imgs = distort_image(test_imgs_data)\n<\/pre><\/div>\n\n\n<p>Let&#8217;s illustrate the noisy images:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# plot distorted image example from training images\nimage_id_to_plot = 0\nplt.imshow(tf.squeeze(train_noisy_imgs&#x5B;image_id_to_plot]), cmap='Greys')\nplt.title(\"The number is: {}\".format(train_labels&#x5B;image_id_to_plot]))\nplt.show()\n\n# plot distorted image example from test images\nplt.imshow(tf.squeeze(test_noisy_imgs&#x5B;image_id_to_plot]), cmap='Greys')\nplt.title(\"The number is: 
{}\".format(test_labels&#x5B;image_id_to_plot]))\nplt.show()\nplt.close()\n<\/pre><\/div>\n\n\n<p><strong>Output:<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure><img decoding=\"async\" width=\"246\" height=\"534\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2020\/06\/noisy_mnist.png\" alt=\"Noisy train and test images from the MNIST dataset\" class=\"wp-image-17735 aligncenter\"><figcaption class=\"aligncenter\">Figure 8: Noisy train and test MNIST images<\/figcaption><\/figure><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">3.8 Train and evaluate the model<\/h3>\n\n\n\n<p>The network is ready to be trained. We specify the number of epochs as 25 with a batch size of 64, which means the whole dataset will be fed to the network 25 times. We will be using the test data for validation.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# define custom target function for further minimization\ndef cost_function(labels=None, logits=None, name=None):\n    loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits, name=name)\n    return tf.reduce_mean(loss)\n\n# init the model\nencoder_decoder_model = EncoderDecoderModel()\n\n# training loop params\nnum_epochs = 25\nbatch_size_to_set = 64\n\n# training process params\nlearning_rate = 1e-5\n# default number of workers for training process\nnum_workers = 2\n\n# initialize the training configurations such as optimizer and loss function\nencoder_decoder_model.compile(\n    optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),\n    loss=cost_function,\n    metrics=None\n)\n\nresults = encoder_decoder_model.fit(\n    train_noisy_imgs,\n    train_imgs_data,\n    epochs=num_epochs,\n    batch_size=batch_size_to_set,\n    validation_data=(test_noisy_imgs, test_imgs_data),\n    workers=num_workers,\n    shuffle=True\n)\n<\/pre><\/div>\n\n\n<p>After 25 epochs, we can see our training loss and validation 
loss is quite low, which means our network did a pretty good job. Let&#8217;s now plot the training and validation losses using the utility function <code>plot_losses(results)<\/code>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3.9 Training Vs. Validation Loss Plot<\/h3>\n\n\n\n<p>We&#8217;ve defined the utility function for plotting the losses:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# function for visualizing train and val losses\ndef plot_losses(results):\n    plt.plot(results.history&#x5B;'loss'], 'bo', label='Training loss')\n    plt.plot(results.history&#x5B;'val_loss'], 'r', label='Validation loss')\n    plt.title('Training and validation loss',fontsize=14)\n    plt.xlabel('Epochs ',fontsize=14)\n    plt.ylabel('Loss',fontsize=14)\n    plt.legend()\n    plt.show()\n    plt.close()\n\n# visualize train and val losses\nplot_losses(results)\n<\/pre><\/div>\n\n\n<p>The result is:<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure><img decoding=\"async\" width=\"385\" height=\"280\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2020\/06\/train_val_loss.png\" alt=\"Training and validation loss plot after training the denoising autoencoder\" class=\"wp-image-17741 aligncenter\"><figcaption class=\"aligncenter\">Figure 9: Training and validation losses<\/figcaption><\/figure><\/div>\n\n\n\n<p>From the above loss plot, we can observe that the validation loss and training loss steadily decrease in the first ten epochs, and the two stay very close to each other. This means that our model has generalized well to unseen test data.<\/p>\n\n\n\n<p>We can further validate our results by comparing the original, noisy, and reconstructed test images.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3.10 Results<\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure><img decoding=\"async\" width=\"814\" height=\"353\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2020\/06\/mnist_imgs.png\" alt=\"Original, noisy, and the autoencoder denoised images\" class=\"wp-image-17742 aligncenter\"><figcaption class=\"aligncenter\">Figure 10: Representation of MNIST images at different stages<\/figcaption><\/figure><\/div>\n\n\n\n<p>From the above figures, we can observe that our model did a good job of denoising the noisy images we fed into it.<\/p>\n\n\n","protected":false},"excerpt":{"rendered":"<p>In this article, we will learn about autoencoders in deep learning. We will show a practical implementation of using a Denoising Autoencoder on the MNIST handwritten digits dataset as an example. In addition, we are sharing an implementation of the idea in Tensorflow. 1. What is An Autoencoder? 
An autoencoder is an unsupervised machine-learning algorithm [&hellip;]<\/p>\n","protected":false},"author":13,"featured_media":6108,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[56,454,455,104],"tags":[245,2341,1941,2340,101,246,1942,220,211,2339,866],"coauthors":[445,528],"class_list":["post-5708","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-deep-learning","category-keras","category-tensorflow","category-tutorial","tag-autoencoder","tag-autoencoder-neural-network","tag-autoencoders-in-deep-learning","tag-autoencoders-python","tag-convolutional-neural-network","tag-denoising","tag-denoising-autoencoders-and-where-to-find-them","tag-mnist","tag-tensorflow","tag-unsupervised-learning","tag-variational-autoencoder"],"acf":[],"aioseo_notices":[],"featured_image_src":"https:\/\/learnopencv.com\/wp-content\/uploads\/2017\/11\/denoising-autoencoder.jpg","featured_image_src_square":"https:\/\/learnopencv.com\/wp-content\/uploads\/2017\/11\/denoising-autoencoder.jpg","author_info":{"display_name":"Aditya Sharma","author_link":"https:\/\/learnopencv.com\/author\/adityasharma\/"},"aioseo_head":"\n\t\t<!-- All in One SEO Pro 4.9.5.2 - aioseo.com -->\n\t<meta name=\"description\" content=\"Get to know autoencoders in Deep Learning! 
In this post, we make it easy to understand how to set up denoising autoencoders with Tensorflow-Python\" \/>\n\t<meta name=\"robots\" content=\"max-image-preview:large\" \/>\n\t<meta name=\"author\" content=\"Aditya Sharma\"\/>\n\t<meta name=\"google-site-verification\" content=\"k5TzNXM2eW8MZp1slBPbu88XEZ7mPURnk897kaccqiI\" \/>\n\t<meta name=\"keywords\" content=\"autoencoder,autoencoder neural network,autoencoders in deep learning,autoencoders python,convolutional neural network,denoising,denoising autoencoders and where to find them,mnist,tensorflow,unsupervised learning,variational autoencoder\" \/>\n\t<link rel=\"canonical\" href=\"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/\" \/>\n\t<meta name=\"generator\" content=\"All in One SEO Pro (AIOSEO) 4.9.5.2\" \/>\n\t\t<meta property=\"og:locale\" content=\"en_US\" \/>\n\t\t<meta property=\"og:site_name\" content=\"LearnOpenCV \u2013 Learn OpenCV, PyTorch, Keras, Tensorflow with code, &amp; tutorials\" \/>\n\t\t<meta property=\"og:type\" content=\"article\" \/>\n\t\t<meta property=\"og:title\" content=\"Understanding Autoencoders With Tensorflow:Denoising Autoencoders\" \/>\n\t\t<meta property=\"og:description\" content=\"Get to know autoencoders in Deep Learning! 
In this post, we make it easy to understand how to set up denoising autoencoders with Tensorflow-Python\" \/>\n\t\t<meta property=\"og:url\" content=\"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/\" \/>\n\t\t<meta property=\"og:image\" content=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2017\/11\/denoising-autoencoder.jpg\" \/>\n\t\t<meta property=\"og:image:secure_url\" content=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2017\/11\/denoising-autoencoder.jpg\" \/>\n\t\t<meta property=\"og:image:width\" content=\"600\" \/>\n\t\t<meta property=\"og:image:height\" content=\"299\" \/>\n\t\t<meta property=\"article:published_time\" content=\"2017-11-15T19:25:20+00:00\" \/>\n\t\t<meta property=\"article:modified_time\" content=\"2023-10-03T08:27:58+00:00\" \/>\n\t\t<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/Learnopencv-277284889389059\" \/>\n\t\t<meta name=\"twitter:card\" content=\"summary\" \/>\n\t\t<meta name=\"twitter:site\" content=\"@AiOpencv\" \/>\n\t\t<meta name=\"twitter:title\" content=\"Understanding Autoencoders With Tensorflow:Denoising Autoencoders\" \/>\n\t\t<meta name=\"twitter:description\" content=\"Get to know autoencoders in Deep Learning! 
In this post, we make it easy to understand how to set up denoising autoencoders with Tensorflow-Python\" \/>\n\t\t<meta name=\"twitter:creator\" content=\"@AiOpencv\" \/>\n\t\t<meta name=\"twitter:image\" content=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2017\/11\/denoising-autoencoder.jpg\" \/>\n\t\t<script type=\"application\/ld+json\" class=\"aioseo-schema\">\n\t\t\t{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/#article\",\"name\":\"Understanding Autoencoders With Tensorflow:Denoising Autoencoders\",\"headline\":\"Autoencoders Explored: Understanding and Implementing Denoising Autoencoders with Tensorflow (Python)\",\"author\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/author\\\/adityasharma\\\/#author\"},\"publisher\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/#person\"},\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/wp-content\\\/uploads\\\/2017\\\/11\\\/denoising-autoencoder.jpg\",\"width\":600,\"height\":299,\"caption\":\"Denoising Autoencoder Deep Learning\"},\"datePublished\":\"2017-11-15T11:25:20-08:00\",\"dateModified\":\"2023-10-03T01:27:58-07:00\",\"inLanguage\":\"en-US\",\"commentCount\":15,\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/#webpage\"},\"isPartOf\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/#webpage\"},\"articleSection\":\"Deep Learning, Keras, Tensorflow, Tutorial, autoencoder, Autoencoder neural network, autoencoders in deep learning, Autoencoders python, convolutional neural network, denoising, denoising autoencoders and where to find them, MNIST, tensorflow, unsupervised learning, variational autoencoder, adityasharma, 
anastasia.murzova\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/#breadcrumblist\",\"itemListElement\":[{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com#listItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/learnopencv.com\",\"nextItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/category\\\/deep-learning\\\/#listItem\",\"name\":\"Deep Learning\"}},{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/category\\\/deep-learning\\\/#listItem\",\"position\":2,\"name\":\"Deep Learning\",\"item\":\"https:\\\/\\\/learnopencv.com\\\/category\\\/deep-learning\\\/\",\"nextItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/#listItem\",\"name\":\"Autoencoders Explored: Understanding and Implementing Denoising Autoencoders with Tensorflow (Python)\"},\"previousItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com#listItem\",\"name\":\"Home\"}},{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/#listItem\",\"position\":3,\"name\":\"Autoencoders Explored: Understanding and Implementing Denoising Autoencoders with Tensorflow (Python)\",\"previousItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/category\\\/deep-learning\\\/#listItem\",\"name\":\"Deep Learning\"}}]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/#person\",\"name\":\"Satya Mallick\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/#personImage\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/wp-content\\\/litespeed\\\/avatar\\\/483395fab515fdb59dbd986fdcd73e10.jpg?ver=1776255091\",\"width\":96,\"height\":96,\"caption\":\"Satya 
Mallick\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/author\\\/adityasharma\\\/#author\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/author\\\/adityasharma\\\/\",\"name\":\"Aditya Sharma\",\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/wp-content\\\/litespeed\\\/avatar\\\/7b7add6d601e4f162a9d058da5514772.jpg?ver=1776255103\"}},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/#webpage\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/\",\"name\":\"Understanding Autoencoders With Tensorflow:Denoising Autoencoders\",\"description\":\"Get to know autoencoders in Deep Learning! In this post, we make it easy to understand how to set up denoising autoencoders with Tensorflow-Python\",\"inLanguage\":\"en-US\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/#website\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/#breadcrumblist\"},\"author\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/author\\\/adityasharma\\\/#author\"},\"creator\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/author\\\/adityasharma\\\/#author\"},\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/wp-content\\\/uploads\\\/2017\\\/11\\\/denoising-autoencoder.jpg\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/#mainImage\",\"width\":600,\"height\":299,\"caption\":\"Denoising Autoencoder Deep 
Learning\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/understanding-autoencoders-using-tensorflow-python\\\/#mainImage\"},\"datePublished\":\"2017-11-15T11:25:20-08:00\",\"dateModified\":\"2023-10-03T01:27:58-07:00\"},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/#website\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/\",\"name\":\"LearnOpenCV\",\"description\":\"Learn OpenCV, PyTorch, Keras, Tensorflow with code, & tutorials\",\"inLanguage\":\"en-US\",\"publisher\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/#person\"}}]}\n\t\t<\/script>\n\t\t<!-- All in One SEO Pro -->\r\n\t\t<title>Understanding Autoencoders With Tensorflow:Denoising Autoencoders<\/title>\n\n","aioseo_head_json":{"title":"Understanding Autoencoders With Tensorflow:Denoising Autoencoders","description":"Get to know autoencoders in Deep Learning! In this post, we make it easy to understand how to set up denoising autoencoders with Tensorflow-Python","canonical_url":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/","robots":"max-image-preview:large","keywords":"autoencoder,autoencoder neural network,autoencoders in deep learning,autoencoders python,convolutional neural network,denoising,denoising autoencoders and where to find them,mnist,tensorflow,unsupervised learning,variational autoencoder","webmasterTools":{"google-site-verification":"k5TzNXM2eW8MZp1slBPbu88XEZ7mPURnk897kaccqiI","miscellaneous":""},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/#article","name":"Understanding Autoencoders With Tensorflow:Denoising Autoencoders","headline":"Autoencoders Explored: Understanding and Implementing Denoising Autoencoders with Tensorflow 
(Python)","author":{"@id":"https:\/\/learnopencv.com\/author\/adityasharma\/#author"},"publisher":{"@id":"https:\/\/learnopencv.com\/#person"},"image":{"@type":"ImageObject","url":"https:\/\/learnopencv.com\/wp-content\/uploads\/2017\/11\/denoising-autoencoder.jpg","width":600,"height":299,"caption":"Denoising Autoencoder Deep Learning"},"datePublished":"2017-11-15T11:25:20-08:00","dateModified":"2023-10-03T01:27:58-07:00","inLanguage":"en-US","commentCount":15,"mainEntityOfPage":{"@id":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/#webpage"},"isPartOf":{"@id":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/#webpage"},"articleSection":"Deep Learning, Keras, Tensorflow, Tutorial, autoencoder, Autoencoder neural network, autoencoders in deep learning, Autoencoders python, convolutional neural network, denoising, denoising autoencoders and where to find them, MNIST, tensorflow, unsupervised learning, variational autoencoder, adityasharma, anastasia.murzova"},{"@type":"BreadcrumbList","@id":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/#breadcrumblist","itemListElement":[{"@type":"ListItem","@id":"https:\/\/learnopencv.com#listItem","position":1,"name":"Home","item":"https:\/\/learnopencv.com","nextItem":{"@type":"ListItem","@id":"https:\/\/learnopencv.com\/category\/deep-learning\/#listItem","name":"Deep Learning"}},{"@type":"ListItem","@id":"https:\/\/learnopencv.com\/category\/deep-learning\/#listItem","position":2,"name":"Deep Learning","item":"https:\/\/learnopencv.com\/category\/deep-learning\/","nextItem":{"@type":"ListItem","@id":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/#listItem","name":"Autoencoders Explored: Understanding and Implementing Denoising Autoencoders with Tensorflow 
(Python)"},"previousItem":{"@type":"ListItem","@id":"https:\/\/learnopencv.com#listItem","name":"Home"}},{"@type":"ListItem","@id":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/#listItem","position":3,"name":"Autoencoders Explored: Understanding and Implementing Denoising Autoencoders with Tensorflow (Python)","previousItem":{"@type":"ListItem","@id":"https:\/\/learnopencv.com\/category\/deep-learning\/#listItem","name":"Deep Learning"}}]},{"@type":"Person","@id":"https:\/\/learnopencv.com\/#person","name":"Satya Mallick","image":{"@type":"ImageObject","@id":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/#personImage","url":"https:\/\/learnopencv.com\/wp-content\/litespeed\/avatar\/483395fab515fdb59dbd986fdcd73e10.jpg?ver=1776255091","width":96,"height":96,"caption":"Satya Mallick"}},{"@type":"Person","@id":"https:\/\/learnopencv.com\/author\/adityasharma\/#author","url":"https:\/\/learnopencv.com\/author\/adityasharma\/","name":"Aditya Sharma","image":{"@type":"ImageObject","url":"https:\/\/learnopencv.com\/wp-content\/litespeed\/avatar\/7b7add6d601e4f162a9d058da5514772.jpg?ver=1776255103"}},{"@type":"WebPage","@id":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/#webpage","url":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/","name":"Understanding Autoencoders With Tensorflow:Denoising Autoencoders","description":"Get to know autoencoders in Deep Learning! 
In this post, we make it easy to understand how to set up denoising autoencoders with Tensorflow-Python","inLanguage":"en-US","isPartOf":{"@id":"https:\/\/learnopencv.com\/#website"},"breadcrumb":{"@id":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/#breadcrumblist"},"author":{"@id":"https:\/\/learnopencv.com\/author\/adityasharma\/#author"},"creator":{"@id":"https:\/\/learnopencv.com\/author\/adityasharma\/#author"},"image":{"@type":"ImageObject","url":"https:\/\/learnopencv.com\/wp-content\/uploads\/2017\/11\/denoising-autoencoder.jpg","@id":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/#mainImage","width":600,"height":299,"caption":"Denoising Autoencoder Deep Learning"},"primaryImageOfPage":{"@id":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/#mainImage"},"datePublished":"2017-11-15T11:25:20-08:00","dateModified":"2023-10-03T01:27:58-07:00"},{"@type":"WebSite","@id":"https:\/\/learnopencv.com\/#website","url":"https:\/\/learnopencv.com\/","name":"LearnOpenCV","description":"Learn OpenCV, PyTorch, Keras, Tensorflow with code, & tutorials","inLanguage":"en-US","publisher":{"@id":"https:\/\/learnopencv.com\/#person"}}]},"og:locale":"en_US","og:site_name":"LearnOpenCV \u2013 Learn OpenCV, PyTorch, Keras, Tensorflow with code, &amp; tutorials","og:type":"article","og:title":"Understanding Autoencoders With Tensorflow:Denoising Autoencoders","og:description":"Get to know autoencoders in Deep Learning! 
In this post, we make it easy to understand how to set up denoising autoencoders with Tensorflow-Python","og:url":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/","og:image":"https:\/\/learnopencv.com\/wp-content\/uploads\/2017\/11\/denoising-autoencoder.jpg","og:image:secure_url":"https:\/\/learnopencv.com\/wp-content\/uploads\/2017\/11\/denoising-autoencoder.jpg","og:image:width":600,"og:image:height":299,"article:published_time":"2017-11-15T19:25:20+00:00","article:modified_time":"2023-10-03T08:27:58+00:00","article:publisher":"https:\/\/www.facebook.com\/Learnopencv-277284889389059","twitter:card":"summary","twitter:site":"@AiOpencv","twitter:title":"Understanding Autoencoders With Tensorflow:Denoising Autoencoders","twitter:description":"Get to know autoencoders in Deep Learning! In this post, we make it easy to understand how to set up denoising autoencoders with Tensorflow-Python","twitter:creator":"@AiOpencv","twitter:image":"https:\/\/learnopencv.com\/wp-content\/uploads\/2017\/11\/denoising-autoencoder.jpg"},"aioseo_meta_data":{"post_id":"5708","title":"Understanding Autoencoders With Tensorflow:Denoising Autoencoders","description":"Get to know autoencoders in Deep Learning! 
In this post, we make it easy to understand how to set up denoising autoencoders with Tensorflow-Python","keywords":[],"keyphrases":{"focus":{"keyphrase":"Autoencoders","score":69,"analysis":{"keyphraseInTitle":{"score":9,"maxScore":9,"error":0},"keyphraseInDescription":{"score":9,"maxScore":9,"error":0},"keyphraseLength":{"score":9,"maxScore":9,"error":0,"length":1},"keyphraseInURL":{"score":5,"maxScore":5,"error":0},"keyphraseInIntroduction":{"score":9,"maxScore":9,"error":0},"keyphraseInSubHeadings":{"score":3,"maxScore":9,"error":1},"keyphraseInImageAlt":{"score":3,"maxScore":9,"error":1},"keywordDensity":{"score":0,"type":"low","maxScore":9,"error":1}}},"additional":[]},"primary_term":null,"canonical_url":null,"og_title":null,"og_description":null,"og_object_type":"default","og_image_type":"default","og_image_url":null,"og_image_width":null,"og_image_height":null,"og_image_custom_url":null,"og_image_custom_fields":null,"og_video":"","og_custom_url":null,"og_article_section":null,"og_article_tags":[],"twitter_use_og":false,"twitter_card":"default","twitter_image_type":"default","twitter_image_url":null,"twitter_image_custom_url":null,"twitter_image_custom_fields":null,"twitter_title":null,"twitter_description":null,"schema":{"blockGraphs":[],"customGraphs":[],"default":{"data":{"Article":[],"Course":[],"Dataset":[],"FAQPage":[],"Movie":[],"Person":[],"Product":[],"ProductReview":[],"Car":[],"Recipe":[],"Service":[],"SoftwareApplication":[],"WebPage":[]},"graphName":"Article","isEnabled":true},"graphs":[],"defaultGraph":"Article","defaultPostTypeGraph":""},"schema_type":"default","schema_type_options":"{\"article\":{\"articleType\":\"BlogPosting\"},\"course\":{\"name\":\"\",\"description\":\"\",\"provider\":\"\"},\"faq\":{\"pages\":[]},\"product\":{\"reviews\":[]},\"recipe\":{\"ingredients\":[],\"instructions\":[],\"keywords\":[]},\"software\":{\"reviews\":[],\"operatingSystems\":[]},\"webPage\":{\"webPageType\":\"WebPage\"}}","pillar_content":false,"robots_defau
lt":true,"robots_noindex":false,"robots_noarchive":false,"robots_nosnippet":false,"robots_nofollow":false,"robots_noimageindex":false,"robots_noodp":false,"robots_notranslate":false,"robots_max_snippet":"-1","robots_max_videopreview":"-1","robots_max_imagepreview":"large","priority":null,"frequency":"default","location":null,"local_seo":null,"breadcrumb_settings":null,"limit_modified_date":false,"reviewed_by":null,"open_ai":"{\"title\":{\"suggestions\":[],\"usage\":0},\"description\":{\"suggestions\":[],\"usage\":0}}","ai":null,"created":"2020-12-22 22:08:27","updated":"2025-05-31 03:15:52"},"aioseo_breadcrumb":"<div class=\"aioseo-breadcrumbs\"><span class=\"aioseo-breadcrumb\">\n\t<a href=\"https:\/\/learnopencv.com\" title=\"Home\">Home<\/a>\n<\/span><span class=\"aioseo-breadcrumb-separator\">\u00bb<\/span><span class=\"aioseo-breadcrumb\">\n\t<a href=\"https:\/\/learnopencv.com\/category\/deep-learning\/\" title=\"Deep Learning\">Deep Learning<\/a>\n<\/span><span class=\"aioseo-breadcrumb-separator\">\u00bb<\/span><span class=\"aioseo-breadcrumb\">\n\tAutoencoders Explored: Understanding and Implementing Denoising Autoencoders with Tensorflow (Python)\n<\/span><\/div>","aioseo_breadcrumb_json":[{"label":"Home","link":"https:\/\/learnopencv.com"},{"label":"Deep Learning","link":"https:\/\/learnopencv.com\/category\/deep-learning\/"},{"label":"Autoencoders Explored: Understanding and Implementing Denoising Autoencoders with Tensorflow 
(Python)","link":"https:\/\/learnopencv.com\/understanding-autoencoders-using-tensorflow-python\/"}],"_links":{"self":[{"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/posts\/5708","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/users\/13"}],"replies":[{"embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/comments?post=5708"}],"version-history":[{"count":0,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/posts\/5708\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/media\/6108"}],"wp:attachment":[{"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/media?parent=5708"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/categories?post=5708"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/tags?post=5708"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/coauthors?post=5708"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}