<?xml version='1.0' encoding='UTF-8'?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><feed xmlns='http://www.w3.org/2005/Atom' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:blogger='http://schemas.google.com/blogger/2008' xmlns:georss='http://www.georss.org/georss' xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr='http://purl.org/syndication/thread/1.0'><id>tag:blogger.com,1999:blog-2807468768188634584</id><updated>2026-04-13T12:24:35.999+05:30</updated><category term="Linux"/><category term="Hadoop"/><category term="Solaris"/><category term="Containers"/><category term="LXD"/><category term="Storage"/><category term="Windows"/><title type='text'># HashPrompt</title><subtitle type='html'>Gaining knowledge, is the first step to wisdom. Sharing it, is the first step to humanity.</subtitle><link rel='http://schemas.google.com/g/2005#feed' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/posts/default'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default?redirect=false'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/'/><link rel='hub' href='http://pubsubhubbub.appspot.com/'/><link rel='next' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default?start-index=26&amp;max-results=25&amp;redirect=false'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><generator version='7.00' 
uri='http://www.blogger.com'>Blogger</generator><openSearch:totalResults>83</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-3347058294108464786</id><published>2017-06-18T17:17:00.001+05:30</published><updated>2017-06-18T17:17:18.736+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Installation of R on SuSE Linux</title><summary type="text">
We are going to install the R software package on Linux; for this we will use SLES 11 SP3 and R 3.3.3.
A fresh install of SLES will not have any development packages; hence, it is assumed that the SDK repo has been enabled to resolve the dependencies. Java should be installed as a prerequisite dependency package, and its installation is not covered in this tutorial. However, you can </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/3347058294108464786/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2017/06/installation-of-r-on-suse-linux.html#comment-form' title='6 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/3347058294108464786'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/3347058294108464786'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2017/06/installation-of-r-on-suse-linux.html' title='Installation of R on SuSE Linux'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhP2cxjmQLgcID4QXDssARE4BINMG05X1bd6bQZ-cc63Ju286eYIu0h9TtSfReSIMvCEg1_RmQifXxDZEmhvo_ORJqGz9by7n8W_165-bYf_LSCF3QZIf7oRG9xcQreyxymrNPrezJKRns/s72-c/sles_repo.JPG" height="72" width="72"/><thr:total>6</thr:total><georss:featurename>Pune, Maharashtra, India</georss:featurename><georss:point>18.5204303 73.856743699999925</georss:point><georss:box>18.2795358 73.534020199999929 18.7613248
74.17946719999992</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-1307618886206166956</id><published>2017-06-17T22:33:00.000+05:30</published><updated>2017-06-23T09:27:23.024+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Cloudera Security - Kerberos Installation &amp; Configuration</title><summary type="text">
In my previous post I demonstrated the installation of a multi-node Cloudera cluster. Here I will demonstrate how to kerberize a Cloudera cluster.


Introduction to Kerberos

Kerberos is a network authentication protocol that allows both users and machines to identify themselves on a network, defining and limiting access to services that are configured by the administrator. Kerberos uses </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/1307618886206166956/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2017/06/cloudera-security-kerberos-installation.html#comment-form' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/1307618886206166956'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/1307618886206166956'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2017/06/cloudera-security-kerberos-installation.html' title='Cloudera Security - Kerberos Installation &amp; Configuration'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsOfyjC_2b0BcyBTblqzywarUhZm8m3vvEYCh743bjPh3EPi4vWHqMsJIbcPcp1XEgyICOzako8kJyN_JT2q9ShmU6pfXS2TsmTM0fSPCz353TtiGVK3I-VXFc_yi_VxUqujMUdGUAfBU/s72-c/Kerberos_Testing1.JPG" height="72" width="72"/><thr:total>3</thr:total><georss:featurename>Pune, Maharashtra, India</georss:featurename><georss:point>18.5204303 73.856743699999925</georss:point><georss:box>18.2795358 73.534020199999929 18.7613248 
74.17946719999992</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-3553569842564430043</id><published>2017-05-01T10:31:00.001+05:30</published><updated>2017-06-18T07:57:50.084+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Cloudera Multi-Node Cluster Installation</title><summary type="text">
Here we are going to set up a multi-node, fully distributed Cloudera Hadoop cluster configured with &quot;MySQL&quot; as the external database. We will also configure our cluster to authenticate using Kerberos and authorize using OpenLDAP as additional security implementations.





ENVIRONMENT SETUP &amp; CONFIGURATION

Operating System: CentOS-6.8

Cloudera Manager Version: 5.9.1

CDH Version: 5.9.1

We will </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/3553569842564430043/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2017/05/cloudera-multi-node-cluster-installation.html#comment-form' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/3553569842564430043'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/3553569842564430043'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2017/05/cloudera-multi-node-cluster-installation.html' title='Cloudera Multi-Node Cluster Installation'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikbi2PQSj6iZim8HxUessICFBicRQqQkGx2t8B0qVgeIdIc4M4ilOix3uBQgARSUi_YVqymy2Ktsq4Cs1B5QntRAKKURk_ZlgmRQX9IcelWUI8uxo1COeA-mp4Q1Z3gSnG45yaml8RWCY/s72-c/Mount+DVD.jpg" height="72" width="72"/><thr:total>2</thr:total><georss:featurename>Pune, Maharashtra, India</georss:featurename><georss:point>18.5204303 73.856743699999925</georss:point><georss:box>18.2795358 73.534020199999929 18.7613248 74.17946719999992</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-2941298903713132687</id><published>2017-01-05T10:07:00.001+05:30</published><updated>2017-01-05T10:10:13.513+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><title type='text'>Introduction to CyberSecurity Warheads</title><summary type="text">







&quot;Ransomware is more about manipulating vulnerabilities in human psychology than the adversary&#39;s technological sophistication.&quot;

--- James Scott, Sr. Fellow, Institute for Critical Infrastructure Technology

According to Yahoo&#39;s latest revelation, half a billion Yahoo user accounts were compromised two years ago. Twitter experienced an outage due to a massive DDoS attack on Dyn. </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/2941298903713132687/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2017/01/introduction-to-cybersecurity-warheads.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/2941298903713132687'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/2941298903713132687'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2017/01/introduction-to-cybersecurity-warheads.html' title='Introduction to CyberSecurity Warheads'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZOMbdIRreHCiE12KcZQ2m_GhIt75HDuSvWnGAQGknWj_3Z8xhK4v2t4G9poTU8SaJl5tQs_pB2INSzyNRkWMLiKHsrSbJxagpDQngq1sYtCPfZqTzlBI653ShgTqHBQOx6d3Wv5NV7ME/s72-c/cyber-security-warheads.jpg" height="72" width="72"/><thr:total>0</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-4345147737209608550</id><published>2016-07-24T22:39:00.000+05:30</published><updated>2017-04-24T06:11:53.859+05:30</updated><category
scheme="http://www.blogger.com/atom/ns#" term="Containers"/><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><category scheme="http://www.blogger.com/atom/ns#" term="LXD"/><title type='text'>Fully Distributed Hadoop Cluster with Automatic Failover Namenode HA Using QJM &amp; ZooKeeper on LXD Containers</title><summary type="text">



Various approaches have been taken to meet the increased demand for highly reliable infrastructures providing the &quot;five 9s&quot; standard of availability. The concept of high availability has long provided reliable solutions that protect critical systems against single points of failure and handle their increased system load in the most efficient way.

NameNode was a single point of failure in </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/4345147737209608550/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2016/07/fully-distributed-hadoop-cluster-with.html#comment-form' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/4345147737209608550'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/4345147737209608550'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2016/07/fully-distributed-hadoop-cluster-with.html' title='Fully Distributed Hadoop Cluster with Automatic Failover Namenode HA Using QJM &amp; ZooKeeper on LXD Containers'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLmodZHU8gal4MPxyQu1EQ5peAS_4DIVyW1YgryaFsxA6onkiYyeSQqccl_AJgJ4d4mXVufgPJwegNuRcr9O8QQXnNAjBJVaF8g6lIqSH1saAVKB49r_EowDSxLXSoZkcdaYiWUwzQQ1E/s72-c/elephant-enter-container.jpg" height="72" width="72"/><thr:total>1</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-2708339278644961896</id><published>2016-05-22T09:48:00.000+05:30</published><updated>2016-05-24T04:16:33.638+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Containers"/><category 
scheme="http://www.blogger.com/atom/ns#" term="Linux"/><category scheme="http://www.blogger.com/atom/ns#" term="LXD"/><title type='text'>Practice Guide for LXD - Canonical&#39;s OpenSource Container HyperVisor [Part-II]</title><summary type="text">


An LXD container on a single host is just like &quot;chroot on steroids&quot;. LXD&#39;s main goal is to provide an experience similar to virtual machines and hypervisors, but without the hardware virtualization technique. My previous post, &quot;Layman&#39;s Guide for LXD - Canonical&#39;s OpenSource Container Hypervisor [Part-I]&quot;, provides an introduction to containers and LXD.

This article will be based mostly on </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/2708339278644961896/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2016/05/practice-guide-for-lxd-canonicals.html#comment-form' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/2708339278644961896'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/2708339278644961896'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2016/05/practice-guide-for-lxd-canonicals.html' title='Practice Guide for LXD - Canonical&#39;s OpenSource Container HyperVisor [Part-II]'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiulYTCVbJRXfHCan4nyPQixEDhQoMGIZ5yBWudeNq1sQl16sqwT4Y3h9WDP8nGN51DuQ09S6BJilNXsLd4YP1IhkPG6wqc9u1HYqbLjTfR1ekQVILDZwgEJDunNupfo6CXUSIfSGQel6k/s72-c/luxx.jpg" height="72" width="72"/><thr:total>3</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-4614264765550367489</id><published>2016-05-07T09:10:00.000+05:30</published><updated>2016-05-07T09:33:12.565+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Containers"/><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><category 
scheme="http://www.blogger.com/atom/ns#" term="LXD"/><title type='text'>Layman&#39;s Guide for LXD - Canonical&#39;s OpenSource Container HyperVisor [Part-I]</title><summary type="text">




Introduction
Certainly, &quot;container&quot; is the new buzzword among techies. Everyone is talking about Docker, LXC and LXD. The continuous need to reduce costs, optimize performance, and maintain data availability along with data integrity has been the most prominent driver for most organizations, leading to a convergence of various virtualization concepts to develop an efficient </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/4614264765550367489/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2016/05/laymans-guide-for-lxd-canonicals.html#comment-form' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/4614264765550367489'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/4614264765550367489'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2016/05/laymans-guide-for-lxd-canonicals.html' title='Layman&#39;s Guide for LXD - Canonical&#39;s OpenSource Container HyperVisor [Part-I]'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVaYmpYXrQHqqDwPU2xl0dZiMuET5PPQPeyHgjf0FXR87tB2eXtQRHzKGMpBDHvk_vRuiOGSkZ64jArYI8-nyC-f8HJdvRqFPaH-xkBVVC1mUm69r4eoPh1iHkl1Mgnadgn-0UTv_L2Ak/s72-c/lxd.png" height="72" width="72"/><thr:total>2</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725
79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-1271184342481087388</id><published>2016-04-22T01:00:00.000+05:30</published><updated>2016-04-22T03:25:25.332+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><title type='text'>Life Cycle of MapReduce Job</title><summary type="text">
Here I will explain what happens behind the scenes of the job execution process in Hadoop MapReduce, or MRv1 (MapReduce version 1), from the time a user submits a job to the time the job is executed on the slave nodes.

MapReduce is a &quot;programming model/software framework&quot; designed to process large amount of data in parallel by dividing the job into a number of independent data local tasks. The term data </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/1271184342481087388/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2016/04/life-cycle-of-mapreduce-job.html#comment-form' title='71 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/1271184342481087388'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/1271184342481087388'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2016/04/life-cycle-of-mapreduce-job.html' title='Life Cycle of MapReduce Job'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>71</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-478910353216692557</id><published>2015-07-13T00:00:00.002+05:30</published><updated>2015-07-13T00:00:52.127+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Globus Toolkit Installation on CentOS</title><summary type="text">

This is a quickstart guide to install Globus Toolkit 6.0 using yum on CentOS-6.6. The steps mentioned in “GT 6 Quickstart Guide”, the official documentation of Globus Toolkit, were followed during the installation process. The GT 6.0 release provides both source and binary RPM packages for CentOS, which can be downloaded from here.

We have two servers, out of which one will
act as a </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/478910353216692557/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2015/07/globus-toolkit-installation-on-centos.html#comment-form' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/478910353216692557'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/478910353216692557'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2015/07/globus-toolkit-installation-on-centos.html' title='Globus Toolkit Installation on CentOS'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-7521157550123729614</id><published>2015-01-12T01:25:00.001+05:30</published><updated>2015-01-14T17:18:16.637+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Fully Distributed Hadoop Cluster - Automatic Failover HA Cluster with Zookeeper &amp; QJM</title><summary type="text">
After configuring an automatic failover HA with ZooKeeper and NFS, we will now configure an automatic failover HA with ZooKeeper and QJM.

We will use the Quorum Journal Manager to share edit logs between active and standby namenodes. Any namespace modification done by active namenode is recorded by the journal nodes. These journal node daemons can be run alongside any other daemons like </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/7521157550123729614/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2015/01/fully-distributed-hadoop-cluster.html#comment-form' title='9 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/7521157550123729614'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/7521157550123729614'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2015/01/fully-distributed-hadoop-cluster.html' title='Fully Distributed Hadoop Cluster - Automatic Failover HA Cluster with Zookeeper &amp; QJM'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>9</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-2330739302964857690</id><published>2015-01-11T20:27:00.001+05:30</published><updated>2016-07-14T08:20:56.123+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>ZooKeeper Installation &amp; Configuration</title><summary type="text">
In my previous post we configured a manual failover hadoop cluster. Now that we are clear on the hadoop high-availability features, we will gradually proceed towards configuring an automatic failover cluster. Prior to that, we need &#39;zookeeper&#39;.



What is ZooKeeper?
An excerpt from Apache ZooKeeper website -

ZooKeeper is a centralized service for maintaining configuration information, naming, </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/2330739302964857690/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2015/01/zookeeper-installation-configuration_11.html#comment-form' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/2330739302964857690'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/2330739302964857690'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2015/01/zookeeper-installation-configuration_11.html' title='ZooKeeper Installation &amp; Configuration'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-7882252132350461489</id><published>2015-01-11T20:00:00.001+05:30</published><updated>2015-01-12T01:29:28.154+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Fully Distributed Hadoop Cluster - Automatic Failover HA Cluster with Zookeeper &amp; NFS</title><summary type="text">

Since the manual failover mechanism was unable to automatically trigger a failover in case of namenode failure, an automatic failover mechanism was needed to provide a hot backup during a failover. This was achieved with zookeeper. We have already covered zookeeper installation &amp; configuration in my previous post.
To configure an automatic failover ha cluster we need more than one odd number of </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/7882252132350461489/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2015/01/fully-distributed-hadoop-cluster_11.html#comment-form' title='16 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/7882252132350461489'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/7882252132350461489'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2015/01/fully-distributed-hadoop-cluster_11.html' title='Fully Distributed Hadoop Cluster - Automatic Failover HA Cluster with Zookeeper &amp; NFS'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>16</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-6482166611719989550</id><published>2015-01-11T19:45:00.002+05:30</published><updated>2015-01-11T20:37:59.653+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Fully Distributed Hadoop Cluster - Manual Failover HA with NFS</title><summary type="text">


In my last post we had configured a Hadoop Federation cluster in a fully distributed mode. Next, in this post, we will set up a fully distributed manual failover hadoop HA cluster. I will skip the hadoop and java installation part, as we have already gone through those a couple of times in my previous posts. For further learning we will use the hardware configuration mentioned in the table below.
</summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/6482166611719989550/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2015/01/fully-distributed-hadoop-cluster-manual.html#comment-form' title='10 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/6482166611719989550'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/6482166611719989550'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2015/01/fully-distributed-hadoop-cluster-manual.html' title='Fully Distributed Hadoop Cluster - Manual Failover HA with NFS'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>10</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-7519605276887227459</id><published>2015-01-11T19:40:00.000+05:30</published><updated>2015-01-11T19:40:52.081+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Manual Installation of Oracle Java 8 (JDK 8u25) in RHEL/CentOS 5 &amp; Ubuntu 14.04</title><summary type="text">

This post will assist you in installing Oracle Java 8 in RHEL 5, CentOS 5 and Ubuntu 14.04.




Java Archive Download

Download the latest Java SE Development Kit 8 from its official download page.



Java Installation Using Alternatives

RHEL/CentOS

Extract the tarball; we will install it in the /opt directory.



root:~# cd /opt

root:~# tar -xzvf jdk-8u25-linux-x64.tar.gz

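Once the tarball is extracted, you would typically point JAVA_HOME at the new JDK before registering it with alternatives. A minimal sketch, assuming the default jdk1.8.0_25 directory name from the tarball version:

```shell
# Minimal sketch: point JAVA_HOME at the extracted JDK and put its
# bin directory on the PATH.
# The /opt/jdk1.8.0_25 directory name is an assumption based on the
# jdk-8u25 tarball version, not taken from the post.
export JAVA_HOME=/opt/jdk1.8.0_25
export PATH="$JAVA_HOME/bin:$PATH"
```

These exports can be placed in a profile script (for example under /etc/profile.d/) so they persist across sessions.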
root:~# </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/7519605276887227459/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2015/01/manual-installation-of-oracle-java-8.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/7519605276887227459'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/7519605276887227459'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2015/01/manual-installation-of-oracle-java-8.html' title='Manual Installation of Oracle Java 8 (JDK 8u25) in RHEL/CentOS 5 &amp; Ubuntu 14.04'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9ZH-5gotmGrZpmco4nWBTF1aTCgRm2eKdPv_VkCtPiSdZ_DRwySA0UlmceeywZ_cKGQlmucx5igW_nC0mNFdThFUmxyL2zDGKgTCprcxwmTpY3XBBf7aPSh8k1e_aWjjA4kXq-CQfyCI/s72-c/Screenshot+from+2015-01-11+15:43:25.png" height="72" width="72"/><thr:total>0</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-7883471187426123813</id><published>2015-01-04T00:00:00.000+05:30</published><updated>2015-01-11T20:42:53.994+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Fully Distributed 
Hadoop Federation Cluster</title><summary type="text">




Federation Concepts


“HDFS
Federation improves the existing HDFS architecture through a clear
separation of namespace and storage, enabling generic block storage
layer. It enables support for multiple namespaces in the cluster to
improve scalability and isolation. Federation also opens up the
architecture, expanding the applicability of HDFS cluster to new
implementations and use cases.”


I</summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/7883471187426123813/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2015/01/fully-distributed-hadoop-federation.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/7883471187426123813'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/7883471187426123813'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2015/01/fully-distributed-hadoop-federation.html' title='Fully Distributed Hadoop Federation Cluster'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total><georss:featurename>Hyderabad, Telangana, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-517597553553309304</id><published>2014-06-05T19:30:00.000+05:30</published><updated>2015-01-22T21:36:47.889+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Multi-Node Hadoop Cluster On Ubuntu 14.04</title><summary type="text">




In this tutorial I will
describe the steps for setting up a multi-node (five-node) Hadoop cluster running
on Ubuntu. I have created this setup inside a VMware ESXi box. There are five
virtual machines, all with the same configuration: 20GB
HDD, 512MB RAM, etc. I will be using Ubuntu 14.04 LTS as
the operating system and Hadoop 1.2.1. All the machines will be
</summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/517597553553309304/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2014/06/multi-node-hadoop-cluster-on-ubuntu-1404.html#comment-form' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/517597553553309304'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/517597553553309304'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2014/06/multi-node-hadoop-cluster-on-ubuntu-1404.html' title='Multi-Node Hadoop Cluster On Ubuntu 14.04'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8BR4QBIzcmAjh7A6TnAUVfruTcHsuru-hlAFCka9Q_xCfwkukFCpZsdGsrIIeYlrsx6YWZ1vPOJa7cJhqQ7NyFyTrq9c5_fpt3DzxDCisPzHOAoVmV6L3fGUzYAHIcPKH61prUqVjCrg/s72-c/Screenshot.png" height="72" width="72"/><thr:total>3</thr:total><georss:featurename>Hyderabad, Andhra Pradesh, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-3888964351789429042</id><published>2014-06-05T19:00:00.000+05:30</published><updated>2014-06-06T01:27:36.965+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Single-Node Hadoop Cluster on Ubuntu 14.04</title><summary type="text">

In this tutorial I will demonstrate how to install and run a single-node
Hadoop cluster on Ubuntu 14.04.



JAVA INSTALLATION

As a prerequisite, Java needs to be installed.

user@hadoop-lab:~$ sudo apt-get install openjdk-7-jdk



HADOOP USER &amp; GROUP CREATION

Create a dedicated user account and group for hadoop.


user@hadoop-lab:~$ sudo groupadd hadoop

user@hadoop-lab:~$
sudo useradd -m -d</summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/3888964351789429042/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2014/06/single-node-hadoop-cluster-on-ubuntu.html#comment-form' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/3888964351789429042'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/3888964351789429042'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2014/06/single-node-hadoop-cluster-on-ubuntu.html' title='Single-Node Hadoop Cluster on Ubuntu 14.04'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsNHtQJdlOFjP1CLl4RvKRI3QP7CTqrSy656yqJjyybeJefhpVZbfaoErQc0AyBRPv9vovboeK9yMhRPMwuInltREpYLlGxJkgOjG7-Y0Z766H950-OyD7aaQ7qT8UU2aE69KHmEC-9vU/s72-c/Screenshot.png" height="72" width="72"/><thr:total>3</thr:total><georss:featurename>Hyderabad, Andhra Pradesh, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-86611403262082436</id><published>2014-05-28T19:13:00.000+05:30</published><updated>2014-06-06T08:53:51.671+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Hadoop"/><category scheme="http://www.blogger.com/atom/ns#" term="Solaris"/><title type='text'>Multi-Node Hadoop Cluster on Oracle Solaris 11 using 
Zones</title><summary type="text">

This tutorial
demonstrates how to set up an Apache Hadoop 1.2.1 cluster using Oracle
Solaris 11.1 virtualization technology (Zones).

I am running this setup
inside Oracle VM VirtualBox 4.3.10 on Ubuntu 12.04, and the guest
machine is running Oracle Solaris 11. The NameNode will run inside the global
zone, while we will configure four guest zones with almost the same
configuration for separate “Secondary</summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/86611403262082436/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2014/05/multi-node-hadoop-cluster-on-oracle.html#comment-form' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/86611403262082436'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/86611403262082436'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2014/05/multi-node-hadoop-cluster-on-oracle.html' title='Multi-Node Hadoop Cluster on Oracle Solaris 11 using Zones'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhf59Kl5zpE3hI-43QsIse0PrctK3NuIs0veKmrHhgvTc1w-jlphyphenhyphenwvb86bxQaYJiDJvO76PdN_dbuHa5sGcB0kFW5jEYvZIrVJqaUROY53WV8BwnujSFXUw73rZofkdXHs47mIVHKsjiE/s72-c/Screenshot-5.png" height="72" width="72"/><thr:total>2</thr:total><georss:featurename>Hyderabad, Andhra Pradesh, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002155 77.841224 17.8698725 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-6542367460070591090</id><published>2014-05-28T19:08:00.002+05:30</published><updated>2014-05-28T19:52:42.725+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Solaris"/><title type='text'>Creating Local IPS Repository in Oracle Solaris 11.1</title><summary type="text">



A
repository is a location where clients publish and retrieve
packages. There are two ways to obtain a copy of the Oracle Solaris 11.1
IPS (Image Packaging System) repository image: one is to
download the repository image from the Oracle
Solaris 11 website and create a local repository, and the other
is to retrieve the repository directly from the internet. Here
we dont have internet</summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/6542367460070591090/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2014/05/creating-local-ips-repository-in-oracle.html#comment-form' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/6542367460070591090'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/6542367460070591090'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2014/05/creating-local-ips-repository-in-oracle.html' title='Creating Local IPS Repository in Oracle Solaris 11.1'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPtOBBYKOe3Gtt1LN128ENzqIWFVDvTmpPSgkau7v0ADdSXjJXu7ADd11rkkL-QBIgTEJKIYkeAq2pVzI6EGEIYgK3C2mXQjbnBd4EXjPQr5qxGlCt9XuwZxBjZQ48pczsxwiw95dms5I/s72-c/Screenshot-1.png" height="72" width="72"/><thr:total>1</thr:total><georss:featurename>Hyderabad, Andhra Pradesh, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>17.385044 78.486671 17.385044 78.486671</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-6224748229683617612</id><published>2013-04-07T13:38:00.000+05:30</published><updated>2013-04-07T13:38:07.109+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Upgrading Linux Kernel 3.8.6 (Stable) release in Ubuntu 12.10</title><summary type="text">
Linux kernel 3.8.6 was released on April 05, 2013 with lots of
bugfixes and improvements.

The changelog contains all the improvements and bugfixes made in the kernel.




Below are some of the important fixes and improvements:

* ipv6: fix bad free of addrconf_init_net
* net: ethernet: cpsw: fix erroneous condition in error check

</summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/6224748229683617612/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2013/04/upgrading-linux-kernel-386-stable.html#comment-form' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/6224748229683617612'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/6224748229683617612'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2013/04/upgrading-linux-kernel-386-stable.html' title='Upgrading Linux Kernel 3.8.6 (Stable) release in Ubuntu 12.10'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total><georss:featurename>Hyderabad, Andhra Pradesh, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002075 77.841224 17.8698805 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-8058445017127822596</id><published>2013-04-01T13:23:00.002+05:30</published><updated>2013-04-01T13:23:49.550+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Upgrading Linux Kernel 3.8.5 (Stable) release in Ubuntu 12.10</title><summary type="text">


Linux kernel 3.8.5 was released on March 28, 2013 with lots of
bugfixes and improvements.

The changelog contains all the improvements and bugfixes made in the kernel.



Below are some of the important fixes and improvements:

* drm/radeon: fix backend map setup on 1 RB trinity boards
* drm/radeon: fix S/R on VM systems (cayman/TN/SI)
* ARM: tegra:
 fix register</summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/8058445017127822596/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2013/04/upgrading-linux-kernel-385-stable.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/8058445017127822596'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/8058445017127822596'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2013/04/upgrading-linux-kernel-385-stable.html' title='Upgrading Linux Kernel 3.8.5 (Stable) release in Ubuntu 12.10'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total><georss:featurename>Hyderabad, Andhra Pradesh, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>17.385044 78.486671 17.385044 78.486671</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-3117631900848704623</id><published>2013-04-01T13:23:00.001+05:30</published><updated>2013-04-01T13:23:33.368+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Upgrading Linux Kernel 3.8.4 (Stable) release in Ubuntu 12.10</title><summary type="text">


Linux kernel 3.8.4 was released on March 20, 2013 with lots of
bugfixes and improvements.

The changelog contains all the improvements and bugfixes made in the kernel.



Below are some of the important fixes and improvements:

* ALSA: seq: Fix missing error handling in snd_seq_timer_open()
* tty: serial: fix typo &quot;SERIAL_S3C2412&quot;
* ext3: Fix format string issues
</summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/3117631900848704623/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2013/04/upgrading-linux-kernel-384-stable.html#comment-form' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/3117631900848704623'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/3117631900848704623'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2013/04/upgrading-linux-kernel-384-stable.html' title='Upgrading Linux Kernel 3.8.4 (Stable) release in Ubuntu 12.10'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total><georss:featurename>Hyderabad, Andhra Pradesh, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>17.385044 78.486671 17.385044 78.486671</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-1621808325905453154</id><published>2013-04-01T13:23:00.000+05:30</published><updated>2013-04-11T16:56:15.194+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Installing BFQ I/O Scheduler using Liquorix Kernel</title><summary type="text">


An
operating system is considered stable when the software and
hardware of a machine work hand in hand. Neither your
favorite operating system nor the best hardware alone
can deliver the best performance. Hence, if we
choose the best hardware, we also have to tune the operating system
we install to run efficiently on that hardware.



</summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/1621808325905453154/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2013/04/installing-bfq-io-scheduler-using.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/1621808325905453154'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/1621808325905453154'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2013/04/installing-bfq-io-scheduler-using.html' title='Installing BFQ I/O Scheduler using Liquorix Kernel'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOT_QCNfTdrmv97nTOGqHNMAQDYLVO21dE6NwyF_3g7veqB4qtjWotrhmsFMQ0Ll7WlNUID2gpbEc-KJSnkSk5c7AWBE1-uzzqmE57KrA5Xdx3h6ObhG_zgfIq21WNDKO7q2Q2Pzx4C_4/s72-c/gcc_repo.png" height="72" width="72"/><thr:total>0</thr:total><georss:featurename>Hyderabad, Andhra Pradesh, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002075 77.841224 17.8698805 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-8718552194110777494</id><published>2013-03-12T08:07:00.000+05:30</published><updated>2013-04-11T16:35:06.369+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Ubuntu Performance Tuning</title><summary type="text">


There are several articles around the
web on improving the performance of Ubuntu Linux. I have jotted down
some common tweaks that I have tried on my own system, and they have
brought about quite a significant improvement in its performance.
This is possible because the Linux kernel and Ubuntu are flexible enough to let you make the modifications you want on
the fly. I</summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/8718552194110777494/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2013/03/ubuntu-performance-tuning.html#comment-form' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/8718552194110777494'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/8718552194110777494'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2013/03/ubuntu-performance-tuning.html' title='Ubuntu Performance Tuning'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total><georss:featurename>Hyderabad, Andhra Pradesh, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>16.9002075 77.841224 17.8698805 79.132118</georss:box></entry><entry><id>tag:blogger.com,1999:blog-2807468768188634584.post-1088618283911378143</id><published>2013-03-01T01:00:00.000+05:30</published><updated>2013-04-01T13:21:23.075+05:30</updated><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><title type='text'>Upgrading Linux Kernel 3.7.6 (Stable) release in Ubuntu 12.10</title><summary type="text">



 
Linux kernel 3.7.6 was released on February 04, 2013 with lots of
bugfixes and improvements.

The changelog contains all the improvements and bugfixes made in the kernel.



Below are some of the important fixes and improvements:

* drm/i915: fix FORCEWAKE posting reads
* ALSA: hda - Fix non-snoop page handling
* ALSA: hda - fix
 inverted internal mic on Acer </summary><link rel='replies' type='application/atom+xml' href='http://hashprompt.blogspot.com/feeds/1088618283911378143/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://hashprompt.blogspot.com/2013/03/upgrading-linux-kernel-376-stable.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/1088618283911378143'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2807468768188634584/posts/default/1088618283911378143'/><link rel='alternate' type='text/html' href='http://hashprompt.blogspot.com/2013/03/upgrading-linux-kernel-376-stable.html' title='Upgrading Linux Kernel 3.7.6 (Stable) release in Ubuntu 12.10'/><author><name>Baban Gaigole</name><uri>http://www.blogger.com/profile/10440029644921138328</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total><georss:featurename>Hyderabad, Andhra Pradesh, India</georss:featurename><georss:point>17.385044 78.486671</georss:point><georss:box>-35.729976 -4.1305164999999988 70.500064000000009 161.1038585</georss:box></entry></feed>