Archive for the ‘yum’ Tag

Creating a puppet ready image (CentOS/Fedora)

Cloud computing and being lazy

The need to create template images in our cloud environment is obvious, especially with Amazon EC2 offering an amazing API and spot instances at ridiculously low prices.
In the following post I’ll show what I do in order to prepare a “puppet-ready” image.

Puppet to the rescue

In my environment I have puppet configured and provisioning all of my machines. With puppet I can deploy anything I need – “if it’s not in puppet – it doesn’t exist”.
Coupled with Puppet Dashboard, the interface for manually adding nodes is rather simple. But doing stuff manually is slow. I assume that, given the right base image, I (and you) can deploy and configure that machine with puppet.
In other words, the ability to convert a bare machine to a usable machine is taken for granted (although it is heaps of work on its own).

Handling the “bare” image

Most cloud computing providers today give you an interface for starting/stopping/provisioning machines on their cloud.
The images these providers supply are usually bare, such as a CentOS 6.3 with nothing on it. Configuring an image like that requires some manual labour, as you can’t even auto-login to it without some random password or something similar.

Create a “puppet ready” image

So if I boot up a simple CentOS 6.x image, these are the steps I take to configure it to be “puppet ready” (and I only need to do this once per cloud computing provider):

# install EPEL, because it's really useful
rpm -q epel-release-6-8 || rpm -Uvh http://download.fedoraproject.org/pub/epel/6/`uname -i`/epel-release-6-8.noarch.rpm

# install puppet labs repository
rpm -q puppetlabs-release-6-6 || rpm -ivh http://yum.puppetlabs.com/el/6/products/i386/puppetlabs-release-6-6.noarch.rpm

# i usually disable selinux, because it's mostly a pain
setenforce 0
sed -i -e 's!^SELINUX=.*!SELINUX=disabled!' /etc/selinux/config

# install puppet
yum -y install puppet

# basic puppet configuration
echo '[agent]' > /etc/puppet/puppet.conf
echo '  pluginsync = true' >> /etc/puppet/puppet.conf
echo '  report = true' >> /etc/puppet/puppet.conf
echo '  server = YOUR_PUPPETMASTER_ADDRESS' >> /etc/puppet/puppet.conf
echo '  rundir = /var/run/puppet' >> /etc/puppet/puppet.conf

# run an update
yum update -y

# it is highly recommended to install here any package you might deploy later on
# the reason: it will save a lot of precious time to install 'httpd' just once
# instead of 300 times when you deploy 300 machines
# it is also recommended to run any 'baseline' configuration you have for your nodes here,
# such as changing the SSH port or applying a common firewall configuration
yum install -y MANY_PACKAGES_YOU_MIGHT_USE

# and now comes the cleanup phase, where we actually make the machine "bare", removing
# any identity it could have

# set machine hostname to 'changeme'
hostname changeme
sed -i -e "s/^HOSTNAME=.*/HOSTNAME=changeme/" /etc/sysconfig/network

# remove puppet generated certificates (they should be recreated)
# depending on your puppet configuration the ssldir may be /etc/puppet/ssl or /var/lib/puppet/ssl
rm -rf /etc/puppet/ssl /var/lib/puppet/ssl

# stop puppet; the hostname should be changed before it is allowed to run again
service puppet stop; chkconfig puppet off

# remove SSH keys - they should be recreated with the new machine identity
rm -f /etc/ssh/ssh_host_*

# finally add your key to authorized_keys
mkdir -p /root/.ssh; echo "YOUR_SSH_PUBLIC_KEY" > /root/.ssh/authorized_keys

Power off the machine and create an image. This is your “puppet-ready” image.
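On EC2, for example, the image creation can also be scripted; a minimal sketch using the AWS CLI (the instance ID and image name below are placeholders):

# stop the instance and register an AMI from it
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "centos6-puppet-ready"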

Using the image

Now you’re good to go: any machine you create from now on should be based on that image.

When creating a new machine the steps you should follow are:

  • Start the machine with the “puppet-ready” image
  • Set the machine’s hostname:

    hostname=uga.bait.com
    hostname $hostname
    sed -i -e "s/^HOSTNAME=.*/HOSTNAME=$hostname/" /etc/sysconfig/network

  • Run ‘puppet agent --test’ to generate a new certificate request
  • Add the puppet configuration for the machine; for puppet dashboard it’ll be something similar to:

    hostname=uga.bait.com
    sudo -u puppet-dashboard RAILS_ENV=production rake -f /usr/share/puppet-dashboard/Rakefile node:add name=$hostname
    sudo -u puppet-dashboard RAILS_ENV=production rake -f /usr/share/puppet-dashboard/Rakefile node:groups name=$hostname groups=group1,group2
    sudo -u puppet-dashboard RAILS_ENV=production rake -f /usr/share/puppet-dashboard/Rakefile node:parameters name=$hostname parameters=parameter1=value1,parameter2=value2

  • Authorize the machine in the puppetmaster if autosign is disabled (see the example after this list)
  • Run puppet:

    # initial run, might actually change stuff
    puppet agent --test
    service puppet start; chkconfig puppet on
    
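For the authorization step, this is roughly what you would run on the puppetmaster (Puppet 3.x syntax, using the same example hostname):

# on the puppetmaster: list pending certificate requests and sign the new node
puppet cert list
puppet cert sign uga.bait.com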

This is 90% of the work needed to quickly create usable machines on the fly; it shortens the process significantly and can easily be adapted to virtually any cloud computing provider!

I personally have it all scripted and a new instance on EC2 takes me 2-3 minutes to load + configure. It even notifies me politely via email when it’s done.
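To give an idea, here is a rough sketch of what such a wrapper script could look like; the puppetmaster address, dashboard groups, notification address and the assumption of root SSH access are placeholders for your own setup:

#!/bin/bash
# rough sketch: turn a freshly booted "puppet-ready" instance into a configured node
# usage: ./new-node.sh <new machine IP> <desired fqdn>
set -e

ip=$1
fqdn=$2
puppetmaster=YOUR_PUPPETMASTER_ADDRESS
groups=group1,group2
rake="sudo -u puppet-dashboard RAILS_ENV=production rake -f /usr/share/puppet-dashboard/Rakefile"

# set the hostname on the new machine
ssh root@$ip "hostname $fqdn && sed -i -e \"s/^HOSTNAME=.*/HOSTNAME=$fqdn/\" /etc/sysconfig/network"

# generate a certificate request (puppet agent --test exits non-zero here, so don't abort)
ssh root@$ip "puppet agent --test" || true

# register the node in puppet dashboard and sign its certificate, on the puppetmaster
ssh root@$puppetmaster "$rake node:add name=$fqdn && $rake node:groups name=$fqdn groups=$groups"
ssh root@$puppetmaster "puppet cert sign $fqdn"

# first real puppet run, then enable the agent service
ssh root@$ip "puppet agent --test" || true
ssh root@$ip "service puppet start; chkconfig puppet on"

# the polite email notification
echo "$fqdn ($ip) is ready" | mail -s "new node: $fqdn" you@example.com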

I’m such a lazy bastard.

Posted March 23, 2013 by malkodan in Bash, Linux, System Administration


got r00t?

Introduction

Landing in a new startup company has its pros and cons.
The pros being:

  1. You can do almost whatever you want

The cons:

  1. You have to do it from scratch!

The Developers

Linux developers are not dumb. They can’t be. If they were dumb, they couldn’t have developed anything on Linux. They might have been called developers on some other platforms.
Quite early on I was confronted with the question:
“Am I, as a SysAdmin, going to give those Linux developers root access on their machines?”

Why not:

  1. They can cause a mess and break their system in a second.
    A fellow developer (the chowner) once ran:

    # chown -R his_username:his_group *
    

    He came to me saying “My Linux workstation stopped working well!!!”
    Later on I also discovered that he was in / when he ran this command! 🙂
    In his defence he added: “But I stopped the command quickly, after I saw the mistake!”

  2. And there’s no 2; I think this is the only real reason, given that these are actually people I generally trust.

Why yes:

  1. They’ll bother me less with small things such as mounting/unmounting media.
  2. If they need to perform any other administrative action – they’ll learn from it.
  3. Heck, it’s their own workstation; if they really want root access they’ll get it anyway, so who am I to play god with them?

Choosing the latter and letting the developers rejoice with root access on their machines, I had to take some proactive measures in order to avoid unwanted situations.

Installation

Your flavor of installation should be idempotent: let the user destroy his workstation, but be able to reinstall it and get back to the same state.
Let’s take the chowner developer for example. His workstation was ruined, and I never even considered changing the permissions back to their original values; in the long run it would cause much more trouble than good.
We reinstalled his workstation and after 15 minutes he was happily back to development.

Automatic network installations are so easy to implement on Linux today that if you don’t have one, you must be living in medieval times.
I can give you one suggestion about partitioning though: make sure your developers have /home on a separate partition. When reinstalling, it is then easy to preserve /home and wipe all the rest.
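Just as an illustration, with a kickstart based network installation the partitioning section could look something like this (sizes and filesystem types are arbitrary examples):

# keep /home on its own partition so it survives a reinstall
part /boot --fstype=ext4 --size=500
part swap  --size=4096
part /     --fstype=ext4 --size=20480
part /home --fstype=ext4 --size=1 --grow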

Consolidating software

I consider installing non-packaged software on Linux a very dirty action.
The reasons for that are:

  1. You can’t uninstall it using standard ways
  2. You can’t upgrade it using standard ways
  3. You can’t keep track of it

In addition to installing only packaged software, you must also have all your workstations and servers synchronized against the same software repositories.
If user A installs software from repository A and user B from repository B, they might run into different behavior on their software.
Have you ever heard: “How come it works on my computer and doesn’t work on yours??”
As a SysAdmin, you must reduce the chances of this happening to zero.

How do you do it?
On CentOS, set up a YUM repository and cache whatever packages you need from the various internet repositories out there.
Debian? Just the same, only with apt.
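A minimal sketch of how that caching could look on CentOS, assuming a web server on the repository host and a hypothetical repo.example.com address (gpgcheck is disabled here only for brevity):

# on the repository host: mirror the upstream packages and publish them over HTTP
yum -y install yum-utils createrepo httpd
reposync --repoid=base --repoid=updates --download_path=/var/www/html/mirror
createrepo /var/www/html/mirror/base
createrepo /var/www/html/mirror/updates
service httpd start; chkconfig httpd on

# on every workstation and server: point yum at the internal mirror
cat > /etc/yum.repos.d/internal.repo << 'EOF'
[internal-base]
name=internal mirror - base
baseurl=http://repo.example.com/mirror/base
enabled=1
gpgcheck=0
EOF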

Remember – if you have any software on workstations that is not well packaged or not well controlled – you’ll run into awkward situations very soon.

Today

Up until today the Linux developers in my company still possess their root access, but they barely use it. To be honest, I don’t think they even really need it. However, they have it. It is also about educating the developers: they are given root access because they are trusted. If they blow it, it’s mostly their fault, not yours.

I’ll continue to let them be root when needed. They have proved worthy so far.
And I’ll ask you another question: do you really think that someone who can’t handle his own workstation can be a good developer? Think again!

Posted December 5, 2009 by malkodan in System Administration

