Creating an AMI Image as a backup with Python

We all know the importance of having current backups. Let’s take a look at programmatically selecting a server based on its Name tag (in my case I decided to back up the private git server we set up previously).

We can also utilize a similar setup to create load-balanced servers for our web apps, or reuse a base image much like a Docker image.

Let’s do our imports. I decided to use boto (version 2) for ease of sorting instance tags.

#!/usr/bin/python
# -*- coding: utf-8 -*-
# import our dependencies
import os
import uuid

from boto.ec2 import connect_to_region


class AMICreation(object):

    def __init__(self):
        # this section is for windows users storing their keys in environment variables
        # os.environ['AWS_ACCESS_KEY_ID']
        # os.environ['AWS_SECRET_ACCESS_KEY']
        # os.environ['AWS_DEFAULT_REGION'] = 'us-west-2'

        self.connection = connect_to_region('us-west-2')
        # collect all instances across reservations
        self.instances = [i for r in self.connection.get_all_instances()
                          for i in r.instances]
        self.has_error = 'no'

    # create an AMI from a running instance
    def create_image_id(self, instance, description=None, no_reboot=False,
                        ami_name=None):
        # name our image and create it
        image_id = instance.create_image(ami_name, description=description,
                                         no_reboot=no_reboot)

        if 'ami' in image_id:
            print image_id
            return 'success'
        else:
            return 'failure'

    def find_instance_id_and_create(self, servername, description, no_reboot):
        # iterate through our instances and find the Name tag matching "gitserver"
        for i in self.instances:
            if 'Name' not in i.tags:
                continue

            state = i.state
            name = i.tags['Name']
            instance_id = str(i.id)
            print name, state, instance_id

            if name.lower() == servername.lower():
                # append a short random suffix so AMI names stay unique
                ami_name = servername.lower() + '-' \
                    + str(uuid.uuid4().fields[-1])[:5]

                status = self.create_image_id(i, str(description),
                                              no_reboot, ami_name)

                self.has_error = 'no' if status == 'success' else 'yes'
                return self

        self.has_error = 'no instances named %s' % servername.lower()
        return self


AMICreator = AMICreation()
AMICreator.find_instance_id_and_create(
    'gitserver', 'this is a git server backup base image', False)

if str(AMICreator.has_error) == 'no':
    print 'success'
else:
    print AMICreator.has_error
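One detail worth a closer look is how the script keeps AMI names unique. Isolated as a small sketch (the `make_ami_name` helper is my own naming for illustration, not part of the script above), the trick is just a truncated field of a random UUID:

```python
import uuid

def make_ami_name(servername):
    """Lowercase the server name and append up to five random digits,
    taken from the last field of a random UUID, so repeated backups
    of the same server get distinct AMI names."""
    suffix = str(uuid.uuid4().fields[-1])[:5]
    return servername.lower() + '-' + suffix

print(make_ami_name('GitServer'))
```

Note the suffix is only unlikely to collide, not guaranteed unique; a timestamp would be a more deterministic choice.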

We have now created a backup image of our EC2 instance, and we can easily restore to this point in time if we ever need to.

Setting up a Private Git Server

Git is a versioning system that is used by millions of users around the world. Developed by Linus Torvalds in April of 2005, Git is used for over 21.8 million repositories.

“Why not just use GitHub?” was the first question I asked when considering why I should write this article. GitHub, along with other hosted repository services, usually allows only a few private repositories. This presents a dilemma for the little guy: should we pay for more private repos, spread our repositories out over multiple services, or host our own private git server?

There are benefits to hosting your own git server: unlimited private repos and finer control over user and group privileges, just to name a couple. Now that we have looked at the options available and weighed the pros and cons of each service, maybe you have decided to host your own git server.

First things first: which open source git server should we use? I decided to utilize GitLab, which is open source, readily available, and has a web-based GUI.

Before we install GitLab, I recommend installing Postfix and setting up an SMTP email server so that GitLab can send emails when needed.

Assuming you have already installed and set up Postfix, let’s move on to GitLab.

Download the packages using wget. Then install the package:

wget https://downloads-packages.s3.amazonaws.com/ubuntu-14.04/gitlab_7.9.4-omnibus.1-1_amd64.deb
sudo dpkg -i gitlab_7.9.4-omnibus.1-1_amd64.deb

Now we need to configure GitLab. Open the config file:

nano /etc/gitlab/gitlab.rb

Edit the ‘external_url’ setting, give it the server’s domain, save the file, and then apply the configuration:

sudo gitlab-ctl reconfigure
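For reference, the relevant line in /etc/gitlab/gitlab.rb looks something like this (git.example.com is a placeholder for your own domain):

```ruby
# /etc/gitlab/gitlab.rb
external_url 'http://git.example.com'
```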


In your web browser, open your GitLab site, using ‘root’ for the system admin username and ‘5iveL!fe’ for the password. Change your password after your first login for obvious security reasons.

Thank you for utilizing this quick and simple installation guide for installing and setting up your own private git server.

Automating ELK Stack Installation

Last time we installed an ELK stack on AWS. Today let’s set up an automation script using Python 2.7 to automate the installation of an ELK server.

Let’s import the necessary modules.

import os
import boto3

We set up our access keys using environment variables so we don’t accidentally publish this information to a public repository. Then we set the region where we want our AWS EC2 instance.

os.environ["AWS_ACCESS_KEY_ID"]
os.environ["AWS_SECRET_ACCESS_KEY"]
os.environ["AWS_DEFAULT_REGION"] = "us-west-2"

Let’s create the bash script that will pass to our instance once it is created. Our bash commands need to contain information for installing Java, creating the repositories for ElasticSearch, Logstash, and Kibana. We also need to include commands for starting our services and configuring the config files.

#Bash commands for installing elk stack
userdata = """#!/bin/bash
sudo su
cd ~
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u73-b02/jdk-8u73-linux-x64.rpm"
yum -y localinstall jdk-8u73-linux-x64.rpm
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
#create a new repo for elasticsearch using rawgit.com to create a downloadable link to the file needed
wget "https://cdn.rawgit.com/lanerjo/aws_ELK_stack_launcher/master/elasticsearch.repo" -P /etc/yum.repos.d/
#install elasticsearch
yum -y install elasticsearch
#append network.host to the elasticsearch config
sed -i '$a network.host: localhost' /etc/elasticsearch/elasticsearch.yml
service elasticsearch start
chkconfig elasticsearch on
#kibana
#add the kibana repo, again using rawgit.com for a downloadable link to the file needed
wget 'https://cdn.rawgit.com/lanerjo/aws_ELK_stack_launcher/master/kibana.repo' -P /etc/yum.repos.d/
#install kibana
yum -y install kibana
#append server.host to the kibana config
sed -i '$a server.host: "localhost"' /opt/kibana/config/kibana.yml
#start kibana
service kibana start
#install logstash
#add the logstash repo
wget 'https://cdn.rawgit.com/lanerjo/aws_ELK_stack_launcher/master/logstash.repo' -P /etc/yum.repos.d/
#install logstash
yum -y install logstash
service logstash start
"""

Finally we create and start our instance, pass in our bash script, and print the information we need about our server.

#creating the ec2 instance on AWS using a predefined security group, t2.micro size, and an Amazon Linux machine image
ec2 = boto3.resource('ec2')
instances = ec2.create_instances(
    ImageId='ami-7172b611',
    InstanceType='t2.micro',
    KeyName='AWS_Testing',
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=['Jenkins'],
    UserData=userdata,
)
#start the instance and print the instance id, state, public dns, and public ip
for instance in instances:
    print("Waiting until running...")
    instance.wait_until_running()
    instance.reload()
    print((instance.id, instance.state, instance.public_dns_name,
           instance.public_ip_address))

Running this script from the command line will start our automated ELK stack installation on a new AWS EC2 instance.
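Keep in mind the user-data script runs asynchronously, so the ELK services will not be answering the moment the instance reports running. A minimal, generic polling helper (my own sketch, not part of the script above) can wait until an arbitrary check passes:

```python
import time

def wait_for(check, timeout=300, interval=5):
    """Call check() every `interval` seconds until it returns True,
    giving up after `timeout` seconds. Returns True on success,
    False if the timeout expires first."""
    deadline = time.time() + deadline_offset if False else time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

For example, wrapping a socket connection attempt to port 5601 in `check` would block until Kibana is reachable (the probe function itself is left to you).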

Up next: creating our own private Git server.

Setting up an ELK Stack on AWS

ELK Stack: this was a new term to me before I undertook this process, and any new task can seem overwhelming the first time you take it on.

ELK stands for Elasticsearch, Logstash and Kibana. Elasticsearch is a NoSQL database that allows NRT (near real time) queries. Kibana offers a nice interactive interface for analyzing data stored in Elasticsearch. Logstash is the intermediary between Elasticsearch and Kibana.

ELK has a large open source community, making this set of utilities quite popular. There are plenty of guides out there and the documentation is helpful. This article will not cover using an ELK stack in a production environment; we will be setting up a test stack and getting familiar with the process. However, setting up an ELK stack for a production environment would not require changing this process much.

Getting Started:

Every component of our ELK stack requires Java. Let’s get busy and start setting up Java on an Ubuntu AWS instance via SSH and shell commands. Make sure you have root access: sudo su

Installing Java:


  1. apt-get update
  2. apt-get upgrade
  3. apt-get install openjdk-7-jre-headless

Installing Elasticsearch:


  1. wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
  2. echo “deb http://packages.elastic.co/elasticsearch/1.7/debian stable main” | sudo tee -a /etc/apt/sources.list.d/elasticsearch-1.7.list
  3. apt-get update
  4. apt-get install elasticsearch
  5. service elasticsearch restart

Installing Logstash:


  1. echo “deb http://packages.elasticsearch.org/logstash/1.5/debian stable main” | sudo tee -a /etc/apt/sources.list
  2. apt-get update
  3. apt-get install logstash
  4. service logstash start

Create a config file for Logstash:


vi /etc/logstash/conf.d/10-syslog.conf

input {
  file {
    type => "syslog"
    path => [ "/var/log/messages", "/var/log/*.log" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    host => "localhost" # for production, use the internal IP of your Elasticsearch server
  }
}

Save and quit with :wq, then restart the service:

service logstash restart


Kibana Installation:


  1. wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
  2. tar -xzf kibana-4.1.1-linux-x64.tar.gz
  3. mkdir -p /opt/kibana
  4. mv kibana-4.1.1-linux-x64/* /opt/kibana
  5. cd /etc/init.d && sudo wget https://raw.githubusercontent.com/akabdog/scripts/master/kibana4_init -O kibana4
  6. chmod +x /etc/init.d/kibana4
  7. service kibana4 start

Testing our installs:

Once Kibana has started, point your browser to ‘http://YOUR_ELASTIC_IP:5601’.

Using Python to Automate Jenkins Install on AWS EC2 Instance

One of the main goals for a DevOps professional is automation. This week I was given a “simple” task: write a script that would log in to AWS, create an instance, and install Jenkins.

Why would I want to do all that work when there are GUIs to assist with this process?

Automation is the key: when you are faced with repetitive tasks, automation just makes sense.

For the purpose of this tutorial, it is assumed that you already have an AWS account, an Access Key ID and Secret Access Key, a security group policy set up to accept incoming traffic on port 8080, and Python 2.7 and Boto3 installed.


import os

import boto3


First we need our imports. Creating an instance requires the os module and a Boto module; I decided to utilize boto3.


# note, later I made these system environment calls so they
# aren't accidentally published in a public repository.
os.environ["AWS_ACCESS_KEY_ID"]
os.environ["AWS_SECRET_ACCESS_KEY"]
os.environ["AWS_DEFAULT_REGION"] = "us-west-2"

This sets the access key, secret key and default region. You can change the region to wherever you need.
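To make the pattern explicit, the same three variables can be read back through a small helper (hypothetical, shown only for illustration; it is not part of the original script) so credentials never appear in the source:

```python
import os

def aws_credentials(default_region="us-west-2"):
    """Read AWS credentials and region from environment variables.
    Returns (access_key_id, secret_access_key, region)."""
    return (os.environ.get("AWS_ACCESS_KEY_ID"),
            os.environ.get("AWS_SECRET_ACCESS_KEY"),
            os.environ.get("AWS_DEFAULT_REGION", default_region))
```

Note that boto3 reads these same environment variables automatically, so the bare `os.environ[...]` lookups above really act as a fail-fast check that they are set.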


userdata = """#!/bin/bash
yum update -y
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import http://pkg.jenkins-ci.org/redhat-stable/jenkins-ci.org.key
yum install jenkins -y
service jenkins start
chkconfig jenkins on

"""

Now we pass our bash script through UserData. We start by updating yum. The next two lines add the Jenkins repository. Next we install Jenkins. Finally we start Jenkins as a service.
Important note: any command that would require user input needs to include that input. Ex: yum requires -y.


ec2 = boto3.resource('ec2')
instances = ec2.create_instances(
    ImageId='ami-7172b611',
    InstanceType='t2.micro',
    KeyName='AWS_Testing',
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=['Jenkins'],
    UserData=userdata
)

Now we need to define the instance to be created. You may use a different AMI, KeyName, or SecurityGroup.
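Since those values are the ones most likely to change between environments, one way to keep them swappable (a sketch; `launch_params` is my own helper, not a boto3 API) is to build the keyword arguments in one place:

```python
def launch_params(ami_id, key_name, security_group, userdata,
                  instance_type='t2.micro'):
    """Build the keyword arguments for ec2.create_instances, with the
    values that vary between environments passed in explicitly."""
    return dict(ImageId=ami_id, InstanceType=instance_type,
                KeyName=key_name, MinCount=1, MaxCount=1,
                SecurityGroupIds=[security_group], UserData=userdata)
```

The launch then becomes `ec2.create_instances(**launch_params('ami-7172b611', 'AWS_Testing', 'Jenkins', userdata))`.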


for instance in instances:
    print("Waiting until running...")
    instance.wait_until_running()
    instance.reload()
    print((instance.id, instance.state, instance.public_dns_name,
           instance.public_ip_address))

Now we put everything together and return information about our instance.

Put everything together and test it out. Don’t forget to SSH into the instance and get the Jenkins default password.

The next task given to me is to set up an ELK Stack… Stay tuned.

Developing A Basic Understanding

The IT industry is a constantly changing environment; adapt and survive, or else, should be the motto. When I started in computers and technology, a state-of-the-art machine was a Franklin PC 5000, which included dual 5.25-inch floppy drives, 64K of RAM, and a VGA monitor, and ran the Disk Operating System (DOS). BASIC was the language to learn.

It is truly amazing how fast computers have changed since then. Recently, I decided to take up learning Python 2.7 in pursuit of a career in the DevOps field. I have done this through the use of http://www.codecademy.com and http://learnpythonthehardway.org . Both of these resources are excellent for learning the fundamentals.

DevOps is a newer movement that applies Agile and Lean methodologies to bring development and operations together. There is the “CAMS” (Culture, Automation, Measurement and Sharing) acronym popularized by John Willis and Damon Edwards. When you think DevOps, think continuous, think automated, think security, think network.

After much research, I have found a list of tools that I will be showing others how to use, install and setup. Follow along and maybe you can learn something useful along the way.