
Run Appscale on Eucalyptus

w00T!!!!!

shaon's blog

Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). – Wikipedia

According to Wikipedia, there are currently a few popular service models:

1. Infrastructure as a service (IaaS)
2. Platform as a service (PaaS)
3. Software as a service (SaaS)

So, I have a Eucalyptus cloud, which is great; it serves as an AWS-like IaaS platform. But now I want PaaS, and this is where Appscale comes into play, with full compatibility with Google App Engine (GAE) applications. In this post, we will install Appscale, the popular open source PaaS framework, on Eucalyptus, the AWS-compatible open source IaaS platform.

Agenda
0. Introduction
1. Resize Lucid image
2. Install Appscale from source
3. Install Appscale Tool
4. Bundle Appscale image
5. Run Appscale
6. Run an application on Appscale

Eucalyptus
Eucalyptus Cloud platform is open source software for building…

View original post 541 more words


Our little cloud boxes

Get some.

Greg DeKoenigsberg Speaks

A lot of people have been visiting our table in the OSCON Hack Zone — mostly because of the presence of our Little Black Boxes.

The common question we’ve heard: “where did you guys *get* those things?”

INORITE? They are *totally* cute.  We bought the parts and assembled them ourselves.  They are now Standard Issue to all new Eucalyptus engineers; a short stack of three gives any engineer enough firepower to do serious development and testing on the whole Eucalyptus stack.

Here’s the parts list from Amazon.com, courtesy of the talented and ruggedly handsome @zacharyjhill:

The main housing unit is an Intel NUC, about 4″ by 4″ by 2″. The SSD is available in different sizes; ours is 128GB. With some of the boxes, we only use 8GB of RAM and with others we use 16GB. We also like to have wireless, though it’s not required — and don’t forget the cheapo 

View original post 53 more words


Getting Started with EucaLobo

Initial Setup

In my previous post, I described the story behind EucaLobo, a graphical interface for managing workloads on AWS and Eucalyptus clouds through a <cliche>single pane of glass</cliche>. The tool is built using JavaScript and the XUL framework, allowing it to run on Linux, Windows, and Mac and to work with the following APIs:

  • EC2
  • EBS
  • S3
  • IAM
  • CloudWatch
  • AutoScaling
  • Elastic Load Balancing

To get started, download the binary for your platform:

Once installation is complete and EucaLobo starts for the first time, you will be prompted to enter an endpoint. My esteemed colleague Tony Beckham has created a great intro video showing how to create and edit credentials and endpoints. The default values have been set to the Eucalyptus Community Cloud, a free and easy way to get started with Eucalyptus and with clouds in general. This is a great resource for users who want to get a feel for Eucalyptus without an upfront hardware investment.

Enter the following details if you have your own cloud or would like to use AWS:

After entering an endpoint, the next modal dialog will request that you enter your credentials:

  • Name: Alias for these credentials
  • Access Key
  • Secret Key
  • Default Endpoint: Endpoint to use when these credentials are activated
  • Security Token: Unnecessary for most operations

Any number of endpoints and credentials can be added, which makes EucaLobo ideal for users who leverage multiple clouds (both public and private). Once you have loaded at least one endpoint and credential set, you need to:

  1. Go to the “Manage Credentials” tab
  2. Select a credential in the top pane
  3. Click the “Activate” button

You are now ready to start poking around the services available through EucaLobo. All services are listed in the left pane of the interface. Clicking a tab's name takes you to the implementation of that functionality. The ElasticWolf team did a great job of making an intuitive interface that is simple to navigate. As an enhancement, which I hope to get upstream soon, I have added labels to all buttons in the UI so that it is clear which operations will be executed.

Cool Features

Portability

ElasticWolf leverages the XUL framework, which enables developers to write their application once and deploy it on Mac, Linux, Windows, or any other platform that supports Firefox. This level of portability is great for covering a large number of users with minimal effort. So far I have not found any platform-specific bugs.

Multi-cloud

EucaLobo makes it easy to quickly change endpoints and credentials. My common use cases for this feature are:

  • Switching endpoints only – switching regions in AWS
  • Switching both endpoint and credentials – verifying Eucalyptus behavior after testing in the same interface as AWS
  • Switching credentials only – using different users to validate IAM behavior

multi-cloud

IAM Canned policies

One of the great workflows inherited from ElasticWolf is the ability to use pre-canned policies when associating a policy to users and groups.

canned-policy

Security features

You may be thinking that adding cloud credentials to an application and leaving it open on your desktop is too risky. You would be absolutely correct. To combat this risk, you can set an inactivity timer that will either exit the application or require the user to enter a preset password. The granularity of the timer can be set to as low as 1 minute.

security

S3 advanced features

One of the most powerful features of the S3 API is the ability to lock down (or open up) S3 entities (objects and buckets) using an ACL policy language. Unfortunately, the S3 ACL API is not the most user friendly. With the ACL implementation in EucaLobo, you can choose to share a file publicly or share it with one or more individual users.

s3-acls

CloudWatch Graphs

The reason I began my efforts to get ElasticWolf working with Eucalyptus was to use it as an interface to the newly developed CloudWatch API in Eucalyptus. EucaLobo makes it extremely easy to visualize the usage of each of your instances, volumes, load balancers, and AutoScaling groups.

cloudwatch

Conclusion

EucaLobo has been extremely useful for me during the testing of Eucalyptus 3.3, as well as for managing my home private cloud and AWS accounts. I hope that others can find it as useful and usable as I have. With what I have learned during the development of EucaLobo, I hope to refork ElasticWolf so that I can make a smaller patch upstream for enabling Eucalyptus cloud support.

Please don't hesitate to provide feedback in the form of comments on this blog, issues on GitHub, or on IRC in the #eucalyptus-qa channel on Freenode. As always, pull requests are welcome: https://github.com/viglesiasce/EucaLobo


The Journey to EucaLobo

The 3.3.0 feature barrage

As a quality engineer, it is always useful to have an at-a-glance view of the state of your system under test. Unfortunately, having reliable graphical tools is not always possible during testing phases, as the UI often trails the development of core features. During the 3.3.0 release, the Eucalyptus development team added an incredible number of API calls to its already large catalog of AWS-compatible operations:

  • Elastic Load Balancing
  • Autoscaling
  • CloudWatch
  • Resource Tagging
  • Filtering
  • Maintenance Mode
  • Block Device Mappings

As a result of this onslaught of new service functionality from the developers, the UI and QA teams had their work cut out for them. The UI team had decided early on that they needed to make some architectural changes to the UI code, such as leveraging Backbone.js and Rivets. This meant they would only be able to cover the newly added resource tagging and filtering services within the 3.3.0 timeframe. Unfortunately, the UI was not the only client tool that needed to implement new services, as Euca2ools 2.x was also lacking support for ELB, CloudWatch, and Autoscaling. As we split up the services amongst the quality engineers, it became apparent that we had an uphill battle ahead and would need every advantage we could get. I took the lead for the CloudWatch service and began my research as soon as the feature had been committed to the release. In reading about and using the AWS version of CloudWatch, it became clear that the service basically boiled down to:

  1. Putting in time series data
  2. Retrieving statistics on that data over a given interval at a set periodicity
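
As a toy sketch of that second operation (my own illustration, not Eucalyptus or AWS code), retrieving statistics amounts to bucketing "metric value epoch" samples into fixed periods and aggregating each bucket:

```shell
# Toy illustration: bucket "metric value epoch" samples into fixed-length
# periods and print a CloudWatch-style Average and SampleCount per period.
period_stats() {
  local start=$1 period=$2
  awk -v start="$start" -v period="$period" '
    { b = int(($3 - start) / period); sum[b] += $2; n[b]++ }
    END {
      for (b in n)
        printf "t=%d Average=%g SampleCount=%d\n",
               start + b * period, sum[b] / n[b], n[b]
    }'
}
```

For example, three CPU samples at epochs 0, 30, and 60 with a 60-second period fall into two buckets, with the first bucket averaging its two samples.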

Having worked with time series data before, I knew that without a way to visualize it I would be seriously hindering my ability to verify the resulting metrics. I pulled out my handy recipe for Graphite and wrote a simple bash script that would grab a CloudWatch data set from a file and send it to my Graphite server using netcat. This worked as a quick proof of concept that we were storing the correct data and computing its statistics properly over longer periods. One of the major pieces of functionality provided by the CloudWatch service is instance monitoring. This data allows users to make educated decisions about how and when to scale their applications. The real-time nature of the data meant that I needed to be able to create arbitrary load patterns on instances and volumes and quickly map them back to CloudWatch data. It became clear that a bash script pulling from a set of text files was not going to be simple or flexible enough for the task.
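
The netcat trick relies on Graphite's plaintext protocol, which accepts `metric value timestamp` lines on port 2003. A rough sketch of that kind of script (the hostname and file name here are stand-ins, not my actual setup):

```shell
# Emit CloudWatch-style samples as Graphite plaintext protocol lines:
# "metric value epoch-timestamp", one sample per line.
format_graphite() {
  while read -r metric value epoch; do
    printf '%s %s %s\n' "$metric" "$value" "$epoch"
  done
}

# Usage (assumes a carbon listener at graphite.example.com:2003):
#   format_graphite < cloudwatch_data.txt | nc graphite.example.com 2003
```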

Let the hacking begin

As I began looking around for CloudWatch visualizers, it was clear that not many people had attacked the problem, likely because the AWS Console implementation is solid. One project that almost immediately bubbled to the top, however, was ElasticWolf, the AWS console developed for use with GovCloud. This project had been around for a year or so and had managed to implement a graphical interface for every single service that AWS supported, including AutoScaling, which is still not found in the AWS Console. It seemed like it would not take much time to point the ElasticWolf interface at my Eucalyptus cloud, so I took a stab at the Javascript code that backs the XUL application and ended up with a working version within 24 hours. This timeline, from cloning my first repo to using EucaLobo as my daily driver, is a testament to the API fidelity that Eucalyptus provides. At that point, I had hardcoded many things in the code that made it no longer work with AWS; fortunately, at the time hybrid functionality was irrelevant. A few weeks later, when I had a better idea of how the code was structured and how I could manipulate the UI elements, I was able to reimplement the credential and endpoint management such that it would allow hybrid functionality. This was another great advantage for our team in that we could now run the exact same operations on both AWS and Eucalyptus and compare the results through the same interface. ElasticWolf was also quite useful in defining the workflows that were common to the new services we had implemented. For example, its UI will ensure that launch configurations are created before you attempt to create an autoscaling group. These types of guard rails allowed us to efficiently learn and master the new features with a low barrier to entry and to deliver a high quality release within our schedule.

In my next post I will show how to get started with EucaLobo as well as highlight some of its features.


Introducing Micro QA

MicroQA-homepage

I have devoted the last two years to testing Eucalyptus. In that period, the QA team and I have gone through many iterations of tools to find those that make us most efficient. It has become a never-ending and enjoyable quest.

We have evolved our testing processes through the following stages:

  1. Using command line tools exclusively
  2. Writing scripts that call command line tools and parsing their output
  3. Writing scripts using a library to make test creation easier and more efficient without the need for command line tools
  4. Running scripts through a graphical tool in order to make test execution more flexible and simple

Each of these iterations was fueled by some tool chain that came along to solve a problem. My journey at Eucalyptus started between the second and third stages. Euca2ools were the go-to favorite for manual testing, and there was a library aptly named ec2ops floating around that wrapped euca2ools commands using Perl. Since boto was backing euca2ools at the time, I figured I would take a stab at creating a library calling boto directly that we could start to build our tests with, and thus Eutester was born, moving us into the third phase. Once we had reached this point, we were able to quickly write and execute idempotent tests from the command line. After manual execution of our tests was passing consistently, we were able to parameterize, run, and parallelize our tests using Jenkins. At this fourth phase, we have taken a snapshot of our environment and are now able to share it with the rest of the community through our image catalog.

A few of the use cases Micro QA can help with are:

  • Regression testing during development
  • Functional testing after initial installation
  • Load and stress testing before going into production
  • Development platform for Eutester test cases

Benefits for users of Micro QA include:

  • Known working automated tests
  • Constantly increasing number of test scenarios
  • Flexibility to add custom tests as needed for use cases which aren’t covered

How it works

The environment starts with an Ubuntu Precise guest image from Ubuntu's cloud-images. Once it was downloaded, registered, and started, I installed the Jenkins package. After Jenkins was up and running, I installed a redirect from port 80 to 8080 so that users would not need to remember a port in order to access their Micro QA image and could simply hit: http://<instance-public-ip>. Once the Jenkins instance was reachable on the standard HTTP port (80), we began to add the dependencies for Eutester to the guest OS. The typical environment for Eutester scripts requires boto, paramiko, and virtualenv to isolate the script runtime environment. Once the Python dependencies were successfully installed into a virtualenv, we set up our projects in the Jenkins install. The jobs for Eutester and Eutester4j were then created with only a single required parameter, namely the contents of the eucarc file generated by the cloud. Each script checks out its own environment so that both development and stable Eutester versions can run side by side.
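
One way to implement that port redirect (a sketch of the approach; the image itself may use a different mechanism) is an iptables NAT rule:

```shell
# Redirect inbound TCP port 80 to the Jenkins listener on 8080 (run as root).
# This applies to traffic arriving from other hosts; connections made from
# the instance itself would need an equivalent rule in the OUTPUT chain.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
```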

Installation

In order to install the Micro QA image follow the instructions here: https://github.com/eucalyptus/micro-qa/blob/master/README.md

Usage

  1. From the main page of the Micro QA instance, click the “Build” button on the right side of the Instance Suite project.
  2. Enter your eucarc file into the text box on the next screen.
  3. Hit “Build” at the bottom of the page.
  4. You will be taken to the currently running job. Click the blue bar under the job timestamp on the left.
  5. Here you will see the console output for the test run.
  6. When the script completes, the bottom of the console output will display a summary of the results.

Nice work by Lester! This will make deployment of your Euca cloud much much easier!

Take that to the bank and cash it!

The first cut of the Ansible deployment playbook for deploying Eucalyptus private clouds is ready.  I’ve merged the first “release” into the master branch here: https://github.com/lwade/eucalyptus-playbook. Feedback and contributions are very welcome, please file issues against the project.

This playbook allows a user to deploy a single front-end cloud (i.e. all components on a single system) and as many NC’s as they want, although admittedly I’ve only tested with one so far.  I’ve followed, to a certain degree, the best practices described here:  http://ansible.cc/docs/bestpractices.html

Overall I’m pretty happy with it; there are some areas which I’d like to re-write and improve, but the groundwork is there.  It was all very fluid to start with and doing something multi-tier has been enjoyable. I’ve also learnt what it’s like to automate a deployment of Eucalyptus and there are probably a number of things we should improve to make it easier in this…

View original post 614 more words


Beautiful use case for Eucalyptus.

More Mind Spew-age from Harold Spencer Jr.

Recently, I did a blog discussing how to deploy a Jenkins server using Stackato, running on Eucalyptus.  At the end of that blog, I mentioned how the Eucalyptus Community Cloud (ECC) could be used for testing out the Stackato Microcloud image on Eucalyptus.   The previous blog – I felt – was more for DevOps administrators who had access to their own on-premise Eucalyptus clouds.  The inspiration of this blog comes from the blog on ActiveBlog entitled “Deploy & Scale Drupal on Any Cloud with Stackato” to show love to Web Developers, and show the power of Amazon’s Route 53.

Test Drive Pre-Reqs

The prerequisites for this blog are the same that are mentioned in my previous blog regarding using Stackato on Eucalyptus (for the Eucalyptus pre-reqs, make sure the ECC is being used).  In addition to the prerequisites mentioned above, the following is needed:

  • An…

View original post 643 more words


This will certainly become a tool that I use day to day. Thanks nurmi!

nurmiblog

Production deployments of Eucalyptus, like production deployments of any infrastructure software running in a data center, require some amount of health and status monitoring, both to allow the Eucalyptus/data-center administrator to stay on top of evolving resource situations and to provide invaluable diagnostic information when something is going sideways within the resource pool.  Fortunately for all of us, there exists a wide variety of health/status monitoring systems out there, and several of them are of extremely high quality, tried and tested, and available as part of major Linux distributions as pre-packaged open-source solutions.  One such system that I’m a personal fan of is called Nagios.

To quote from their website:

“Nagios is a powerful monitoring system that enables organizations to identify and resolve IT infrastructure problems before they affect critical business processes.”

Indeed it is!  I first used Nagios in 2000/2001…

View original post 1,182 more words


Using Scalr for Automation of your Eucalyptus Cloud

Introduction

scalr_logo

 

Euca logo

I have been using Eucalyptus heavily for the past 1.5 years (as a quality engineer, it is my day to day). I know the ins and outs of the system and am constantly tracking new features and bug fixes as they arrive. This knowledge makes me a prime candidate to find out how other pieces of the cloud story can integrate with Eucalyptus.

I run a small cloud at home that I use for development and testing of different software stacks. Some of the tools that I've learned to use and hack on since I've turned up my cloud include Graphite, Gitorious, Jenkins, Testlink, and Zenoss. The issue with getting most of these (and any) open-source tools running is that they often require a very particular base OS and particular dependency versions in order to install cleanly. This makes Eucalyptus a great tool for figuring out the right/easiest way to deploy an open source tool, as it allows users to create and save any particular stack they have built for later use. Another great part about using Eucalyptus as a development tool is that it allows any and all distros to be loaded into a cloud and made available to many users. When I look into a new tool, I can rest assured that I will find one of its supported distros in my Eucalyptus cloud. Currently I have images registered for Debian 5/6, Ubuntu 12.04/12.10/13.04, Fedora 17, CentOS 6, and, begrudgingly, Windows 7 (99% used to manage vCenter nodes, yuck).

There are many ways to populate and provision application stacks in a cloudy model. One way would be to load the entire application stack onto an image, then re-bundle it and save it off. Another approach, and the one that I rely on, is to populate base images that can then be provisioned by scripts in order to run the application of choice. Using this model, I had come up with many different scripts/user-data to populate the images listed above with applications. My approach allowed apps to be deployed easily, but I ended up with lots of replicated code across the user-data that I was passing to instances. Another pain was that it required me to remember which user-data scripts belonged with which images. Although turning my apps up and down was not impossible, it was definitely cumbersome and certainly not push-button.

Enter Scalr.

Scalr allows me to populate scripts into a common database that is shared across all images/clouds. Because Scalr allows multiple scripts to be run during instance turn-up (and manually, for that matter), I am now able to modularize my scripts to reduce the amount of code I need to maintain. Another element of Scalr that makes my life easier is its ability to autoscale an application. For my purposes I am not scaling an app to more than one instance, but if I ever bring down an instance erroneously, Scalr will automatically repopulate the instance and run the scripts necessary to reconstruct the app. The final efficiency I get out of using Scalr is that I can load up multiple clouds and use the same scripts against similar base images within various clouds. As a tester I bring up and down at least one private cloud a day, on top of using my AWS account for both testing and production use, so this may be the last feature listed but it is certainly not the least in my mind.

Eucalyptus+Scalr is a developer/tester's dream, so let's get down to business on how to get this solution running on your own cloud.

Install Scalr

  1. Run the script found here on a Ubuntu Precise 12.04 server or instance: https://gist.github.com/4527791
  2. You will be prompted a few times to enter the password listed in the script (default is yourpassword)
  3. Ensure that the default virtual host (scalr.local) can be resolved by either adding a DNS entry to the server or a host entry on your client. Default hostname can be changed on lines 4-5 in the script from step 1.
  4. Login to Scalr as admin/admin: http://scalr.local
  5. Click on Accounts in the top left corner and create a user account
  6. Click Settings->Core Settings, and set the “Event handler URL” at the bottom to the hostname you chose in step 3 (default is scalr.local)
  7. Logout and back in with your user credentials created in step 5
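
For step 3, the quickest option is a client-side host entry (the IP below is a placeholder for your Scalr server's address):

```
# /etc/hosts on the client machine
192.0.2.10  scalr.local
```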

Scalrize Your Images

You need to install an agent, Scalarizr, on any images used to create server templates, or roles in Scalr parlance. After installing the Scalarizr agent, you will register each of your images as a role. To install Scalarizr:

  1. Install the repository package as appropriate for your distro: Debian  RHEL/CentOS
  2. Install the Scalarizr package appropriate to the cloud that will be used. In our case “scalarizr-eucalyptus”
  3. Ensure that your image can properly resolve the “Event Handler URL” entered when installing Scalr
  4. Rebundle and register the image with Eucalyptus

Register Your Eucalyptus Cloud

Scalr works with many cloud providers, including AWS, and its developers were able to leverage a good amount of existing client code in order to support Eucalyptus. It would seem that the last time the Eucalyptus integration was looked at was in the Euca 2.x timeframe, so many things that Eucalyptus has fully supported since 3.1 (EBS-backed images, for example) are not supported. Other missing functionality includes support for keypairs (which can be patched using scripts in Scalr), and all instances are launched in the default security group (I am not sure of the reason for this).

In order to setup Scalr with your cloud credentials:

  1. In the top left of the navigation bar click “default”, then “Manage”
  2. Click “Actions” on the right, then “Configure”
  3. Set your timezone
  4. Click the Eucalyptus logo
  5. Click the green plus sign to add a new Eucalyptus cloud

sclar-add-cloud

Creating a Role

Scalr uses a paradigm for cloud automation that requires cloud images to be registered as Roles. These Roles are then added to Farms in order to deploy an app. Each Role can only appear once per Farm. Scalr allows you to catalog, version, and deploy scripts using a templating mechanism where values can be set at the Role, Farm, or individual server level.

Some common role types are:

  1. Base Images
  2. Load Balancers
  3. Web Servers

In order to create a Role:

  1. Go to this page, replacing the hostname if necessary: http://scalr.local/#/roles/edit
  2. Once there, enter a name for the role, for example: Ubuntu Precise Base
  3. Click on the check box that represents what category of role this is, in our case we will check the checkbox for Base
  4. Click over to the Images tab and enter the information about the image you registered in the steps above
  5. Enter the image info including its machine image ID
  6. Click Add (left side) then Save (bottom center)

Scalr - Roles

Adding your scripts

One of the most powerful parts of Scalr is the ability to write and reuse templated scripts. Scalr also allows you to share, fork, and version your scripts. Scripts can be added to Roles, Farms, or individual servers for execution at boot, at termination, or manually during any part of an instance's lifecycle. Creating, managing, and deploying scripts lets you work around some shortcomings of the current Scalr/Eucalyptus integration, such as only being able to use the default security group and no keypair being passed to instances launched through Scalr.

  1. Go to http://scalr.local/#/scripts/view in order to add some basic scripts 
  2. Click the green plus sign on the right of the Scalr Web Console
  3. When adding scripts you will need to give them a name and a description along with the actual code
  4. The first script we will add will be called “SSH Key Inject”; it ensures that our SSH key is added to an instance:
    #!/bin/bash
    mkdir -p /root/.ssh
    echo "%ssh_key%" > /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys
    
  5. The next script we will add will install Graphite, a scalable realtime graphing application; it can be found here

You will notice that the script added in step 4 uses a wildcard parameter, %ssh_key%, that can be configured at script run time, at role provisioning, or at farm launch time. Once we've created our scripts, it's time to create our Farm, which will pair our scripts with our images (Roles) so that we can run and terminate our application at will.

Scalr - Scripts

Creating a Farm

Farms are collections of Roles that constitute a single deployment. All servers in a Farm can be turned up and down in unison in order to deploy an application. Individual Roles within a Farm can be autoscaled: when Scalr notices fewer than a certain threshold of servers in a particular Role, it will automatically launch more servers. As an example Farm, I will show how to deploy Graphite (scalable realtime graphing).

  1. Add a role using the procedure above that references a Scalarized Precise Base Image 
  2. Click over to the Farms pane in the Scalr web interface
  3. Click the green plus sign on the right of the Scalr Web Console
  4. Enter a name for this farm, in our case: Graphite
  5. Click over to the Roles tab and then click on Roles Library
  6. Click the plus sign next to Base Images
  7. Click on the icon for the Precise Base Image, then click the green plus sign.

Now that we have added the Base Image role to the Farm we will need to add scripts to the Role that run once the instance is up and running.

  1. Click on the icon that has now appeared towards the top of the page in order to configure what scripts we will run on this Role in the Farm
  2. On the bottom left click Scripting. In this configuration page we can choose which scripts to run, in which order and at what times
  3. Under the “When” dropdown click “Host Up”, then under the “execute” drop down choose the “SSH Key Inject” script we added in the steps from above. Once those two options have been chosen, click the green plus sign
  4. Click on the row that has shown up for that script
  5. On the right side of the pane:
    1. Ensure that “Where” is set to “All instances of this role”
    2. Under the parameters section at the bottom of the pane, set which public key we want to inject for this role
  6. Add our second script “Install Graphite” at “Host Up”, ensure that “Where” is set to “All instances of this role”
  7. Click save at the bottom of the interface

Once we have defined what roles and scripts are paired up to make our application we can launch our Farm.

  1. Go to the Farms interface: http://scalr.local/#/farms
  2. Click Actions on the right side page
  3. Click Launch

Scalr will now launch your image and run the desired scripts to build out the application. You can use this interface to turn your applications up and down as necessary. If Scalr notices that your app is no longer running on the cloud, it will automatically relaunch it for you. I have used this feature many times to help me with disaster recovery. Once I have an application running properly in my cloud, I ensure that I can terminate any individual instance and that my Scalr configuration is set up to properly rebuild the app.

scalr-farms
