Interesting Talks from Google Test Automation Conference 2016

There’s a lot going on in the software testing community at the moment, specifically in the field of automation, because software is now being deployed to many platforms besides personal computers. Google needs to worry about testing their eyeglasses, virtual reality headsets, and cars. Others care about testing robots and televisions. This is why it is fun to watch talks from conferences like the Selenium Conference or the recently concluded Google Test Automation Conference: I get to find out what problems other testers are facing, see how they try to solve them, and maybe learn a thing or two. Sometimes I even pick up a new tool to try in my own testing, which is a great bonus.

Some favorite talks from the conference are:

Speeding Up a Suite of Web Application Automated End-to-End Checks

Speed is always an issue when running a full suite of automated end-to-end checks for web applications because such checks are inherently slow. A test opens a browser on a desired machine, navigates to a required application state (often waiting for the app to load completely), performs an action on the system, validates the results, and finally runs cleanup if necessary. A single check can take a good number of seconds, especially if it goes through several phases, which means a full suite can easily total an hour or more. It’s easy to add tests. It’s difficult to keep the full suite running in under a few minutes, especially when the only infrastructure available is a local machine.
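To put rough numbers on it (the figures below are illustrative assumptions, not measurements from the talk), even modest per-phase delays add up quickly once a suite grows:

```python
# Back-of-the-envelope math for a serial end-to-end suite.
# All durations and the suite size are illustrative assumptions.
PHASES = {
    "open browser": 5,      # seconds
    "load app state": 10,
    "perform action": 5,
    "validate results": 5,
    "cleanup": 5,
}

seconds_per_check = sum(PHASES.values())            # 30 s per check
suite_size = 150                                    # number of checks
total_minutes = seconds_per_check * suite_size / 60

print(f"{seconds_per_check} s per check")           # 30 s per check
print(f"{total_minutes:.0f} minutes for the suite") # 75 minutes for the suite
```

A hundred and fifty half-minute checks already cost over an hour when run one after another, which is why the ideas below matter.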

Some ideas on speeding up automated end-to-end checks:

  • Bundle related tests together in a group, running one on top of another, so the resulting state of one test becomes the starting requirement of the next. This is risky: what happens when the first test fails? The next test no longer runs, even though it might well have passed. We cannot know, because it never ran. Ideally, automated checks should be independent of each other and as atomic as possible.
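The downside of chaining can be sketched in a few lines (a toy harness, not any particular framework): once one step in a chain fails, every later step is skipped, and we learn nothing about it.

```python
# Toy illustration of chained checks: each step assumes the state
# left behind by the previous one, so any failure skips the rest.
def run_chain(steps):
    results = {}
    for name, passed in steps:
        if any(r == "failed" for r in results.values()):
            results[name] = "skipped"   # never ran; outcome unknown
        else:
            results[name] = "passed" if passed else "failed"
    return results

chain = [("log in", True), ("add to cart", False), ("check out", True)]
print(run_chain(chain))
# {'log in': 'passed', 'add to cart': 'failed', 'check out': 'skipped'}
```

The "check out" step here might have passed on its own, but a chained suite can never tell us that.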
  • Run tests in parallel. TestNG does this with Java; the parallel_tests gem does it with Ruby. Jenkins (among other continuous integration tools) can also run multiple jobs at once. Remember, however, that running automated checks in parallel is processor-intensive and tends to fail tests because of system lag, timing, or application state shared between tests, especially on low-end machines. Investing in dedicated infrastructure (whether physical computers or cloud software-as-a-service systems like SauceLabs) definitely helps, but that decision depends on the organization’s financial priorities.
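In Python, for instance, the same idea can be sketched with the standard library’s thread pool, with dummy sleeps standing in for real browser checks:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def check(name, duration):
    """Stand-in for a real end-to-end check; just sleeps."""
    time.sleep(duration)
    return name, "passed"

checks = [(f"check-{i}", 0.2) for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(lambda c: check(*c), checks))
elapsed = time.perf_counter() - start

# 8 checks of 0.2 s each: ~1.6 s serially, ~0.4 s with 4 workers.
print(results)
print(f"ran in {elapsed:.1f} s")
```

Real checks parallelize the same way but share a machine’s CPU, network, and sometimes application state, which is where the flakiness mentioned above comes from.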
  • Minimize end-to-end checks as much as possible. This is viable for systems built with a healthy dose of unit and component tests throughout the application stack, usually by teams who practice good test-driven development (TDD) or behavior-driven development (BDD) when building software. For the many teams without such unit tests to fall back on, reducing the number of working automated end-to-end checks will prove difficult.
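As an illustration of pushing a check down the stack (the validation function here is hypothetical, not from the talk): logic like form validation can be exercised in milliseconds as a unit test, with no browser involved.

```python
# Hypothetical form-validation logic that might otherwise be checked
# end-to-end by filling a form in a browser and reading the error page.
def validate_signup(email, password):
    errors = []
    if "@" not in email:
        errors.append("invalid email")
    if len(password) < 8:
        errors.append("password too short")
    return errors

# Unit-level checks: instant, independent, no browser required.
assert validate_signup("user@example.com", "s3cretpass") == []
assert validate_signup("not-an-email", "short") == [
    "invalid email", "password too short"
]
```

One end-to-end check of the signup flow can then stay, while the many input permutations move to cheap unit tests like these.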
  • Use headless browsers. I haven’t had much experience running checks on headless browsers, but people do use them successfully, often with the help of xvfb on Ubuntu.
  • Leverage REST calls. How does the browser load a page URL? How are users redirected from one page to another? How are form values submitted when a call-to-action button is clicked? Using REST calls, one can log in to a system with the right credentials, retrieve data from where it is displayed, or send data to the application without opening a browser. We can use that for automation, either to simulate a desired application state or to test responses to manipulated requests, whether that’s data or the resulting page. This can help our tests run faster; the only catch is that the tester will have to learn how REST calls work and (more importantly) understand how the web application under test runs under the hood of the browser.
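A minimal sketch of the idea in Python, assuming a hypothetical /login endpoint that returns a session token (a throwaway local server stands in for the application under test):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

# Throwaway stand-in for the application under test: a single
# hypothetical /login endpoint that returns a session token.
class FakeApp(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        ok = body.get("user") == "tester" and body.get("password") == "s3cret"
        self.send_response(200 if ok else 401)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"token": "abc123" if ok else None}).encode())

    def log_message(self, *args):  # keep the test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FakeApp)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/login"

# Log in over REST instead of driving a browser through the login form.
req = request.Request(
    url,
    data=json.dumps({"user": "tester", "password": "s3cret"}).encode(),
    headers={"Content-Type": "application/json"},
)
with request.urlopen(req) as resp:
    token = json.loads(resp.read())["token"]

print(token)  # abc123
server.shutdown()
```

Seeding state this way (an auth token here, but equally a record created over the API) lets a browser test start much closer to the behavior it actually needs to check.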