Interesting Talks from Google Test Automation Conference 2016

There’s a lot going on in the software testing community at the moment, specifically in the field of automation, because software is now being deployed to many platforms beyond personal computers. Google needs to worry about testing its eyeglasses, virtual reality headsets, and cars. Others care about testing robots and televisions. This is why it is fun to watch talks from conferences like the Selenium Conference or the recently concluded Google Test Automation Conference: I get to find out what problems people are facing and see how they try to solve them, and maybe learn a thing or two. Sometimes I pick up a new tool to try in my own testing too, which is a great bonus.

Some favorite talks from the conference are:

About Selenium Conference 2016

I had time over the holidays to binge-watch last year’s Selenium Conference talks, which were as awesome as, if not more so than, the talks from the 2015 conference. Automation in testing has come a long way, alongside the advancements in technology and software development, and this brings new challenges for all of us who test software. It’s not just about Selenium anymore. Mobile automation still proves to be challenging, and soon we’ll have to build repeatable test scenarios for the internet of things – homes, vehicles, stores, among others. Software testing can only get more interesting by the year.

Here are my picks for the best talks from the conference, if you’re curious:

Measuring and Reviewing Possibly Valuable Data

Since I decided that I’ll focus less on test automation this year, I’ve found myself thinking more about the other parts of our software development system that I may be able to tinker with and try to improve, like our existing way of doing ‘scrum’ and writing user stories. Data analysis also comes to mind, even if there are limits to what I can do. The tricky part about data analysis is finding out which information is valuable. Some data are easy to measure but trivial; others are difficult to get but important. Some information seems useful on the surface but can turn out to be inconsequential in the long run, after thinking about it more deeply.

Some ideas:

  • measuring the number of bugs and change requests reported and delivered within a fixed span of time, how quickly they were resolved, and which functional category they belong to
  • looking at the data of our top- (and bottom-) performing clients through different lenses, possibly including revenue and services used
  • reviewing which services are most (and least) popular overall
  • measuring the percentage of failing and flaky tests in the automated regression test suite, identifying which tests these are, and possibly building a more public dashboard for sharing test results
  • measuring test coverage, and (more importantly) identifying what valuable tests still need to be written
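The failing/flaky-test idea above is straightforward to prototype once you have results from several runs of the same suite. Here’s a minimal sketch, assuming a hypothetical data shape where each run is a dict mapping test name to "pass" or "fail" (a real version would pull this from a CI server or JUnit XML); a test is treated as flaky if it both passed and failed across the runs:

```python
from collections import defaultdict

def summarize_results(runs):
    """Given a list of runs (each a dict of test name -> 'pass'/'fail'),
    return (percent_failing, percent_flaky, flaky_test_names)."""
    outcomes = defaultdict(set)  # test name -> set of outcomes seen
    for run in runs:
        for name, result in run.items():
            outcomes[name].add(result)

    total = len(outcomes)
    # Count a test as "failing" if it failed in the most recent run.
    failing = [name for name, result in runs[-1].items() if result == "fail"]
    # A flaky test has both passed and failed at some point across the runs.
    flaky = sorted(name for name, seen in outcomes.items()
                   if {"pass", "fail"} <= seen)

    pct = lambda n: 100.0 * n / total if total else 0.0
    return pct(len(failing)), pct(len(flaky)), flaky

# Three example runs of a three-test suite:
runs = [
    {"login": "pass", "checkout": "fail", "search": "pass"},
    {"login": "pass", "checkout": "pass", "search": "fail"},
    {"login": "pass", "checkout": "fail", "search": "pass"},
]
failing_pct, flaky_pct, flaky = summarize_results(runs)
print(failing_pct, flaky_pct, flaky)
```

Distinguishing “failing right now” from “flaky over time” matters for a dashboard: the first tells you what to fix today, the second tells you which tests to stop trusting.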

What certain bits of data do you think are valuable?