About Selenium Conference 2016

I had time over the holidays to binge-watch last year’s Selenium Conference talks, which were just as awesome as, if not more so than, the talks from the 2015 conference. Automation in testing has come a long way alongside advances in technology and software development, and that brings new challenges for all of us who test software. It’s not just about Selenium anymore. Mobile automation still proves challenging, and soon we’ll have to build repeatable test scenarios for the internet of things – homes, vehicles, stores, among others. Software testing can only get more interesting by the year.

Here are my picks for the best talks from the conference, if you’re curious:

Measuring and Reviewing Possibly Valuable Data

Since I decided to focus less on test automation this year, I’ve found myself thinking more about the other parts of our software development system that I might be able to tinker with and improve, like our existing way of doing ‘scrum’ and writing user stories. Data analysis also comes to mind, even if there are limits to what I can do. The tricky part about data analysis is figuring out which information is valuable. Some data are easy to measure but trivial; some are difficult to get but important. Some information seems useful on the surface but turns out to be inconsequential in the long run once you think about it more deeply.

Some ideas:

  • measuring the number of bugs and change requests reported and delivered within a fixed span of time, how fast they were solved, and which functional category they belong to
  • looking at the data of our top (and bottom) performing clients through different lenses, possibly including revenue and used services
  • measuring or reviewing which services are most (and least) popular as a whole
  • measuring the percentage of failing and flaky tests in the automated regression test suite, identifying which tests these are, and perhaps building a more public dashboard for sharing test results
  • measuring test coverage, and (more importantly) identifying what valuable tests still need to be written
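The flaky-test metric above can be sketched in a few lines. This is a minimal illustration with made-up test names and results, not our actual tooling; in practice the pass/fail records would come from the test runner’s reports (e.g. JUnit XML) rather than hard-coded lists. A test that both passes and fails across runs of the same suite is counted as flaky; one that fails in every run is counted as a consistent failure:

```python
from collections import defaultdict

# Hypothetical results from three runs of the same regression suite:
# (test name, passed?) pairs.
runs = [
    [("login", True), ("checkout", False), ("search", True)],
    [("login", True), ("checkout", True), ("search", False)],
    [("login", True), ("checkout", False), ("search", False)],
]

# Collect the set of distinct outcomes seen per test across all runs.
outcomes = defaultdict(set)
for run in runs:
    for name, passed in run:
        outcomes[name].add(passed)

# Flaky: both outcomes observed. Failing: only failures observed.
flaky = sorted(n for n, seen in outcomes.items() if seen == {True, False})
failing = sorted(n for n, seen in outcomes.items() if seen == {False})
total = len(outcomes)

print(f"flaky: {flaky} ({100 * len(flaky) / total:.0f}%)")
print(f"always failing: {failing} ({100 * len(failing) / total:.0f}%)")
```

With the sample data above, "checkout" and "search" each pass in some runs and fail in others, so they show up as flaky, while nothing fails consistently. Numbers like these are exactly the kind of thing a shared dashboard could surface per build.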

Which bits of data do you think are valuable?

Tracking Numbers

What’s the use of tracking data (total number of story points per sprint, lates, number of added or returned tickets, number of bug reports per month, how many hours it takes automated tests to finish testing a particular app) if we don’t scrutinize them?

And what’s the use of scrutinizing data if it has no bearing on what we do, on the things we want to get better at?

Number crunching and analysis only help if the data we are tracking is aligned with the reasons we are tracking it in the first place.