Since I decided that I’ll focus less on test automation this year, I’ve found myself thinking more about the other parts of our software development system that I may be able to tinker with and try to improve, like our existing way of doing ‘scrum’ and writing user stories. Data analysis also comes to mind, even if there are limits to what I can do. The tricky part about data analysis is figuring out which information is actually valuable. Some data are easy to measure but trivial; some are difficult to get but important. Some information seems useful on the surface but, after thinking about it more deeply, turns out to be inconsequential in the long run.
Some ideas:
- measuring the number of bugs and change requests reported and delivered within a fixed span of time, how fast they were solved, and which functional category they belong to
- looking at the data of our top (and bottom) performing clients through different lenses, possibly including revenue and used services
- measuring or reviewing which services are most (and least) popular as a whole
- measuring the percentage of failing and flaky tests in the automated regression test suite, identifying which tests those are, and possibly building a more public dashboard for sharing test results
- measuring test coverage, and (more importantly) identifying what valuable tests still need to be written
Which bits of data do you think are valuable?