I’m currently reading Brian P. Hogan’s book ‘Write Better with Vale’, about Vale (a command-line tool that eases following style guides in prose text). While intended for prose (as far as I know), Vale can also process any text file, including code.
When I tried to copy & paste commands from the epub version of the book into my shell, I got an error message:
> vale .
E100 [NewE201] Runtime error
The path ‘…/vale_example_project/../styles’ does not exist.
Execution stopped with code 1.
After some exploration, I figured out that some zero-width space characters had ended up in the INI file I copied into my example project (and reported it over at the book’s feedback site).
Then I realised that I could create my own Vale rule to detect these invisible characters in text files.
Now, Vale is configured using an INI file. There you declare where it can find the styles and which rules apply to which files (see https://vale.sh/docs/vale-ini for more details). I set up a new rule in the file …/styles/Custom/NoZeroWidthSpaces.yml:
# styles/Custom/NoZeroWidthSpaces.yml
extends: existence
message: "Avoid using zero-width or invisible characters (%s)."
level: error
scope: raw
raw:
- '[\u200B\u200C\u200D\u2060\uFEFF]'
I set up Vale to use this new Custom rule in .vale.ini like this:
StylesPath = styles
[*.md]
BasedOnStyles = Custom
With that, I get a nice message when running vale against a file riddled with those characters:
> vale .
example.md
1:111 error Avoid using zero-width or Custom.NoZeroWidthSpaces
invisible characters ().
Admittedly, the empty-looking parentheses aren’t very instructive, since they contain invisible characters. At least it’s possible to copy and paste them into other tools to find out what they are.
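If you need to find out what those characters are, a few lines of Python can name them. This is my own sketch, not part of the book or of Vale; the suspect string stands in for whatever you pasted from the clipboard:

import unicodedata

suspect = 'StylesPath =\u200b styles'  # hypothetical paste with a hidden character
for position, character in enumerate(suspect):
    if unicodedata.category(character) == 'Cf':  # 'Format' characters are invisible
        print(position, f'U+{ord(character):04X}', unicodedata.name(character))

For the string above, this prints the offending position together with U+200B and the name ZERO WIDTH SPACE – exactly the hint the empty parentheses can’t give.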
At Agile Testing Days 2024, I offer a full-day tutorial, ‘The Disappearance of J. T. Womblegast — A Git Tutorial’ (on 19 November). As the name suggests, the tutorial is about working with Git. But it’s also a mystery journey about the disappearance of one J. T. Womblegast, who strangely left nothing but a Git repository containing hints about this … retreat.
Of course, other topics are possible. If you’d like to take this opportunity and have any questions or suggestions, talk to the conference organisers (or me directly).
Note: I call this day zero since the conference counts ‘day one’ as the first day after the tutorial day.
As every year, the Agile Testing Days start with a tutorial day. I chose ‘Breaking into AI and Machine Learning’ by Tariq King. The tutorial followed a top-down approach: We did not have to (re)learn linear algebra and the like before getting started. Instead, after a brief introduction to the topic, we worked on an example task: classifying irises. A CSV file containing typical attributes of various flower species was used to create a model that could classify a flower as one of the species the model was trained on. Tariq introduced this as the ‘Hello world of AI’.
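For illustration, here is a minimal sketch of this kind of iris classification in Python with scikit-learn. It’s my own reconstruction, not the tutorial’s code, so treat the library choice and parameters as assumptions:

# Minimal iris classification: 4 attributes per flower, 3 species to predict.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=42)

model = DecisionTreeClassifier().fit(X_train, y_train)
print(f'Accuracy on flowers the model has not seen: {model.score(X_test, y_test):.2f}')

The held-out test set also makes the next point visible: a model that merely memorises its training data looks perfect on that data and falls apart on X_test.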
We saw how overfitting a model can cause issues when the model is used with new data it wasn’t trained on. Overfitting happens when the model matches its training data (nearly) perfectly, which usually causes larger misclassifications when new data is put into the model.
We also learned how models can be trained on images to classify them. This is the next step since it requires processing much more data.
Then ChatGPT was introduced. I am still a bit sceptical about some of its output, since it’s known to ‘hallucinate’ citations for scientific papers, for example. Yet I am impressed with what can be achieved when it’s provided with enough data and prompts tuned to its needs.
In the first keynote, Maaike Brinkhof wrapped the experiences of her software testing career in the story of a role-playing game. In this setting, she met increasingly hard-to-conquer ‘bosses’. Inspiring, entertaining – and providing input for the following keynotes. In other words, it opened my mind for the conference to come.
Day 1 – No Overnight Success & Sociocracy
In the day’s first keynote, Kristel Kruustük presented her thoughts about ‘10x Software Testing’. My takeaway was this: You don’t become a ‘10× tester’ overnight. Instead, it requires persistence and regular training. This matched nicely with my personal experience and is linked to one of my sessions this year.
The next session I attended was Craig Risi’s ‘Becoming an Open Sourcerer’. He explained what teams should consider when they use open-source software. He also discussed the advantages and potential disadvantages of using open-source software. Finally, we learned about contributing to open-source projects. While contributing code changes is likely the most common way to contribute, providing documentation is another essential aspect, as are providing and improving bug reports and even (automated) tests.
Consent means “Good enough for now, safe enough to try”
John Buck, Agile Testing Days 2023
As a tester, I’m unsure how to use this to debug management, and I will admit that I haven’t tried it yet.
Day 2 – Workshop & Infotainment
I missed the first keynote of the day since the next scheduled time slot included my workshop ‘Fun with U̡̟ͩ̊̏ͬͯni͑c͐̀͢od̲̎ͅḕ̶̩͙͆’. Much to my pleasure, it was well attended, and folks were surprised at how badly some software processes Unicode. As Maaike tweeted:
Biggest lesson so far at the Unicode workshop: good luck in life with computers if your name contains diacritics #agileTD
After collecting my workshop material and winding down, I attended the keynote ‘Everyone is a Leader‘ by Zuzi Šochová. I liked how easy it was to follow along and the message that everyone can be a leader – at some time, for some topic. Leadership doesn’t have to be assigned but can be assumed temporarily when it makes sense.
The following two keynotes were mind-blowing! Dr. Rochelle Carr requested the audience to ‘MOVE THAT WALL’. This talk was loud and inspiring and made me think about which walls I have that I may want to move – or tear down entirely.
‘Don’t go breaking my code‘, by Lena Nyström & Samuel Nitsche, was a keynote in a musical or rock opera format: Loud, entertaining, and fun. It also explained where and why testers and developers have different points of view. Not only that, they also demonstrated ways to get along with each other better.
I ended the day by spending time at the Agile Testing Days Book Fair, organised by Tobias Geyer and Maik Nogens. Thankfully, I got the books I was looking for: Zuzi Šochová’s ‘The Agile Leader’ and John Buck’s ‘We The People’. They were even kind enough to sign the books for me. Thank you!
Day 3 – Conflict Resolution, Micropowers & Judgment Day
In the morning keynote ‘A Fighting Chance – Learning the Art of Conflict Resolution’, Alex Schladebeck presented pitfalls to avoid when dealing with conflict and good ways to deal with it. Planned as a pair keynote, the second speaker, Sophie Küster, couldn’t be at the conference. Sophie, you were missed, and we all hope you’re back next year! My key takeaway: Noticing that someone perceives a conflict goes a long way to mitigating it, especially if the affected parties know about the pitfalls, such as saying, ‘You always/never do XY’.
After this, Eveline Moolenaars and I prepared our talk ‘Micropowers: Learn to Speak Up and Be Heard‘. This was about our shared experience of recovering from cancer and its treatment and how that helped us to start asking for help – and helping others. We found the term ‘superpower’ intimidating and came up with the term ‘micropower’. We defined this as an ability one can trust that helps to act when we see things that should be changed.
The keynote ‘Wait! That’s Not Tested’ by Heather Reid introduced the idea that not all things need to be tested. We need to consider time, cost and risk when testing software. And since there is never enough time to test everything anyway, we must make bets. This connects nicely to John Buck’s definition of consent: ‘Good enough for now, safe enough to try’.
The keynote ‘The Rise of Generative AI: Judgment Day’ by Tariq King was the perfect ending to the official program since it nicely connected to my tutorial day. He presented content (paintings and music) in pairs: one an original from a human artist, the other created by AI in the style of that artist. The audience was tasked with telling which one was the original and which one the ‘copy’. I found it shocking that we, the audience, did not perform particularly well.
My overall impression of the Agile Testing Days: It was a very well-planned conference, with sessions that connected ideas and concepts. I am already looking forward to Agile Testing Days 2024 – and have many ideas for proposals.
Thank you to everyone I have met and talked with this year. I hope to see you again in 2024.
Here’s a tip: Write your automated tests with failure in mind. In particular, consider a future maintainer who may need a useful error message.
This can help when the test fails in the future (and it probably will). A descriptive message helps you understand the technical issue you’re looking at and will ideally guide you to a solution.
Let’s look at some examples that leave something to be desired. These messages may be true, but they don’t help you understand the underlying problem:
Expected true, but got false
The result message is malformed
fail
Yes, I have seen these (or very similar) messages, and they are rather useless.
Imagine how much more helpful the following messages are:
Expected condition XY to be true in context AB of object O
The message ‘<output the actual message>’ is malformed and cannot be processed further
Got <actual result> instead of <expected result> when processing XY
These improved messages can guide you, help you remember the context and figure out the underlying issue when the test fails.
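As a sketch of what this can look like in practice – here in Python’s unittest, with a made-up Order class – the failure message carries the context a future maintainer will want:

import unittest

class Order:  # made-up example class, just enough for the test below
    def __init__(self, order_id, items):
        self.id, self.items = order_id, items
        self.total = sum(price for _, price in items)

class OrderTest(unittest.TestCase):
    def test_total_names_its_context_on_failure(self):
        order = Order('A-17', [('tea', 30), ('mug', 12)])
        self.assertEqual(
            order.total, 42,
            msg=f'Got {order.total} instead of 42 when totalling '
                f'order {order.id} with items {order.items}')

if __name__ == '__main__':
    unittest.main()

If the sum ever comes out wrong, the failure names the order and its items instead of a bare ‘40 != 42’.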
I find that this improvement shortens the time spent on failure analysis. It makes my days more productive because I get a message that tells me about the context where things went wrong.
Do you have similar ideas about how to improve (automated) tests? I’d love to hear about them.
Long, long ago, while preparing experiments for my diploma thesis in physics, my tutor taught me to express my expectation of the outcome of an experiment before actually running it. I was not only to express it in my head, but to say it out loud, maybe even make a note.
This helped a lot when figuring out where my thinking didn’t match the experimental evidence. There was no denying it when my expectation differed from the empirical result. Typically, there were two sources for the differences:
My mental model wasn’t good enough to match the result of an experiment.
The experimental setup wasn’t designed well enough to show the effect I was trying to measure.
I find that this still works well in software development (both the coding part and testing). A related article about this is Peter Naur’s seminal paper ‘Programming as Theory Building’.
When doing TDD (test-driven development), explicitly expressing the expected outcome before running a test may not always lead to a surprise. At the very beginning, when a test tries to create an object without there being a class definition, it will cause an error message that is easy to predict.
However, once work has progressed a bit, I regularly run into situations where I expect the next new test to fail in a certain way, but when running the test, the actual error message is a surprise to me. These are situations where learning can happen, by figuring out why the actual behaviour occurs instead of what I predicted.
Near the end, the surprise is caused differently: I’ll write a new test and expect it to fail. – It doesn’t. Again, this is a good reason to explore why exactly I thought the test would fail.
Particularly in a pair (or ensemble) programming setup, the inability to come up with a new failing test is a sign that the implementation is good enough … for now.
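To make the ‘predict the failure’ step concrete, here is a minimal sketch of the very first test in a hypothetical stack kata, written in Python. The prediction lives in the comment; running the test confirms or refutes it:

# Prediction: this fails with NameError, because Stack isn't defined anywhere yet.
def test_new_stack_is_empty():
    stack = Stack()
    assert stack.is_empty()

Later on, the interesting moments are exactly those where such a prediction and the actual failure disagree.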
Recently, I did a programming exercise as part of a technical interview, and expressing my thoughts and expectations while working on the code helped me find a solution. Additionally, the interviewer didn’t only see what I was typing, but could follow my way of thinking. This relates to Naur’s paper mentioned above: The testing and tested code I wrote isn’t everything there is to my mental model of the given problem or its solution. The way of thinking is important too. This is why I find it valuable to vocally express my expectations and thinking while actually doing the work.
What are your preferred ways to actually do the work you’re doing? I’d love to hear about them.
Make error messages precise enough to help users, so that they can resolve the problem, or at least enable them to provide meaningful input when talking to support.
The Context
For a project, I was working on an application that stored its content as text files. Git was used as a storage backend, so the application could track who made what changes when. So far, so good. Testing this application was done on a specific test environment somewhere on the net.
While testing was possible, it was inconvenient to also have to log in to the logging server to track what exactly was happening, and updating the application itself was not easy either. Therefore, one day I started to set up the whole thing on my local machine. The setup and configuration were surprisingly easy; we had good documentation for where to put which files. This was the easy part. I could start the application and tell it which Git repository contained the application content (those text files mentioned above).
The Problem
The problem occurred when I tried to actually get the application data:
“XYZ cannot access the repository!”
The application’s error message as displayed to the user
That’s not particularly helpful. In order to support failure analysis, an error message shown to a user should help identify the problem and solve it, or at least allow for a meaningful exchange with support.
I checked the log files and quickly found some related information: the SSH library was reporting my private key as invalid.
I found this surprising, since I was using the exact same identity files to access the very same Git repository from my IDE & other Git clients on my machine. Had something corrupted my key? If so, what was it?
I started by increasing the logging of the SSH library in use, to make sure I was using the exact same key pair in all situations. A good while later, after googling for the error message, talking to developers, and reading some documentation, I found this: The library wasn’t coping well with the encryption algorithm I had used when generating my public and private key pair.
While I used ‘ed25519’, a (relatively) recent algorithm (at the time of writing this), the library (in the version used in the application) expected the key to be generated using ‘RSA’. Some details are also in the post ‘“Invalid privatekey” when using JSch’ on Stack Overflow. The problem is the misleading error message: The key wasn’t invalid at all, since I could access the repository with it using other programs that (apparently) used other libraries. The key type was unknown to the library. That’s similar, but different.
A Better Error Message
Had the library’s error message been something like ‘Key <key_identifier> not recognised; see documentation for supported key types’, identifying and solving the problem would have been much faster and less frustrating. Words matter in error messages, too.
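As a sketch of the difference (in Python, and emphatically not the actual code of the Java library involved), a key loader can tell ‘unsupported’ apart from ‘invalid’; the names and the supported set are made up:

SUPPORTED_KEY_TYPES = {'ssh-rsa'}  # hypothetical; a real library supports more

def load_private_key(key_type, key_data):
    if key_type not in SUPPORTED_KEY_TYPES:
        # The key may be perfectly valid, just not supported by this library.
        raise ValueError(
            f'Key type {key_type!r} not recognised '
            f'(supported: {sorted(SUPPORTED_KEY_TYPES)}); '
            'see the documentation for supported key types')
    ...  # actual parsing, which may still find the key data itself invalid

The point is that the two failure modes get two different messages, so a user with a perfectly fine ed25519 key isn’t sent off checking their key for corruption.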
Error Messages Should Help You Resolve the Error
I prefer error messages to be detailed enough to help me identify the real cause of the issue and ideally give a hint at what I can do.
Take widespread error messages for password fields, for example. They may read like this:
Your password must have at least:
12 characters overall
1 uppercase & 1 lowercase character
1 number
1 of these characters: # $ % & – ? = / . , ;
A contrived error message that’s entirely likely
Yes, there’s a lot to be said about the value and limitations of requirements like these, but at least the message helps a user to set a password that complies with the mentioned rules.
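Here is a quick sketch of a validator that produces this kind of actionable message. The rules mirror the contrived example above, and all names are my own:

import re

RULES = [  # (check, requirement to report when the check fails)
    (lambda pw: len(pw) >= 12, 'at least 12 characters overall'),
    (lambda pw: re.search(r'[A-Z]', pw) and re.search(r'[a-z]', pw),
     '1 uppercase & 1 lowercase character'),
    (lambda pw: re.search(r'[0-9]', pw), '1 number'),
    (lambda pw: re.search(r'[#$%&\-?=/.,;]', pw),
     '1 of these characters: # $ % & - ? = / . , ;'),
]

def password_errors(password):
    # Report every unmet rule at once, so users don't fix them one at a time.
    return [requirement for check, requirement in RULES if not check(password)]

print(password_errors('short1'))  # lists the three rules this password misses

Returning all unmet rules in one go is part of what makes the message helpful; nothing is more annoying than fixing one rule only to be told about the next.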
A good error message can help users identify a problem and probably resolve it as well. They don’t have to contact your support team, and your support team doesn’t have to go through the process of identifying the same problem(s) over and over. This saves time (and probably money) on both sides, yours and the user’s.
Tomcat is installed using Homebrew: brew install tomcat
Java was installed by downloading the package (see link above) and running the installer.
I have set export JAVA_HOME=/Library/…/amazon-corretto-11.jdk/Contents/Home in .zprofile so the right Java version is used.
I’d start Tomcat from the command line and our tests from inside Eclipse.
This worked nicely.
After The Upgrade
The upgrade went smoothly for most of the software I am using: other IDEs, installations of Ruby, Elixir, databases, REST clients, Git, etc. all continued to work nicely.
The tests, however, failed in an interesting way: While basic REST calls worked (e.g. a GET request to retrieve version info), the tests that used the actual functionality received a plain “Internal Server Error” from Tomcat, and the application logs showed some getContext method ending up with null instead of the expected object.
Running the same tests on the same machine using the way the tests are started in CI still worked well. The difference between running from within the IDE & the command line: The command line starts Tomcat and runs the tests against that (and then stops Tomcat), while the IDE uses the already running Tomcat. — Aha!
The Solution
Tomcat logs several environment variables it’s using when it starts, among them JRE_HOME.
This environment variable pointed to another Java installation, which came from a different source, had a different Java version, and was (obviously) incompatible with the Java environment set up in JAVA_HOME.
Pointing JRE_HOME to the same Java environment solved the problem, and the tests are running just fine again. Phew!
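In my setup, that amounts to one more line in .zprofile, next to the existing JAVA_HOME export shown above. Reusing the variable is my assumption of the simplest fix; it keeps both variables pointing at the same installation:

export JRE_HOME=$JAVA_HOME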