Welcome to my fourth blog post. Blogging is rather cathartic and, much like uncorking a bottle of wine, ideas of what to write about are simply pouring out of me. So, yes, wine – and with that, another cryptic title. The wine connoisseurs among you will undoubtedly know what a ‘Sommelier’ is already. For those that don’t, without going into too much detail, it is somebody who is like an uber wine waiter. Think fine dining. Think very specialist. But how does this relate to software testing, I hear you cry?
Testers, Test Analysts, Test Engineers (no Testes references please), etc. are specialist roles too. I’m going to deliberately sidestep ‘QA’, as this is something entirely different to software testing. Yes, we are special, aren’t we? We need to be across pretty much everything that is going on within our project(s). For example: requirements; features; user journeys; use cases; expected behaviour; user acceptance criteria; user interface; navigation; layout; compatibility; the order in which these are going to be built; when something is ready for testing; how it is going to be tested (including identifying any pre-requisites); dependencies; planning; estimation; execution; defect management; reporting. The list goes on.
Much like being in a restaurant (not so much a Wetherspoons however), you may hear somebody ask the waiter “What do you recommend?” when pretending to know the difference between a bottle of Cabernet Sauvignon and Cabernet Severny. This is a particularly relevant question and is something which we as testers need to be able to answer as and when appropriate.
The key point I want to make with this blog post is that the test team does not and should not arbitrate what version of software is released. That’s not what we are here for. A common misconception. One of the test team’s many responsibilities is surfacing information. Not just blindly surfacing it, but surfacing it to the right people and at the right time. Going further, the tester will need to tailor their communication style and language depending on whom they are reporting to. Whilst any good tester will revel in detail, it is important to be able to convey the right level of detail to the right audience.

Then the timing of your updates needs to be considered. Invariably, stakeholders will want to know about high severity issues as soon as possible. So any Showstoppers you find, or for that matter any Blockers you encounter, need to be fed back as soon as possible. A high severity issue might warrant a fix right there and then, in which case further testing is deemed unnecessary or impossible until the next build is made available, since you wouldn’t want to invalidate any precious test effort (at least no more than necessary – sometimes it is unavoidable). The flip side would be to surface the high severity issue asap so it is on the team’s radar and the developers can start to identify what may have gone wrong, in parallel to you continuing with your test execution against a consistent build version. Since, who knows, you may find more high severity issues which also require fixing. In which case, it’d be sensible to minimise the number of test builds or release candidates being created by addressing multiple fixes at once. Think killing two birds with one stone. There’s a real danger of falling into a vicious circle of never-ending release candidates being sent back into test because you are stop/starting all the time.
Whilst there is a tendency to knee-jerk and fix a bug at the drop of a hat, with the developer saying “Oh, by the way, here is another release candidate for you”, this can sometimes be counter-productive in the long run. Builds need to be carefully managed in such a way that the test team can gauge perceived levels of risk and factor this in when determining the scope of regression testing (assuming the latest fix or fixes are retested successfully, of course).
Over and above surfacing information, the test team should be empowered to make recommendations. These could be recommendations formed on the basis of their test effort, from personal experience, or both. I know from my own experience that open issues might not have been fully understood by others in the team, nor the downstream impact these may have on the audience. It could be the frequency of something happening (e.g. is it 100% reproducible?) which sways opinion on whether to release, or possibly the ease of discovery itself (e.g. does the bug happen by following a common user journey, or is it more of an edge case?). In the frenzy to close issues off, a good tester needs to be able to convey these considerations.
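That weighing-up can even be sketched in code. To be clear, this is purely a hypothetical heuristic – the function, factors and weightings below are my own illustration, not any standard triage model – but it shows how reproducibility and ease of discovery can sway two bugs of the same raw severity in very different directions:

```python
# Hypothetical triage sketch. The scale (severity 1-5), the doubling for
# common journeys, and the multiplicative combination are illustrative
# assumptions only; real teams will weigh these factors differently.

def release_risk(severity: int, reproducibility: float, on_common_journey: bool) -> float:
    """Rough risk score for an open issue.

    severity:          1 (cosmetic) to 5 (showstopper)
    reproducibility:   0.0-1.0, the fraction of attempts that hit the bug
    on_common_journey: True if the bug sits on a common user journey
                       rather than an edge case
    """
    score = severity * reproducibility
    if on_common_journey:
        # Easy to discover means far more of the audience will hit it.
        score *= 2
    return score

# Same severity, very different recommendation material:
common_crash = release_risk(5, 1.0, True)    # 100% reproducible, common journey
edge_case = release_risk(5, 0.1, False)      # rarely hit, obscure path
print(common_crash, edge_case)  # 10.0 0.5
```

The point is not the numbers themselves but that the tester is the one holding this context, and surfacing it is what turns raw defect counts into a useful recommendation.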
I tend to become emotionally attached to a product. I want it to be the best product it can possibly be. I want the release to go smoothly and to rapturous applause from stakeholders and audience alike. So there have been times where I have recommended that a certain feature be implemented, or that we change the colour of something as trivial as, say, a progress bar to be consistent with the other progress bars within the product. All to make it better. Sometimes, I’ve had to persevere and at times be tenacious about something.
So there will be times where your test recommendations are actively sought by others, and times where you will feel compelled to make recommendations whether they are requested or not. More often than not it has been greeted with the immortal words “Oh yeah, good point, we hadn’t thought of that, Steve”. The test team need to be aware of the big picture and bring this to the fore in team discussions.
I’m off to find a corkscrew and a large glass. Cheers!