Welcome to my first blog post. I’ve been toying with the idea of blogging for a while. Let me start by telling you I’ve been testing for over 10 years. I’ve no idea where that time has gone. I’m obviously older (so greyer) and, whilst I nearly said ‘wiser’, I hesitate to use the word. To an extent it’s true, I suppose, but software testing is a constant learning curve. Regardless, you start to see trends and commonalities appearing, particularly when working across different teams in different parts of an organisation, and across different organisations altogether. The same challenges, similar problems, retrospectives with an air of déjà vu (something akin to the movie ‘Groundhog Day’): you’d be forgiven for feeling like you’ve seen or heard it all before.
Take defects, for example. A dyed-in-the-wool tester will take great comfort in the knowledge they have uncovered something unexpected. That the issue concerned has been identified thanks to their test effort and nobody else’s. I love defects for this reason. Surfacing these to stakeholders, pursuing a fix and successfully re-testing puts an even wider smile on my face. Not only have problems been identified before reaching the end user, they have also been rectified. You have made the product/system better and the end user is blissfully none the wiser.
Surely everybody in the team shares this view, right? Not necessarily. I’ve learned time and again that it’s nearly always a balancing act. The word ‘pragmatism’ is used an awful lot in software development circles. A defect which is raised needs to be carefully scrutinised to assess whether it is indeed valid (expected behaviour may have changed and nobody thought to inform the test team or update the requirements, user acceptance criteria, etc.). If it is a valid defect, what is the impact on the product, the audience or the business as a whole (usually measured by assigning a severity rating)? A member of the team, say a Project Manager, would rather ship something as soon as possible (or certainly hit a fixed deadline), so it is up to the tester to illustrate how severe an issue is, either now or potentially in the future. Influencing skills are key, as is the ability to galvanise support from fellow team members, e.g. a Product Owner. Risk deserves a mention here too, since fixing defects, or at least attempting to, often means fiddling with code. What if attempting to fix an issue actually makes matters worse? Think springs popping out. I quite like the analogy of pulling on a piece of cotton (or spaghetti if you will) and watching it unravel elsewhere (you know what I mean, right?). Then (assuming you have persuaded others to entertain the very idea of fixing an issue) comes the usual bun fight over prioritising a fix. Is it a Showstopper? The immortal words from Project Managers, Technical Leads, and Product Owners alike. Does it need to be fixed ahead of the next release, or can we afford to revisit it later?
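Just to make the triage idea above concrete, here is a deliberately toy sketch of the validity-plus-severity decision. The names, severity scale and the “major or above blocks the release” rule are all my own illustrative assumptions, not a real tool or anyone’s official process:

```python
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    """Impact on the product or business if the defect ships (toy scale)."""
    COSMETIC = 1
    MINOR = 2
    MAJOR = 3
    SHOWSTOPPER = 4


@dataclass
class Defect:
    summary: str
    severity: Severity
    # False when expected behaviour changed and the 'defect' is stale,
    # i.e. it fails the validity check before severity even matters.
    valid: bool


def must_fix_before_release(defect: Defect) -> bool:
    """A crude triage rule, purely for illustration: only valid,
    high-severity defects block the release; the rest are revisited later."""
    return defect.valid and defect.severity >= Severity.MAJOR


# A showstopper blocks the release; a cosmetic nit does not.
blocker = Defect("Checkout crashes on payment", Severity.SHOWSTOPPER, valid=True)
nit = Defect("Logo misaligned by 2px", Severity.COSMETIC, valid=True)
print(must_fix_before_release(blocker))  # True
print(must_fix_before_release(nit))      # False
```

In real teams, of course, the rule is nothing like this mechanical: the whole point of the paragraph above is that severity, risk of the fix itself, and deadlines get argued out between people.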
I guess testing as a whole can be seen by some as an obstacle, or some mysterious entity sent from the powers that be to slow down projects and generally throw spanners all over the place. If I had a pound for every time I’ve heard a member of the team say “Oh, what have you found now?” or “Don’t find any more issues, OK?”, I’d be able to retire early.
A forward-thinking organisation will see the true value of testing. I dare say the vast majority do (or at least say they do), but saying and doing are very different things, of course. To balance this out, I want to mention that I’ve worked with some great people. One in particular was a former Business Analyst who became a Product Owner. This individual was genuinely interested in the issues, observations and even recommendations from the test team. They advocated the importance of testing to the rest of the team, and how it gives rise to successful product launches and updates. The prospect of something going wrong in live (or worse, shortly after being deployed to live) fills me with dread.
Whilst the whole software development team is responsible for the quality of the products/systems they decide to ship, another familiar-sounding question comes to mind: “Was this tested?” It’s important to recognise that the test team do not arbitrate what goes into live. We’re here to surface information. I like capturing issues so we have something to reference should we ever need to in future. Maybe you’d like to perform some defect analysis during the post-project review, for example. Or maybe the wheels do come off in live, and when somebody asks whether this was ever tested, or whether it was a ‘known issue’ ahead of release, you have something to work from.
Over and above informing the team about issues found, we’re also empowered to make recommendations. A tester may spot gaps in requirements and/or user journeys which haven’t been considered in the earlier stages of the project lifecycle. The list goes on.
In short, don’t worry, be happy.