Smoke and a Pancake?

Ok. Blog post number two. Let’s start this one by addressing the title. The Austin Powers fans among you will be all too familiar with this line from Goldmember. Admittedly, it tickles me, and I wanted a title that would grab your attention while also including the word ‘Smoke’.

How many times has somebody asked you to perform a quick smoke test? I’m guessing lots. This has a tendency to be a get-out-of-jail card for many companies. The deadline is looming. The word on the street is it’s going to be released no matter what – wait a minute – no matter what!? When I hear these sorts of things, my heart sinks. Is there any point in testing something when it’s practically a foregone conclusion it’s going to be shipped regardless of what you uncover? To put this in perspective, you could argue a massive 7-mile-wide comet colliding with the Earth could put the mockers on the release. A remote possibility perhaps, but possible nonetheless. I could still find a showstopper defect though, couldn’t I? Well, this depends to some extent on how much test effort has happened beforehand, though it’s still entirely possible for a variety of reasons. For example, the test coverage may have been as uneven as the middle lane of the M62 motorway – things could have been missed, and there’s a real risk of something being identified at the eleventh hour. Or maybe a shed load of testing has been done previously but the code is in a constant state of flux, invalidating some of your previous test effort. Shifting sands, so to speak. Maybe a late bug fix has introduced another issue, and so on.

Anyway, back to that ‘quick smoke test’. I’ve been in situations where it’s left to the individual tester to determine what is tested. In other words, there are no actual smoke tests to execute; rather, you are entrusted to ensure that what needs to be tested gets tested. No pressure then. What if my idea of a quick smoke test differs from yours? What if we both performed a quick smoke test against the same build (how quick is quick?) and ended up verifying different aspects of behaviour and identifying different issues, some far more severe than others? The danger here is that something could be seen to have passed a ‘quick smoke test’ yet still contain defects you care about.

I’m a big fan of even test coverage. I love it. However, it’s something which can easily be forgotten in the rush to ship as fast as humanly possible. I take great satisfaction in knowing that I’ve tested a build in such a way that I’ve covered what needs to be covered (in the time made available to me) and, most importantly, verified the aspects of behaviour the whole team cares about. The primary focus, over and above assessing risk, is ‘value’. If you have a finite amount of time to perform your testing, then ensure what you do test is of value. Ensure those common user journeys are behaving as expected. There’s a real temptation for testers to become sidetracked by all sorts of weird and wonderful permutations, disappearing down a proverbial rabbit hole trying to come up with steps to reproduce, and so on. Yet your Product Owner may not even care about something which only a very small percentage of the audience (if any) will discover. What they will care about are those common (I view them as ‘core’) user journeys. I call this the ‘Ronseal’ approach – does it do what it says on the tin?

To instil confidence, I’d always recommend your high-value ‘smoke tests’ are written down somewhere. Store them in such a way that anyone within the team can access them. Anyone should be able to run them. Make them visible and love them. By love I mean ‘maintain’. Ensure they are actively updated, that they are still meaningful and valid, and that they absolutely must pass. One could argue that if any single smoke test fails, it warrants a fix (since something you consider high value has a problem). Whether this holds up the current release remains to be seen, but at least you have uncovered that defect. It is now a ‘known issue’ and has appeared on your defect management radar (insert a beep, beep sound effect here).

I guess these are prime candidates for automation – assuming you have an automated solution in place and, perhaps even more importantly, assuming they can be automated at all. We all know you can’t automate everything. That’s another blog post idea right there.
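To make the idea concrete, here’s a minimal sketch of what a written-down, runnable smoke suite might look like in plain Python. The journey names and check functions are entirely illustrative placeholders (not from any real product); a real suite would drive the actual application, but the shape is the point – the list of core journeys lives in one place anyone on the team can read and run.

```python
def check_login():
    # Placeholder: a real check would drive the application's sign-in flow.
    return True

def check_search():
    # Placeholder: a real check would query the app and inspect the results.
    return True

# The written-down smoke tests: journey description -> check.
# Anyone on the team can read this to see exactly what a
# 'quick smoke test' actually covers.
SMOKE_TESTS = {
    "a valid user can sign in": check_login,
    "search returns results for a known term": check_search,
}

def run_smoke_suite(tests):
    """Run every smoke test and return the descriptions of any failures."""
    return [name for name, check in tests.items() if not check()]

failures = run_smoke_suite(SMOKE_TESTS)
print("PASS" if not failures else f"FAIL: {failures}")
```

Because every check must pass, a single failure flags that something you consider high value has a problem – exactly the “must pass” property described above.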

Your high-value smoke tests are very important. So important, in fact, that they are worth their weight in gold. With that in mind, I’ll close with another Goldmember quote: “You see Austin Powers, I love goooold…”.


A Bug a Day keeps the PM worried

Welcome to my first blog post. I’ve been toying with the idea of blogging for a while. Let me start by telling you I’ve been testing for over 10 years. I’ve no idea where that time has gone. I’m obviously older (so greyer) and, whilst I nearly said ‘wiser’, I hesitate to use the word. To an extent it’s true, I suppose, but software testing in general is a constant learning curve. Regardless, you start to see trends and commonalities appearing, particularly when working across different teams in different parts of an organisation, and across different organisations altogether. The same challenges being faced, similar problems, retrospectives with an air of déjà vu (something akin to the movie ‘Groundhog Day’) – you’d be forgiven for feeling like you’ve seen or heard it all before.

Take defects, for example. A dyed-in-the-wool tester will take great comfort in the knowledge they have uncovered something unexpected – that the issue has been identified thanks to their test effort and nobody else’s. I love defects for this reason. Surfacing these to stakeholders, pursuing a fix, and re-testing successfully puts an even wider smile on my face, since not only have problems been identified before being deployed to the end user, they have also been rectified. You have made the product or system better, and the end user is blissfully none the wiser.

Surely everybody in the team shares this view, right? Not necessarily. I’ve learned time and again that it’s nearly always a balancing act. The word ‘pragmatism’ is used an awful lot in software development circles. A defect which is raised needs to be carefully scrutinised to assess whether it is indeed valid (expected behaviour may have changed and nobody thought to inform the test team or update the requirements, user acceptance criteria, etc.). If it is a valid defect, what is the impact on the product, the audience, or the business as a whole (usually measured by assigning a severity rating)? A member of the team – a Project Manager, say – would rather ship something as soon as possible (or certainly hit a fixed deadline), so it is up to the tester to illustrate how severe an issue is, either now or potentially in the future. Influencing skills are key, as is the ability to galvanise support from fellow team members, e.g. a Product Owner. Risk comes into play here too, since fixing defects – or at least attempting to fix them – often means fiddling with code. What if attempting to fix an issue actually makes matters worse? Think springs popping out. I quite like the analogy of pulling on a piece of cotton (or spaghetti, if you will) and it beginning to unravel elsewhere (you know what I mean, right?). Then (assuming you have persuaded others to entertain the very idea of fixing an issue) comes the usual bun fight over prioritising a fix. Is it a showstopper? The immortal words of Project Managers, Technical Leads, and Product Owners alike. Does it need to be fixed ahead of the next release, or can we afford to revisit and pick it up later?

I guess testing as a whole can be seen by some as an obstacle, or some mysterious entity sent from the powers that be to slow down projects and generally throw spanners all over the place. If I had a pound for every time I’ve heard a member of the team say “Oh, what have you found now?” or “Don’t find any more issues, OK?”, I’d be able to retire early.

A forward-thinking organisation will see the true value of testing. I dare say the vast majority do (or at least say they do), but saying and doing are very different things, of course. To balance this out, I want to mention that I’ve worked with some great people. One in particular was a former Business Analyst who became a Product Owner. This individual was genuinely interested in the issues, observations, and even recommendations from the test team. They advocated the importance of testing to the rest of the team and how it serves to give rise to successful product launches and updates. The prospect of something going wrong in live (or worse, shortly after being deployed to live) fills me with dread.

Whilst the whole software development team is responsible for the quality of the products and systems they decide to ship, another familiar-sounding question comes to mind: “Was this tested?” It’s important to recognise that the test team do not arbitrate what goes into live. We’re here to surface information. I like capturing issues just so we have something to reference should we ever need to in future. Maybe you’d like to perform some defect analysis during the post-project review, for example. Or maybe the wheels do come off in live, and when somebody asks whether this was ever tested, or whether it was a ‘known issue’ ahead of release, you have something to work against.

Over and above informing the team about issues found, we’re also empowered to make recommendations. Moreover, a tester may spot gaps in requirements and/or user journeys which weren’t thought about in the earlier stages of the project lifecycle. The list goes on.

In short, don’t worry, be happy.