Tester or Sommelier?

Welcome to my fourth blog post. Blogging is rather cathartic and, much like uncorking a bottle of wine, ideas of what to write about are simply pouring out of me. So, yes, wine, and with that another cryptic title. The wine connoisseurs among you will undoubtedly know what a ‘Sommelier’ is already. For those who don’t, and without going into too much detail, this is somebody who is like an uber wine waiter. Think fine dining. Think very specialist. But how does this relate to software testing, I hear you cry?

Testers, Test Analysts, Test Engineers (no Testes references please), etc. are specialist roles too. I’m going to deliberately sidestep ‘QA’ as this is something entirely different to software testing. Yes, we are special, aren’t we? We need to be across pretty much everything that is going on within a project (or several). For example, we need to be across requirements; features; user journeys; use cases; expected behaviour; user acceptance criteria; user interface; navigation; layout; compatibility; the order in which these are going to be built; when something is ready for testing; how it is going to be tested (including identifying any pre-requisites); dependencies; planning; estimation; execution; defect management; reporting, etc. The list goes on.

Much like being in a restaurant (not so much a Wetherspoons, however), you may hear somebody ask the waiter “What do you recommend?” while pretending to know the difference between a bottle of Cabernet Sauvignon and Cabernet Severny. This is a particularly relevant question and one which we as testers need to be able to answer as and when appropriate.

The key point I want to make with this blog post is that the test team do not and should not arbitrate which version of software is released. That’s not what we are here for. It’s a common misconception. One of the test team’s many responsibilities is surfacing information. Not just blindly surfacing it, but surfacing it to the right people and at the right time. Going further, the tester will need to tailor their communication style and language depending on whom they are reporting to. Whilst any good tester will revel in detail, it is important to be able to convey the right level of detail to the right audience.

Then the timing of your updates needs to be considered. Invariably, stakeholders will want to know about high severity issues as soon as possible. So any Showstoppers you find, or for that matter any Blockers you encounter, need to be fed back as soon as possible. A high severity issue might warrant a fix right there and then, in which case further testing is deemed unnecessary or impossible until the next build is made available, since you wouldn’t want to invalidate any precious test effort (at least no more than necessary – sometimes it is unavoidable). The flip side would be to surface the high severity issue asap so it is on the team’s radar and the developers can start to identify what may have gone wrong, in parallel to you continuing with your test execution against a consistent build version. Since, who knows, you may find more high severity issues which also require fixing. In that case, it’d be sensible to minimise the number of test builds or release candidates being created by addressing multiple fixes at once. Think killing two birds with one stone.

There’s a real danger of falling into a vicious circle of never-ending release candidates being sent back into test because you are stop/starting all the time. Whilst there is a tendency to knee-jerk and fix a bug at the drop of a hat, with the developer saying “Oh by the way, here is another release candidate for you”, this can sometimes be counter-productive in the long run. Builds need to be carefully managed in such a way that the test team can gauge perceived levels of risk and factor this in when determining the scope of regression testing (assuming the latest fix or fixes are retested successfully, of course).
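To make that build-batching idea a little more concrete, here is a toy sketch in Python. The severity levels, the flag, and the threshold are my own illustration rather than a prescription – in real life the policy is a conversation, not a constant:

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    SHOWSTOPPER = 1
    HIGH = 2
    MEDIUM = 3
    LOW = 4


@dataclass
class Defect:
    summary: str
    severity: Severity
    blocks_testing: bool = False  # can test execution continue around it?


def should_cut_new_build(open_defects, min_batched_fixes=2):
    """Toy policy: only ask for a fresh release candidate when a defect
    blocks further testing, or enough severe fixes have accumulated to
    make a new build worthwhile (avoiding stop/start churn)."""
    if any(d.blocks_testing for d in open_defects):
        return True
    severe = [d for d in open_defects
              if d.severity in (Severity.SHOWSTOPPER, Severity.HIGH)]
    return len(severe) >= min_batched_fixes
```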

Over and above surfacing information, the test team should be empowered to make recommendations. These could be recommendations formed on the basis of their test effort, or from personal experience, or both. I know from experience that open issues might not have been fully understood by others in the team, nor the downstream impact these may have on the audience. It could be the frequency of something happening (e.g. is it 100% reproducible?) which sways opinion on whether to release, or possibly the ease of discovery itself (e.g. does the bug happen by following a common user journey, or is it more of an edge case?). In the frenzy to close issues off, a good tester needs to be able to convey these considerations.

I tend to become emotionally attached to a product. I want the product to be the best it can possibly be. I want the release to go smoothly and to rapturous applause from stakeholders and audience alike. So there have been times where I have recommended that a certain feature be implemented, or that we change the colour of something as trivial as, say, a progress bar to be consistent with the other progress bars within the product. All to make it better. Sometimes I’ve had to persevere, and at times be tenacious, about something.

So there will be times when your test recommendations are actively sought by others, and there will be times when you will feel compelled to make recommendations whether they are requested or not. More often than not this has been greeted with the immortal words “Oh yeah, good point, we hadn’t thought of that, Steve”. The test team need to be aware of the big picture and bring this to the fore in team discussions.

I’m off to find a corkscrew and a large glass. Cheers!

Exterminate! Automate!

Hello again. Welcome to my third blog post. I’d like to muse about test automation. Yep, that old chestnut. I’ll be honest and say I fell into the world of testing. At the outset I was given a choice, a proverbial fork in the road, as to whether I wanted to pursue a career as a developer or as a tester. I didn’t deliberate for very long. Though I have a curiosity about programming and understanding how things work, my passion for testing and general aptitude for breaking things far outweighed any possibility of becoming a developer. I recall my dad telling me as a kid, “Steven. You could break an iron ball.”, implying I broke the sturdiest of toys with considerable ease and/or had a natural propensity for identifying problems. The stage, as they say, was set. Little did I know I’d be using such skills as part of my future career in software testing.

Oh yes, the title. I need to explain that one. By now you’ll have realised I like to use attention-grabbing titles for my blog posts. This one is for all you Doctor Who fans out there. I often hear cries to automate something. Sometimes this can feel almost incessant, much like the motorised dustbins you see chasing the good Doctor and his faithful companion. Not even a set of stairs can fool them nowadays. That’s progress for you. Well, with the onset of continuous delivery, the cries for automation seem even louder than usual. You’ll also be familiar with the usual bun fight over how long you’ll need to undertake regression testing, and vocal members of the team saying “Can’t we just automate this and save time?” with their eyebrows at 45 degrees. We manual testers are soooo slow, aren’t we? There have been times when, no sooner have I started test execution, I’ve been asked whether I have finished yet. Face palm. You either want the confidence to know your software is behaving as expected before releasing to your audience or you don’t.

I’ll try not to rant here, but ‘Testing’ is a discipline like any other. Take ‘Programming’, for example (or ‘Coding’ or whatever else you wish to call it), as another discipline. Or ‘Business Analysis’ as another. I’ve rarely witnessed anybody pressuring a developer to finish coding something, or a BA to finish writing user acceptance criteria, in double-quick time. Yet we testers sometimes get a raw deal. Whether following a waterfall methodology or not, we’re usually the penultimate ones who need to look at something before a stakeholder presses the big red button to deploy into the live environment. Typically, it can feel like the stakeholder is looking over your shoulder, tapping their watch in an ever-so-unsubtle way (and coughing while muttering the words “let’s ship it”). We often feel the squeeze.

So yes, we want to release more frequently. Yes, we wish to develop and ship software like a well-oiled machine. Yes, we want the audience to benefit from new and exciting features as soon as possible. We all do. Everybody in a software development team should share these goals. Some of this is easier said than done, however. Automation is not for the faint-hearted. Another cliché for you would be to say ‘fools rush in’. Your team needs to give this a lot of thought before committing.

I’ll cut to the chase and say you are more than likely going to need both manual and automated test effort. Well, you can’t automate everything. Just as you can’t realistically test everything either. You could certainly try, but you’ll soon come to realise that the effort employed far exceeds the value in trying to automate certain tests. Then there’s the whole human element to consider. As I explained earlier, some testers are naturally predisposed to identifying problems and breaking things. You can’t automate that. Machines (Daleks?) are great for some things, aren’t they, but they certainly have their flaws. You can’t automate experience, instinct, intuition, etc. These traits are what help testers identify defects a machine couldn’t possibly uncover. If I had a penny for every time I demonstrated a defect to somebody and been asked “how on earth did you do that?”, I’d have quite the collection. Writing scenarios, coming up with acceptance criteria, etc. is great, but there will be user journeys that haven’t necessarily been considered upfront. Then there are the subtle nuances which affect behaviour across different platforms and devices. Something may work just fine using one browser/device but be completely screwed on another – would your automated tests always capture these instances, or possibly give you a bum steer if everything is showing as passing, I wonder? Oh look, everything is green. So false positives are also something to think about.

What else is there to think about then? How about setup and maintenance? This is a real doozy. You’ll need to carefully consider which automated solution you opt for. Not only for expense in the conventional sense, but the cost of maintaining it for the foreseeable. Automated tests are invariably brittle (particularly when testing at the UI level). Are you going to use live data or canned data (this can also lead to false positives)? What if something somewhere changes with or without your knowledge and breaks your valuable tests? Who is going to pick things up when they fall over? What if there are only a chosen few in the office who really understand how it all hangs together and they aren’t available? I mean, they could have been taken ill or have left the company altogether – taking that precious knowledge with them.
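To illustrate a couple of those pitfalls – brittleness and false positives – here is a hedged sketch using Selenium’s Python bindings. The page, the selectors, and the data-testid hook are all hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.test/checkout")  # hypothetical page

# Brittle: an absolute XPath breaks the moment the layout shifts.
button = driver.find_element(By.XPATH, "/html/body/div[3]/div/form/button[2]")

# Sturdier: pin the test to a stable hook the team owns and maintains.
button = driver.find_element(By.CSS_SELECTOR, "[data-testid='place-order']")

# False positive in waiting: this 'check' goes green even when checkout is
# broken, because it only asserts a page loaded, not that the form rendered.
assert driver.title != ""

driver.quit()
```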

Don’t get me wrong here. There is absolutely a place for test automation. There are benefits to be had for sure. Imagine having those mind-numbingly tedious tasks removed by an automated solution, freeing your time up to manually test the more complex stuff (the good stuff). In my experience, you need to weigh everything up and agree with your team what you definitely need to automate and what you definitely do not need to automate. The caveat here would be to say it’d be wise to identify a middle ground as well. So maybe there are tests you’d like to automate, but at a later date. Start with the high-value, straightforward-to-automate tests. The ones which are going to serve you by being run repeatedly against each new build version. My previous blog talked about high value smoke tests, and these would be a good place to start.
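For a flavour of ‘high value and straightforward’, this is the sort of cheap, repeatable check worth running against every new build. A minimal sketch – the base URL and the /health endpoint are assumptions for illustration:

```python
import requests

BASE_URL = "https://staging.example.test"  # hypothetical test environment


def test_build_is_alive():
    """Cheap and repeatable: worth running against every new build version."""
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200
```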

Whether automated or manual, the whole point of testing is to find defects and exterminate them! If you’ll excuse me, I need to find somewhere to park my TARDIS (I’m actually heading out for some lunch).

Smoke and a Pancake?

Ok. Blog post number two. Let’s start this one by addressing the title. The Austin Powers fans among you will be all too familiar with this line from Goldmember. Admittedly, it tickles me, and I wanted to come up with something which would grab your attention but also include the word ‘Smoke’.

How many times has somebody asked you to perform a quick smoke test? I’m guessing lots. This has a tendency to be a get-out-of-jail card for many companies. The deadline is looming. The word on the street is it’s going to be released no matter what – wait a minute – no matter what!? When I hear these sorts of things, my heart sinks. Is there any point in testing something when it’s practically a foregone conclusion it’s going to be shipped regardless of what you uncover? To put this in perspective, you could argue a massive 7-mile-wide comet colliding with the Earth could put the mockers on the release. A remote possibility perhaps, but still possible nonetheless. I could still find a Showstopper defect though, couldn’t I, eh? Well, this depends to some extent on how much test effort has happened beforehand, though it’s still entirely possible for a variety of reasons. For example, the test coverage may have been as uneven as the middle lane of the M62 motorway – things could have been missed, and there’s a real risk of something being identified at the eleventh hour. Or maybe there’s been a shedload of testing done previously, but the code is in a constant state of flux – invalidating some of your previous test effort. Shifting sands, so to speak. Maybe a late bug fix has introduced another issue, etc.

Anyway, back to that ‘quick smoke test’. I’ve been in situations where it’s left to the individual tester to determine what is tested. In other words, there are no actual smoke tests to execute, but rather you are entrusted to ensure what needs to be tested is then tested. No pressure then. What if my idea of a quick smoke test differs from yours? What if we both performed a quick smoke test against the same build (how quick is quick?) and ended up verifying different aspects of behaviour and identifying different issues, some far more severe than others? The danger here is that something could still be seen to have passed a ‘quick smoke test’ but still contain defects you care about.

I’m a big fan of even test coverage. I love it. However, it’s something which can easily be forgotten about in the rush to ship as fast as humanly possible. I take great satisfaction in the knowledge that I’ve tested a build in such a way that I’ve covered what needs to be covered (in the time that’s been made available to me) and, most importantly, verified the aspects of behaviour the whole team cares about. The primary focus, over and above assessing risk, is ‘value’. If you have a finite amount of time to perform your testing, then ensure what you do test is of value. Ensure those common user journeys are behaving as expected. There’s a real temptation for testers to become sidetracked with all sorts of weird and wonderful permutations, and you end up disappearing down a proverbial rabbit hole trying to come up with steps to reproduce, etc. Yet your Product Owner may not even care about something which a very small percentage (if any) of the audience will discover. What they will care about are those common (I view them as ‘core’) user journeys. I call this the ‘Ronseal’ approach – does it do what it says on the tin?

To instil confidence, I’d always recommend your high value ‘smoke tests’ are written down somewhere. Store them in such a way that anyone within the team can access them. Anyone should be able to run them. Make them visible and love them. By love I mean ‘maintain’. Ensure they are actively updated, still meaningful, still valid, and that they absolutely must pass. One could argue that if any single smoke test fails, then it warrants a fix (since something you consider high value has a problem). Whether this precludes the current release remains to be seen, but at least you have uncovered that defect. It is now a ‘known issue’ and has appeared on your defect management radar (insert a beep, beep sound effect here).

I guess these are prime candidates for automation, assuming you have an automated solution in place and, perhaps even more importantly, assuming they can be automated at all. Since we all know you can’t automate everything. That’s another blog post idea right there.
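If your written-down smoke tests do lend themselves to automation, one approach is to keep the checklist as data that anyone can read, run, and maintain. A minimal sketch – the journeys and URLs are purely illustrative:

```python
import pytest
import requests

# The written-down smoke checklist, kept as data so the whole team
# can read it, run it, and keep it up to date. All hypothetical.
SMOKE_CASES = [
    ("home page responds", "https://example.test/"),
    ("login page responds", "https://example.test/login"),
    ("search responds", "https://example.test/search?q=pancake"),
]


@pytest.mark.parametrize("name,url", SMOKE_CASES,
                         ids=[case[0] for case in SMOKE_CASES])
def test_smoke(name, url):
    response = requests.get(url, timeout=10)
    assert response.status_code == 200, f"Smoke test '{name}' failed"
```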

Your high value smoke tests are very important. So important in fact that they are worth their weight in gold. With that in mind I’ll close with another Goldmember quote “You see Austin Powers, I love goooold…”.

A Bug a Day keeps the PM worried

Welcome to my first blog post. I’ve been toying with the idea of blogging for a while. Let me start by telling you I’ve been testing for over 10 years. I’ve no idea where that time has gone. I’m obviously older (so greyer) and, whilst I nearly said ‘wiser’, I hesitate to use the word. To an extent it’s true, I suppose, but software testing in general is a constant learning curve. Regardless, you start to see trends and commonalities appearing, particularly working across different teams in different parts of an organisation, and across different organisations altogether. With the same challenges being faced, similar problems cropping up, and retrospectives carrying an air of déjà vu (something akin to the movie ‘Groundhog Day’), you’d be forgiven for feeling like you’ve seen or heard it all before.

Take defects, for example. A dyed-in-the-wool tester will take great comfort in the knowledge they have uncovered something unexpected. That the issue concerned has been identified thanks to their test effort and nobody else’s. I love defects for this reason. Surfacing these to stakeholders, pursuing a fix, and successfully re-testing puts an even wider smile on my face. Not only have problems been identified before being deployed to the end user, they have also been rectified. You have made the product/system better and the end user is blissfully none the wiser.

Surely everybody in the team shares this view, right? Not necessarily. I’ve learned time and again it’s nearly always a balancing act. The word ‘pragmatism’ is used an awful lot in software development circles. A defect which is raised needs to be carefully scrutinised in order to assess whether the defect in question is indeed valid (expected behaviour may have changed and nobody thought to inform the test team or update the requirements, user acceptance criteria, etc.). If it is a valid defect, what is the impact on the product, the audience, or the business as a whole (usually measured by assigning a severity rating)? A member of the team – say, a Project Manager – would rather ship something as soon as possible (or certainly hit a fixed deadline), and so it is up to the tester to illustrate how severe an issue is, either now or potentially in the future. Influencing skills are key. As is the ability to galvanise support from fellow team members, e.g. a Product Owner.

Risk deserves a mention here too. Fixing defects, or at least attempting to fix defects, often means fiddling with code. What if attempting to fix an issue actually makes matters worse? Think springs popping out. I quite like the analogy of pulling on a piece of cotton (or spaghetti, if you will) and it beginning to unravel elsewhere (you know what I mean, right?). Then (assuming you have persuaded others to entertain the very idea of fixing an issue) comes the usual bun fight over prioritising a fix. Is it a Showstopper? The immortal words from Project Managers, Technical Leads, and Product Owners alike. Does it need to be fixed ahead of the next release, or can we afford to revisit and pick this up later?

I guess testing as a whole can be seen by some as an obstacle, or some mysterious entity sent from the powers that be to slow down projects and generally throw spanners all over the place. If I had a pound for every time I’ve heard a member of the team say “Oh, what have you found now?” or “Don’t find any more issues, OK?”, I’d be able to retire early.

A forward thinking organisation will see the true value of testing. I dare say, the vast majority do (or at least say they do), but saying and doing are very different things of course. To balance this out, I want to mention the fact I’ve worked with some great people. One in particular, was a former Business Analyst who became a Product Owner. This particular individual was genuinely interested in the issues/observations and even recommendations from the test team. They advocated the importance of testing to the rest of the team and how this serves to give rise to successful product launches and updates. The prospect of something going wrong in live (or worse, shortly after being deployed in live) fills me with dread.

Whilst the whole software development team are responsible for the quality of the products/systems they decide to ship, another familiar-sounding question comes to mind: “Was this tested?” It’s important to recognise that the test team do not arbitrate what goes into live. We’re here to surface information. I like capturing issues just so we have something to reference should we ever need to in future. Maybe you’d like to perform some defect analysis during the post-project review, for example. Or maybe the wheels do come off in live, and when somebody asks if this was ever tested, or whether it was a ‘known issue’ ahead of release, then you have something to work against.

Over and above informing about issues found, we’re also empowered to make recommendations. Moreover, a tester may spot gaps in requirements and/or user journeys which haven’t been thought about in the earlier stages of the project lifecycle, etc. The list goes on.

In short, don’t worry, be happy.