Back to the Bug

I’m going for as many blog posts as I can muster whilst I have some time to spare. My idea for this post is nothing new but it’s one I can relate to. I’d love to have been Marty McFly (or maybe Biff Tannen sometimes as he had a cool car) but testing and time travel can share some commonalities. For example, have you ever felt like you’ve had the same conversation amongst your team or maybe the same conversation across different software development teams? Or perhaps you’ve seen a defect before or the symptoms are scarily similar to one you have encountered before – can you remember what the solution was? What was the cause? Did you spot this again by sheer coincidence or have you put preventative measures in place to ensure a particularly nasty looking bug would be identified should it ever come back to life?

Yes, that’s right, ladies and gerbils, welcome to the world of ‘Defect Prevention’. As much as we like seeing defects fixed and patting one another on the back for a job well done after release, have you taken some time to reflect on how these came to..ahem..pass (or fail would be more appropriate in this context)? It’s not a witch hunt though. Nobody should be pointing fingers here. This is about discovering what led to the error being introduced and seeing if there are ways and means of stopping such circumstances from recurring in future.

Maybe it was human error. Somebody somewhere screwed up. Yep, it happens. Maybe it was genuinely something unforeseen. Again, this happens. However, it is important to try to learn from these situations as best you can. For example, perhaps you’d bitten off more than you could chew when sprint planning? Maybe you hadn’t accounted for resource being spread so thinly due to ‘noise’, and what little resource you had available was coding away like a headless chicken, so mistakes were made or there hadn’t been time to code review ahead of test. So perhaps think about ways of shielding your resource from such ‘noise’ and put measures in place to ensure time is allotted for code reviews to take place. Or maybe you need to more closely assess levels of risk in future when tampering with the code base, particularly when there’s been a lot of refactoring going on in parallel to other ‘changes’. Maybe stripping out oodles of code mid-sprint wasn’t such a good idea after all, and/or implementing a shed load of significant new features in one go wasn’t the best approach. Could you have done this at a more suitable time or at a more manageable velocity? Sometimes unforeseen errors can happen when there are changes going on elsewhere (within the mystical back-end or higher up in the snowy regions of the stack) that you had not been made aware of. So communication problems (or just a general lack of communication) can see the wheels come off your product or system. Could you perhaps better manage, or certainly look to improve, this communication with your external dependencies in future? The list goes on..and on..and on.

I’m a big fan of continuous improvement. No, not because this often overused and underrated term just sounds good either. It’s just common sense. Retrospectives are one of the perfect times for development teams to come together and share meaningful feedback. No mud-slinging. No getting carried away with the jumbo pens and colourful post-it notes. Simply focusing on learning from your previous experiences, good and bad, is just sensible. Have those conversations with one another in a positive (hey, we’re all just trying to do a job here and work as part of a team) kind of way. This is much more likely to effect positive change. The flip side of this is to unwittingly encounter the same issues again and again. Around and around we go.

Back to the Bug though. That is the title of my blog post I guess. Say you’ve come across a real doozy. It turns out to make quite the impact. It’s a real big deal. One the whole team is going to have to deal with. Heads are being scratched but there is light at the end of the tunnel. ‘Let’s try to make sure this sort of thing doesn’t bite us again in future’ should be something you and the rest of the team are thinking. Should this way of thinking only apply to the Showstoppers though? Well, I would like to think once you get into the habit of not re-inventing the wheel every time and not feeling like you are fire fighting to get across the line, it should become second nature. Often, something fairly quick and easy is all that is needed to save you having to suffer the same fate again.

I’m a realist though. You’re never going to get it right every time. Nor can you hand on heart say something won’t happen again. However, if it does and you have the solution readily available, then maybe it won’t bite you quite as hard this time around.

Remember that doozy of a bug we spoke about? Supposing that was only spotted at the eleventh hour. Could you maybe run a test to try to capture this particular problem earlier in future? You can try I guess. At least you’ve tried. Yoda would say “do or do not, there is no try” but we’re not going to mix Star Wars metaphors with Back to the Future, are we? Scared? You will be..you will be.
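To make that concrete, here’s a minimal sketch of what such a test might look like. The bug, the function name and the numbers are all hypothetical – the point is simply that once a doozy has been fixed, an automated check can pin that behaviour down so you’ll know straight away if it ever comes back to life.

```python
# Hypothetical example: a past showstopper where basket totals could go
# negative when a discount exceeded the subtotal. The fix clamped the total
# at zero; this check captures the exact failing inputs so the defect is
# spotted immediately should it ever regress.

def apply_discount(subtotal: float, discount: float) -> float:
    """Apply a discount, never letting the total drop below zero (the fix)."""
    return max(subtotal - discount, 0.0)

def test_discount_cannot_produce_negative_total():
    # The inputs that triggered the original defect
    assert apply_discount(10.0, 15.0) == 0.0
    # Normal behaviour still holds
    assert apply_discount(20.0, 5.0) == 15.0

if __name__ == "__main__":
    test_discount_cannot_produce_negative_total()
    print("regression check passed")
```

Something this small, running on every build, costs next to nothing and saves you re-discovering the same bug the hard way.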

A Bug’s Life

My seventh blog post! I don’t think I’ve written about bugs enough. In my inaugural blog post I touched upon how these can generally be perceived but not about what makes a good bug report (if there is such a thing) and the typical life cycle a bug report finds itself subjected to once it has been raised.

So first things last, what is a bug? What is a defect? What is an issue? In my experience, terminology differs across teams and even across different organisations. Oh yes, we could babble on about definitions but if you are going to stick with your team’s common vernacular, is there any point?

The get-out-of-jail answer is: well, why not use whatever language works best for the team? Hmm. There’s a part of me that wants to champion best practice and, as politely and carefully as possible so as to avoid bruising egos, subtly influence (dare I even say, educate) others in the correct use of terminology. Let’s not get too sidetracked here. However, remember you are the tester. You are the one who has been on the training courses, read the books, and got the t-shirt. A lot of people outside of the test team ‘think’ they know about testing or understand everything there is to know about testing but are often quite misinformed. They could have been a PM for donkey’s years and think they know it all – not always. I recall a developer once exclaiming that they could do a tester’s job. Oh really. The irony was they might as well have tried since their coding abilities left a lot to be desired (think lots of bugs). Anyway I digress.

A defect, simply put, is a problem which hinders a particular aspect of the software from performing a particular function. A defect can be considered as something which deviates from the expected result and/or the original requirements. I find that some teams would rather use the word ‘bug’ than ‘defect’. Or that they are used interchangeably. Strictly speaking they are two different things. Defects can be caused by a variety of things. For example, a defect could be caused by a mistake within the code. Such mistakes are referred to as ‘errors’ or ‘bugs’. Or perhaps a defect has been caused by an ‘error’ in the design documentation. Whatever you call them, nobody should be looking to apportion blame. Though, some developers can be rather defensive when a defect is raised. After all, we are only human. Though it’s worth remembering we are also professionals, and I like to think every facet of a software development team is working together towards a common goal. In contrast to the defensive developers, I’ve had the pleasure of working with developers who are delighted the test team have spotted something which requires a fix before release. The upshot being the release stands a better chance of being a success and we all end up basking in the glory bestowed upon us by stakeholders. One team. A cliché maybe, but it’s so true.

So something is not as it should be. What do you do? Well, as Bob Hoskins once said in an old BT commercial, “it’s good to talk”, so providing there’s somebody around to talk to about what you’ve observed, think about mentioning this to the developer who worked on that particular feature. In the interests of a balanced argument, this may not always be practical let alone possible. Since, again, in my experience I’ve been in situations where you try to demo something or speak to a developer about something you’ve seen and they bite your head off as they’re in the middle of something or are heading off to a meeting shortly. Or they see you coming and they hide under their desk. No matter how hard you try to collaborate or try to talk things through, there will be instances where you’ll need to record something you’ve seen somewhere. You can’t keep everything in your head (remember, we’re only human) and so making a note of this in a notepad, on a whiteboard or within your defect management system (a.k.a. bug tracking tool) is inevitable. Lest we forget, having these recorded will pay dividends should you need to reference them again in future, or rely upon them when something goes wrong in live and you have evidence to show the problem was identified during testing but was not fixed ahead of release.

What sort of information do you need to capture in your defect/bug report? Well, I always start with a meaningful/descriptive title. Something concise if possible. In terms of contents, think about including relevant information such as the following (where appropriate): which build version you are logging the defect against; which piece of hardware you were using; which operating system and version; which environment; what data; what steps you followed; network connectivity information (e.g. Wi-Fi, Cellular, Broadband, Offline, etc); the expected result; the actual result; whether it is reproducible or not; if it is not 100% reproducible, how frequently it happens; whether the issue is currently affecting the live environment or not; screenshots; crash logs; video; any recommendations you may have; links to related defects; etc. Listen to developers and try to provide as much information as possible to help identify a cause and fathom how to fix the defect. Depending on your weapon of choice, you may be able to force certain information to be captured using mandatory fields.
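As a rough sketch, the fields above might look something like this when captured in a structured form. The field names and values are illustrative only – your defect management tool will have its own schema, and mandatory fields are one way of enforcing the essentials.

```python
# Illustrative bug report structure; field names are hypothetical, not tied
# to any particular bug tracking tool.
bug_report = {
    "title": "Login button unresponsive after session timeout",
    "build_version": "2.4.1 (build 1083)",
    "hardware": "iPhone 12",
    "os": "iOS 16.2",
    "environment": "Staging",
    "network": "Wi-Fi",
    "steps": [
        "Log in and leave the app idle for 30 minutes",
        "Tap the Login button on the timeout screen",
    ],
    "expected_result": "User is returned to the login form",
    "actual_result": "Button does nothing; no error shown",
    "reproducible": True,
    "frequency": "10/10 attempts",
    "affects_live": False,
    "attachments": ["screenshot.png", "crash.log"],
}

# A quick sanity check that the essentials are present before raising it --
# the equivalent of a tool's mandatory fields.
mandatory = {"title", "build_version", "steps", "expected_result", "actual_result"}
missing = mandatory - bug_report.keys()
print("ready to raise" if not missing else f"missing fields: {missing}")
```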

I need to mention priority and severity. It is a good idea to rate defects for both of these respectively. However, I would argue that it should be the tester who indicates how severe the defect is, and priority should be agreed with those responsible for the product or system. It could well be the case that a defect is not going to be prioritised for fixing at all. Which is fine (although at times disappointing) but at least you have done your job by informing them of what could happen. If the powers that be decide not to spend time and effort fixing the defect then so be it. Sometimes you just have to suck it up. At least you have a record of it though, right? Right. By testing and adequately reporting your results, stakeholders can make informed judgements and decisions. Priority levels can typically be indicated on a numerical scale. For example, a P1 would be the highest priority and maybe a P4 is the lowest in some organisations. So using this as an example, a P1 would be considered a ‘Showstopper’, a P2 would be considered a ‘High’, a P3 would be a ‘Normal’, and a P4 would be a ‘Low’ priority defect. Then you’d have another scale for severity levels (you get the idea).

So you have a bunch of open defects. Now what? Well, you’re already actively communicating these to the team and have these visible on your team whiteboard or within your defect management system (a.k.a. bug tracking tool). Great. However, as another old saying goes, ‘you can take a horse to water but you can’t make it drink’. You need to be ensuring open defects are being ‘managed’. There is sometimes a danger of allowing these to pile up, and I personally like to keep on top of defects before they start to get on top of you. One such method is by scheduling a ‘defect/bug triage’ session. These don’t have to occur on a daily basis but use your judgement and get people together as and when you feel is necessary. Maybe have representatives from different areas of the team present e.g. Product, Dev, UX&D. Start by ordering your open defects in severity order and go through each one (those which do not already have a priority first and foremost) and strive to gain agreement as to whether these should remain open or not. If they are to remain open, assign a priority and agree the next steps. Maybe something needs testing further to aid decision making or maybe there is enough detail for a developer to investigate and hopefully go away and fix, etc.
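The ordering step can be sketched in a few lines. The severity labels and defect records here are made up for illustration – the idea is simply to surface un-prioritised defects first, most severe at the top, before the triage session begins.

```python
# Illustrative severity scale; use whatever labels your team has agreed.
SEVERITY_RANK = {"Showstopper": 0, "High": 1, "Normal": 2, "Low": 3}

# Hypothetical open defects pulled from a bug tracking tool.
defects = [
    {"id": "BUG-101", "severity": "Normal", "priority": "P3"},
    {"id": "BUG-102", "severity": "Showstopper", "priority": None},
    {"id": "BUG-103", "severity": "High", "priority": None},
    {"id": "BUG-104", "severity": "High", "priority": "P2"},
]

# Sort so that defects without an agreed priority come first,
# then by descending severity within each group.
triage_order = sorted(
    defects,
    key=lambda d: (d["priority"] is not None, SEVERITY_RANK[d["severity"]]),
)
print([d["id"] for d in triage_order])
```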

Now you should have a distilled set of open defects. They’ve been prioritised. You can smell the fixes in the air. Subsequently, the defects which have now been fixed should have their status set to reflect they’re ready for retest. In an ideal world, the fix will be successful. Your retest has passed and the defect can now be closed. Often though, defects will not pass retest the first time round. They get reopened and the developer needs to take another look. There have been times when, despite a concerted effort to fix a defect, the problem refuses to go away. This would be a perfect time for another triage session. Explore alternatives, for example: maybe there is a workaround for the user or maybe a minor design change could render the problem null and void. Or perhaps the effort to fix something far outweighs the value, and so you may end up agreeing to mark something as a ‘Will Not Fix’. This happens from time to time. At least you (or rather the team) tried, eh?

Defects can be valid and sometimes they can be invalid. Maybe the tester has misunderstood the expected result or maybe the requirements have changed but nobody thought to tell the tester who raised the defect. This can happen. In which case these defects are marked as ‘Invalid’. No big deal.

So a defect can be open, it can be invalid, it can be fixed and ready to retest, it can be closed, it can be reopened, or it can simply be a will not fix.
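Those statuses amount to a simple workflow, which could be sketched as a little state machine. The status names and allowed transitions here are illustrative – most bug tracking tools let teams configure their own workflow.

```python
# Illustrative defect life cycle: which status changes are allowed from where.
TRANSITIONS = {
    "Open": {"Fixed - Ready to Retest", "Invalid", "Will Not Fix"},
    "Fixed - Ready to Retest": {"Closed", "Reopened"},
    "Reopened": {"Fixed - Ready to Retest", "Will Not Fix"},
    "Invalid": set(),        # terminal
    "Closed": set(),         # terminal
    "Will Not Fix": set(),   # terminal
}

def move(status: str, new_status: str) -> str:
    """Validate a status change against the agreed workflow."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot move from {status!r} to {new_status!r}")
    return new_status

status = move("Open", "Fixed - Ready to Retest")  # developer delivers a fix
status = move(status, "Reopened")                 # retest failed
print(status)
```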

What about blocking defects, I hear you cry. There can be defects which prevent any further testing activity from taking place and are then considered to be blocking the test team. You may also find the fix itself is blocked and you have to wait for something else, e.g. a third-party component being updated, before the defect can be retested.

How old is the defect? I’ve seen instances of defect backlogs being allowed to accumulate lots of ageing open defects. This bothers my OCD somewhat. I like things to be kept clean and tidy. I would argue anything older than, say, 6 months (just a ballpark figure) should be closed off since you need to ask yourself whether you’re ever going to realistically fix them.

I think of defects as a form of currency. Granted, I probably wouldn’t be able to buy anything with them but they are valuable nonetheless. By surfacing these, informing others that they exist, by fixing them and slowly but surely building confidence in your product or system – can you really put a price on that?

The Old Regression Two Step

Welcome to my sixth blog post. If you’ve got this far and have been reading my previous blogs with intent, I want to thank you for sticking with me.

Slow, slow, quick, quick, slow. Which the folk dancers among us will find all too familiar. Not to be confused with the hardcore dance variant, you understand.

Testers, or better still a folk dancing tester, will have undoubtedly experienced working within several agile sprints. Innovating incrementally. Delivering working software to the audience on a regular basis. Depending on your ‘vertical slicing’ (you know – using your invisible cake slicer), some releases will contain a seemingly never-ending series of sprints. Particularly when you are constantly rolling over tickets into the next sprint. Yet the word ‘sprint’ makes me think ‘fast’ or to be at a heightened ‘pace’. 0-60mph in 3.4secs. Daley Thompson realising he’s left the gas on (showing my age there). You get the idea.

Some sprints can run like clockwork. Others, not so much. Some can be more arduous (or ‘challenging’ if you want to put a positive spin on this) and feel painfully slow. You know what I’m talking about – right? So we testers are anxiously wanting to forge ahead and get on with testing new and exciting features, etc, but we’re being blocked by something. Maybe a key dependency is still outstanding or somebody somewhere is proverbially dragging their heels and we’re waiting on them to finish something before we can proceed. Or we can test aspects of ‘x’, but ‘y’ and ‘z’ are still being developed. Urgh. So by the time we get around to testing ‘y’ and ‘z’, does this mean our test effort for ‘x’ will have been invalidated and we need to test this again? Or maybe you tested ‘x’ when you were told it was ready to test and later find out (usually at the eleventh hour) that the requirements changed or a last-minute UI change was made and it will need testing again. Around and around we go.

I’ve never been to a dance class myself but ‘slow, slow, quick, quick, slow’ does remind me of testing software. You have a few sprints which take time and effort with one thing and another…slow, slow…then all of a sudden there’s a big push for testing to be completed, including a comprehensive round of regression, and we’re expected to find every single bug in there in double-quick time before a deadline which has got to be hit no matter what…quick, quick…so we release into live and we’re back down to slow again. And breathe. Before we do it all over again. Sound familiar?

I’m not a control freak per se but testing as an activity needs careful planning and control. You cannot predict the future and so with the best will in the world, you are going to need to implement measures of control throughout the entire software development process. Think of a ship at sea when a storm hits. You’re still going to need to ensure the ship maintains its course and heading as much as possible and ultimately reaches the desired destination in a timely manner. Whatever you do, don’t go under. That would be bad.

Moreover, planning need not be a document heavy or time-consuming process, nor should it be. Being ‘agile’ does not mean you can simply dispense with test planning. That would be just silly. Take the time to set expectations with the rest of the team. Ensure you have made it clear what needs to be tested, why it needs to be tested, outline your dependencies, give an indication of how much resource you will need to complete this test effort and estimate how long all of this is going to take.

Influencing skills are very important to effect positive change or even just to gain ‘buy in’ from the rest of the team. The more you open up the mystery that is the world of software testing, the more the team will understand and empathise with your situation. You may even start to hear team members saying “well, we still need to do this and we also need to think about how this affects test effort” or “we’ll need to ensure this is done ahead of starting test execution so we’ll be testing what we intend to ship”, etc. Pretty amazing, eh?

If all of this helps to avoid the quick, quick mad rush at the end and avoids the regression testing window being squeezed then surely this is a good thing. It might not look as good on the dance floor but the test manager will hopefully applaud your choreography. Encore! Encore!

Moreover, it’s worth considering the merits of test automation and how this could be used to undertake regression testing in parallel to testing new functionality. Having the peace of mind that these automated tests (let’s call them checks) are passing as you progress within each sprint will give the team confidence as you work towards a release. This, in turn, may also reduce the amount of any manual regression testing you may feel is required. Like I keep saying, you’re going to need both manual tests as well as automated checks/tests if you’re going to dance in an agile fashion.
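As a sketch of the idea, an automated regression suite might be nothing more than a set of checks run against every build while manual testing of new features continues in parallel. The checks below are trivial stand-ins for real end-to-end or API tests, and all the names are invented for illustration:

```python
# Hypothetical automated regression checks; in reality each would drive the
# product under test (UI, API, etc) rather than simply return True.

def check_login_page_loads() -> bool:
    return True  # stand-in for a real end-to-end check

def check_basket_total_correct() -> bool:
    return True  # stand-in for a real end-to-end check

REGRESSION_CHECKS = [check_login_page_loads, check_basket_total_correct]

def run_suite(build_version: str) -> bool:
    """Run every regression check against the given build and report."""
    failures = [c.__name__ for c in REGRESSION_CHECKS if not c()]
    if failures:
        print(f"build {build_version}: FAILED {failures}")
        return False
    print(f"build {build_version}: all {len(REGRESSION_CHECKS)} checks passed")
    return True

run_suite("2.4.1")
```

Seeing these pass on each build as the sprint progresses is what buys the team that confidence, and shrinks the manual regression window at the end.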

Release Candidate or Rubbish Candidate?

Hello again. This is my fifth blog post and I wanted to talk about builds. A tester without a build is a bit like a demolition expert without something to demolish. Not that we are in the business of destruction of course, though I have been known to pull a functional specification apart with my bare hands. Metaphorically speaking of course. I wouldn’t want to risk cutting a finger on a sharp staple or anything. After all, we testers are only human and are not the robotic machines some would have us think we are. Beep. Whirr.

So where shall we start with builds then? Maybe the frequency of them would be worth a mention. Daily builds are a by-product of continuous integration, I guess you could say. Particularly within an agile environment. I don’t necessarily have a problem with this providing each check-in is suitably tested to stand a fighting chance of identifying problems as early as possible. The worry would be if a flurry of code check-ins caused a big old mess. Thankfully, it’s good practice to develop new features, architectural changes, etc on a separate ‘branch’ of code, rather than on ‘trunk’ or ‘master’ in Git parlance. Get it tested and working properly on branch to give you the confidence required ahead of merging. Version control systems are great. Not infallible, but still pretty neat.

As a tester, I am all too familiar with being very careful over which particular build version I am testing against. I cannot stress how important this is. You wouldn’t want to expend a whole bunch of time and effort testing away only to find you’ve been using the wrong build. If I had a penny for the times I’ve overheard a developer use an expletive when they have no sooner compiled and created a build than realised something was not included and so had to create another build version. If you’re very lucky, the developer will make this known to you. In other instances, you need to keep your eyes and ears open. Think Superman fine tuning his super hearing ability. So when testing, make it very clear which build version you are testing, if only to prompt somebody to tell you whether this is in fact the correct one to use. I always record the build version I’ve tested when test reporting and always specify the build version when logging a bug report. This is bread and butter type stuff for any good tester.
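One small way to make this bread-and-butter habit harder to forget is to stamp the build version into every recorded result automatically. This is a hypothetical sketch – in practice the version might come from an About screen, an API endpoint or a build artefact rather than a hard-coded string:

```python
# Hypothetical helper: every test result carries the build version it was
# recorded against, so there is never any doubt which build was tested.

def get_build_version() -> str:
    # Stand-in; in reality, read this from the application under test.
    return "2.4.1 (build 1083)"

def record_result(test_name: str, passed: bool) -> dict:
    return {
        "test": test_name,
        "passed": passed,
        "build_version": get_build_version(),  # captured automatically
    }

result = record_result("test_checkout_flow", True)
print(result["build_version"])
```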

Typically, with each passing sprint, a tester will verify a whole heap of features. Not too many mind, but a reasonable amount to plough through for the given sprint window e.g. 2/3 weeks, with the intention of releasing working software on a regular basis. This will inevitably culminate in a shiny final build version, or rather a ‘Release Candidate’. Something you’d be prepared to ship. But wait, perhaps we need to ensure this is going to behave as expected prior to sending this out to the masses. Maybe a code change here or a code change there along the way (all aboard the release train..choo choo!) has affected something which was previously working. This would be known as a regression defect, and regression testing is something not to be taken lightly.

Stakeholders need a certain level of confidence prior to making a decision whether to release something into the live environment. For example, you may have a huge audience who have high expectations, since your product or system is renowned for its reliability and performance. You have a brand reputation and an image to uphold. Maybe something going wrong in live will have monetary implications e.g. fines, etc. Maybe it’s something new to market and any failure would deter users from returning in future. In an ideal world, regression testing is something you perform as a matter of routine. This needn’t always be scripted of course, but I prefer a balance. So for example, I would verify a new feature is working first and foremost. This may entail testing a series of scenarios for that feature, ensuring the acceptance criteria are satisfied. Then, over and above the user journeys described, I’d start to think outside the box and test around the ‘edges’, trying to capture aspects of behaviour which haven’t necessarily been considered upfront. Finally, before closing the ticket, I’d spend a little time ensuring everything else is working as normal. Making sure pre-existing behaviour has been unaffected. Checking it is all still hanging together, etc. That sort of thing.

Circling back then to the Release Candidate I mentioned before. You’ve tested all the new features and you’re reasonably happy everything is in good shape. The sensible option would be to then conduct a risk-based regression test execution phase against the Release Candidate build. Putting the cherry on top, as it were, before releasing to the masses. It’s good practice to talk. Talk to your developers. Pair up if necessary. Maybe request some release notes from them or, if these aren’t forthcoming (at least in written form), get together and pick their brains. Discuss the areas of code that have changed. Ask them where they perceive the risk to be around these code changes and the impact this could have on areas of behaviour (whether functional or non-functional). Make notes. Share your experiences and recall past defects or issues or problems you’ve faced previously. Is there any way these can be avoided and/or improved upon this time around in the interests of continuous improvement?

Having had these conversations and reading any supporting material, the tester should be able to formulate a plan of sorts. This all sounds terribly formal but it really is only as complicated as you and your team wish to make it. It’s worth spending time assessing perceived risk and understanding what needs to be tested before you even touch the release candidate. Perhaps, whilst this planning is underway, somebody else could be running some smoke tests against the RC just to uncover any obvious issues in parallel. So you have understood what your regression test scope is and everybody knows what they have to do.

The starting pistol fires and they’re off! The risk-based regression test execution phase is underway. Oh wait. You’ve found a problem. Looks serious. It could be a..gulp..showstopper. You demo the issue to the team. It looks to be reproducible. You have a definitive set of steps to reproduce. The PM looks worried (even more worried than usual). The product owner has a frown on their usually calm looking face. “We need to fix this, guys,” they say. “This can’t go live.” Silence. The team now faces a dilemma of sorts. Do you throw on the brakes and stop regression testing in the knowledge there is a priority fix winging its way in another release candidate, or do you continue with your regression testing on the off-chance something else is broken and requires fixing? Hmm. Experience has taught me that it very much depends on (i) whether you can continue testing at all and/or (ii) how much test execution is remaining. There will be times where you have to down tools and wait for the next rubbish candidate..sorry..release candidate to become available. Then there will be times when you may as well finish this round of regression testing to see what else you uncover, with the knowledge that a further round of re-testing and regression testing will be necessary. Regardless, it is a team decision as you are all responsible for the quality of software you release. There have even been instances of pushing something to live with the knowledge something is broken, with the intention of issuing a ‘patch fix’ shortly after. Notwithstanding safety-critical software, perhaps. That would be a bad idea.

The retrospective should be the place to provide feedback and, as a team, come up with ways of avoiding such problems arising in future. Maybe include an automated/manual test to capture the showstopper defect earlier in future if it ever decides to return, or at least to ensure it remains in a fixed state. Maybe look at ways of preventing such defects arising if this is at all possible. Could you have realistically caught this defect earlier? Sometimes this is easier said than done, however, if you consider the defect in question may only have been introduced in the latter stages of the development cycle. Could have been human error or it could genuinely have been unforeseeable. This is why we test, right!?

Whatever happens, you don’t want to fall into the seemingly never-ending loop of release candidates. Been there. Done that. Not got a t-shirt mind you. Ooh, now there’s an idea!

Tester or Sommelier?

Welcome to my fourth blog post. Blogging is rather cathartic and much like uncorking a bottle of wine, ideas of what to write about are simply pouring out of me. So, yes, wine and with that another cryptic title. The wine connoisseurs among you will undoubtedly know what a ‘Sommelier’ is already. For those that don’t then without going into too much detail, this is somebody who is like an uber wine waiter. Think fine dining. Think very specialist. But how does this relate to software testing I hear you cry.

Testers, Test Analysts, Test Engineers (no Testes references please), etc are specialist roles too. I’m going to deliberately circumvent ‘QA’ as this is something entirely different to software testing. Yes, we are special, aren’t we? We need to be across pretty much everything that is going on within our project(s). For example, we need to be across requirements; features; user journeys; use cases; expected behaviour; user acceptance criteria; user interface; navigation; layout; compatibility; the order in which these are going to be built; when something is ready for testing; how it is going to be tested (to include identifying any pre-requisites); dependencies; planning; estimation; execution; defect management; reporting, etc. The list goes on.

Much like being in a restaurant (not so much a Wetherspoons however), you may hear somebody ask the waiter “What do you recommend?” when pretending to know the difference between a bottle of Cabernet Sauvignon and Cabernet Severny. This is a particularly relevant question and is something which we as testers need to be able to answer as and when appropriate.

The key point I want to make with this blog post is to understand the test team do not and should not arbitrate what version of software is released. That’s not what we are here for. A common misconception. One of the test teams many responsibilities is surfacing information. Not only to just blindly surface, but to surface this information to the right people and at the right time. Going further, the tester will need to tailor their communication style/language depending on to whom they are reporting into. Whilst any good tester will revel in detail, it is important to be able to disseminate the right level of detail to the right audience. Then the timing of your updates needs to be considered. Invariably, stakeholders will want to know about high severity issues as soon as possible. So any Showstoppers you find or for that matter, any Blockers you encounter, need to be fed back as soon as possible. A high severity issue might warrant a fix right there and then, in which case further testing is deemed unnecessary or impossible until the next build is made available since you wouldn’t want to invalidate any precious test effort (at least no more than necessary – sometimes it is unavoidable). Or the flip side would be to surface the high severity issue asap so it is on the teams radar and the developers can start to identify what may have gone wrong in parallel to you continuing with your test execution against a consistent build version. Since, who knows, you may find more high severity issues which also require fixing. In which case, it’d be sensible to minimise the number of test builds or release candidates being created by addressing multiple fixes at once. Think killing two birds with one stone. There’s a real danger sometimes of falling into a vicious circle of never-ending release candidates being sent back into test because you are stop/starting all the time. 
Whilst there is a tendency to knee-jerk and fix a bug at the drop of a hat, with the developer saying “Oh, by the way, here is another release candidate for you”, this can sometimes be counter-productive in the long run. Builds need to be carefully managed in such a way that the test team can gauge perceived levels of risk and factor this in when determining the scope of regression testing (assuming the latest fix or fixes are retested successfully, of course).

Over and above surfacing information, the test team should be empowered to make recommendations. These could be recommendations formed on the basis of their test effort or indeed from personal experience, or both. I know from my own experience that open issues might not have been fully understood by others in the team, nor what downstream impact these may have on the audience. It could be the frequency of something happening (e.g. is it 100% reproducible?) which sways opinion on whether to release, or possibly the ease of discovery itself (e.g. does the bug happen by following a common user journey or is it more of an edge case?). In the frenzy to close issues off, a good tester needs to be able to convey these considerations.

I tend to become emotionally attached to a product. I want it to be the best product it can possibly be. I want the release to go smoothly and to rapturous applause from stakeholders and audience alike. So there have been times where I have recommended that a certain feature be implemented, or that we change the colour of something as trivial as, say, a progress bar to be consistent with the other progress bars within the product. All to make it better. Sometimes I’ve had to persevere and at times be tenacious about something.

So there will be times where your test recommendations are actively sought by others, and there will be times where you will feel compelled to make recommendations whether requested or not. More often than not these have been greeted with the immortal words “Oh yeah, good point, we hadn’t thought of that Steve”. The test team need to be aware of the big picture and bring this to the fore in team discussions.

I’m off to find a corkscrew and a large glass. Cheers!

Exterminate! Automate!

Hello again. Welcome to my third blog post. I’d like to muse about test automation. Yep, that old chestnut. I’ll be honest and say I fell into the world of testing. At the outset I was given a choice, a proverbial fork in the road, as to whether I wanted to pursue a career as a developer or as a tester. I didn’t deliberate for very long. Though I have a curiosity about programming and understanding how things work, my passion for testing and general aptitude for breaking things far outweighed any possibility of becoming a developer. I recall my dad telling me as a kid “Steven. You could break an iron ball.” implying I broke the sturdiest of toys with considerable ease and/or had a natural propensity for identifying problems. The stage, as they say, was set. Little did I know, I’d be using such skills as a part of my future career in software testing.

Oh yes, the title. I need to explain that one. By now you’ll have realised I like to use attention-grabbing titles for my blog posts. This one is for all you Doctor Who fans out there. I often hear cries to automate something. Sometimes this can almost feel incessant, much like the motorised dustbins you see chasing the good Doctor and his faithful companion. Not even a set of stairs can fool them nowadays. That’s progress for you. Well, with the advent of continuous delivery, the cries for automation seem even louder than usual. You’ll also be familiar with the usual bun fight over how long you’ll need to undertake regression testing, and vocal members of the team saying “Can’t we just automate this and save time?” with their eyebrows at 45 degrees. We manual testers are soooo slow, aren’t we. There have been times when, no sooner have I started test execution, I’ve been asked whether I have finished yet. Face palm. You either want the confidence to know your software is behaving as expected before releasing to your audience or you don’t.

I’ll try not to rant here, but ‘Testing’ is a discipline like any other. Take ‘Programming’ for example (or ‘Coding’ or whatever else you wish to call it) as another discipline. Or ‘Business Analysis’ as another. I’ve rarely witnessed anybody pressuring a developer to finish coding something, or a BA to finish writing user acceptance criteria, in double quick time. Yet us testers sometimes get a raw deal. Whether following a waterfall methodology or not, we’re usually the penultimate ones who need to look at something before a stakeholder presses the big red button to deploy into the live environment. Typically, it can feel like the stakeholder is looking over your shoulder, tapping their watch in an ever-so-unsubtle way (and coughing at the same time as muttering the words “let’s ship it”). We often feel the squeeze.

So yes, we want to release more frequently. Yes, we wish to develop and ship software like a well-oiled machine. Yes, we want the audience to benefit from new and exciting features as soon as possible. We all do. Everybody in a software development team should share these goals. Some of this is easier said than done however. Automation is not for the faint-hearted. Another cliché for you: ‘fools rush in’. Your team needs to give this a lot of thought before diving in.

I’ll cut to the chase and say you are more than likely going to need both manual and automated test effort. Well, you can’t automate everything, just as you can’t realistically test everything either. You could certainly try, but you’ll soon come to realise that the effort employed far exceeds the value in trying to automate certain tests.

Then there’s the whole human element to consider. As I explained earlier, some testers are naturally predisposed to identifying problems and breaking things. You can’t automate that. Machines (Daleks?) are great for some things but they certainly have their flaws. You can’t automate experience, instinct, intuition, etc. These traits are what help testers identify defects a machine couldn’t possibly uncover. If I had a penny for every time I demonstrated a defect to somebody and was asked “how on earth did you do that?”. Writing scenarios, coming up with acceptance criteria and so on is great, but there will be user journeys that haven’t necessarily been considered upfront.

Then there are the subtle nuances which affect behaviour across different platforms and devices. Something may work just fine on one browser/device but be completely screwed on another – would your automated tests always capture these instances, or possibly give you a bum steer if everything is showing as passing, I wonder? Oh look, everything is green. So false positives are also something to think about.

What else is there to think about then? How about setup and maintenance – this is a real doozy. You’ll need to carefully consider which automated solution you opt for; not only the expense in the conventional sense, but the cost of maintaining it for the foreseeable future. Automated tests are invariably brittle (particularly when testing at the UI level). Are you going to use live data or canned data (the latter can also lead to false positives)? What if something somewhere changes, with or without your knowledge, and breaks your valuable tests? Who is going to pick things up when they fall over? What if there are only a chosen few in the office who really understand how it all hangs together and they aren’t available? I mean, they could have been taken ill or have left the company altogether – taking that precious knowledge with them.
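To make the canned data point concrete, here is a minimal sketch of how a frozen fixture can keep an automated check green while the live data has drifted. Everything here is hypothetical – the response shapes, the `format_price` parser, the lot – it’s an illustration, not any real API.

```python
# Hypothetical sketch of a canned-data false positive: the automated check
# keeps passing against its frozen fixture long after the live feed changed.

CANNED_RESPONSE = {"price": "9.99"}                 # fixture captured when the test was written
LIVE_RESPONSE = {"price": 9.99, "currency": "GBP"}  # the live feed later switched to a number

def format_price(response):
    # Parser written against the canned shape: assumes price is a string
    return "£" + response["price"]

def canned_check_passes():
    # The nightly automated check: still green against the fixture...
    return format_price(CANNED_RESPONSE) == "£9.99"

def live_call_survives():
    # ...while the same code falls over on what the audience actually sees
    try:
        format_price(LIVE_RESPONSE)
        return True
    except TypeError:
        return False

if __name__ == "__main__":
    print("canned check:", canned_check_passes())  # True - dashboard stays green
    print("live call ok:", live_call_survives())   # False - users hit the bug
```

The dashboard stays green because the check only ever exercises data frozen at the moment it was written – exactly the “oh look, everything is green” trap.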

Don’t get me wrong here. There is absolutely a place for test automation, and there are benefits to be had for sure. Imagine having those mind-numbingly tedious tasks removed by an automated solution, freeing your time up to manually test the more complex stuff (the good stuff). In my experience, you need to weigh everything up and agree with your team what you definitely need to automate and what you definitely do not. The caveat here would be to identify a middle ground as well: tests you’d like to automate, but at a later date. Start with the high value, straightforward-to-automate tests – the ones which are going to serve you by being run repeatedly against each new build version. My previous blog talked about high value smoke tests and these would be a good place to start.
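As a rough sketch of that “start small, high value” idea: a first automated smoke pass might be nothing grander than a handful of fast checks run against every new build. The `service_health` and `login` functions below are made-up stand-ins for whatever your real build exposes (a health endpoint, a login flow, and so on).

```python
# Minimal, hypothetical smoke suite: a few fast, high-value checks run
# against each new build version. The "service" here is a stand-in.

def service_health():
    """Stand-in for e.g. polling a health endpoint on the build under test."""
    return {"status": "ok", "version": "1.4.2"}

def login(username, password):
    """Stand-in for the product's login flow."""
    return username == "demo" and password == "demo"

def run_smoke_tests():
    results = {}
    # 1. Is the build alive at all?
    results["health"] = service_health().get("status") == "ok"
    # 2. Can a known-good user get in?
    results["login"] = login("demo", "demo")
    # 3. Is a known-bad login rejected? (catches "always passes" regressions)
    results["bad_login_rejected"] = not login("demo", "wrong")
    return results

if __name__ == "__main__":
    outcome = run_smoke_tests()
    for name, passed in outcome.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    # Reject the build outright if any smoke check fails
    assert all(outcome.values()), "Smoke tests failed - reject this build"
```

The value isn’t in the sophistication – it’s that these few checks run identically against every release candidate, freeing the manual effort for the good stuff.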

Whether automated or manual, the whole point of testing is to find defects and exterminate them! If you’ll excuse me, I need to find somewhere to park my TARDIS (I’m actually heading out for some lunch).