My seventh blog post! I don’t think I’ve written about bugs enough. In my inaugural blog post I touched upon how bugs are generally perceived, but not what makes a good bug report (if there is such a thing) or the typical life cycle a bug report finds itself subjected to once it has been raised.
So first things first, what is a bug? What is a defect? What is an issue? In my experience, terminology differs across teams and even across different organisations. Oh yes, we could babble on about definitions, but if you are going to stick with your team’s common vernacular, is there any point?
The get-out-of-jail answer is: why not just use whatever language works best for the team? Hmm. There’s a part of me that wants to champion best practice and, as politely and as carefully as possible, avoid bruising egos by subtly influencing (dare I even say, educating) others in the correct use of terminology. Let’s not get too sidetracked here. However, remember you are the tester. You are the one who has been on the training courses, read the books, and got the t-shirt. A lot of people outside of the test team ‘think’ they know about testing or understand everything there is to know about testing, but are often quite misinformed. They could have been a PM for donkey’s years and think they know it all – not always. I recall a developer once exclaiming that they could do a tester’s job. Oh really. The irony was they might as well have tried, since their coding abilities left a lot to be desired (think lots of bugs). Anyway, I digress.
A defect, simply put, is a problem which prevents a particular aspect of the software from performing a particular function. A defect can be considered as something which deviates from the expected result and/or the original requirements. I find that some teams would rather use the word ‘bug’ than ‘defect’, or that they are used interchangeably. Strictly speaking, they are two different things. Defects can be caused by a variety of things. For example, a defect could be caused by a mistake within the code. Such mistakes are referred to as ‘errors’ or ‘bugs’. Or perhaps a defect has been caused by an ‘error’ in the design documentation. Whatever you call them, nobody should be looking to apportion blame. That said, some developers can be rather defensive when a defect is raised. After all, we are only human. Though it’s worth remembering we are also professionals, and I like to think every facet of a software development team is working together towards a common goal. In contrast to the defensive developers, I’ve had the pleasure of working with developers who are delighted the test team have spotted something which requires a fix before release. The upshot being the release stands a better chance of being a success and we all end up basking in the glory bestowed upon us by stakeholders. One team. A cliché maybe, but it’s so true.
So something is not as it should be. What do you do? Well, as Bob Hoskins once said in an old BT commercial, “it’s good to talk”, so provided there’s somebody around to talk to about what you’ve observed, think about mentioning this to the developer who worked on that particular feature. In the interests of a balanced argument, this may not always be practical, let alone possible. In my experience, I’ve been in situations where you try to demo something or speak to a developer about something you’ve seen and they bite your head off because they’re in the middle of something or are heading off to a meeting shortly. Or they see you coming and they hide under their desk. No matter how hard you try to collaborate or talk things through, there will be instances where you’ll need to record something you’ve seen somewhere. You can’t keep everything in your head (remember, we’re only human), so making a note of it in a notepad, on a whiteboard or within your defect management system (a.k.a. bug tracking tool) is inevitable. Lest we forget, having these recorded will pay dividends should you need to reference them again in future, or rely upon them when something goes wrong in live and you have evidence to show this was identified during testing but was not fixed ahead of release.
What sort of information do you need to capture in your defect/bug report? Well, I always start with a meaningful, descriptive title. Something concise if possible. As for the contents, think about including relevant information such as the following (where appropriate):

- the build version you are logging the defect against;
- the hardware you were using;
- the operating system and its version;
- the environment;
- the data you were using;
- the steps you followed;
- network connectivity information (e.g. Wi-Fi, Cellular, Broadband, Offline, etc.);
- the expected result;
- the actual result;
- whether it is reproducible or not, and if it is not 100% reproducible, how frequently it happens;
- whether the issue is currently affecting the live environment;
- screenshots, crash logs, or video;
- any recommendations you may have;
- links to related defects.

Listen to developers and try to provide as much information as possible to help identify a cause and fathom how to fix the defect. Depending on your weapon of choice, you may be able to force certain information to be captured using mandatory fields.
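To make the shape of such a report concrete, here is a minimal sketch of those fields modelled as a data structure. The field names and the example defect are purely illustrative assumptions on my part, not taken from any particular bug tracking tool:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch of a bug report; field names are my own invention.
@dataclass
class BugReport:
    title: str                           # meaningful, descriptive, concise
    build_version: str                   # build the defect was observed against
    environment: str                     # e.g. "staging", "live"
    steps_to_reproduce: list             # ordered steps that trigger the defect
    expected_result: str
    actual_result: str
    hardware: Optional[str] = None       # device the defect was seen on
    os_version: Optional[str] = None
    connectivity: Optional[str] = None   # e.g. "Wi-Fi", "Cellular", "Offline"
    reproducibility: str = "always"      # or e.g. "3 in 10 attempts"
    affects_live: bool = False
    attachments: list = field(default_factory=list)      # screenshots, logs, video
    related_defects: list = field(default_factory=list)  # links to related defects

# A hypothetical example defect using the fields above.
report = BugReport(
    title="Login button unresponsive after session timeout",
    build_version="2.4.1",
    environment="staging",
    steps_to_reproduce=["Log in", "Wait 30 minutes", "Tap Login"],
    expected_result="User is prompted to re-authenticate",
    actual_result="Button does nothing; no error shown",
)
```

The mandatory fields mentioned above map naturally onto the required (defaultless) attributes; everything else is optional but valuable.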
I need to mention priority and severity. It is a good idea to rate defects on both. However, I would argue that it should be the tester who indicates how severe the defect is, while priority should be agreed with those responsible for the product or system. It could well be the case that a defect is not going to be prioritised for fixing at all. Which is fine (although at times disappointing), but at least you have done your job by informing them of what could happen. If the powers that be decide not to spend time and effort fixing the defect, then so be it. Sometimes you just have to suck it up. At least you have a record of it though, right? Right. By testing and adequately reporting your results, stakeholders can make informed judgements and decisions. Priority levels can typically be indicated on a numerical scale. For example, a P1 would be the highest priority and maybe a P4 is the lowest in some organisations. So using this as an example, a P1 would be considered a ‘Showstopper’, a P2 would be considered a ‘High’, a P3 would be a ‘Normal’, and a P4 would be a ‘Low’ priority defect. Then you’d have another scale for severity levels (you get the idea).
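The split between the two scales can be sketched in a few lines. The priority labels follow the example above; the severity labels are my own illustration, since only the priority scale is spelled out here:

```python
from enum import IntEnum

# Priority labels as per the example scale above.
class Priority(IntEnum):
    P1 = 1  # Showstopper
    P2 = 2  # High
    P3 = 3  # Normal
    P4 = 4  # Low

# Severity labels are assumed for illustration; organisations vary.
class Severity(IntEnum):
    S1 = 1  # Critical
    S2 = 2  # Major
    S3 = 3  # Minor
    S4 = 4  # Cosmetic

# The tester indicates severity when raising the defect;
# priority starts unset and is agreed with the product owners later.
defect = {"severity": Severity.S2, "priority": None}
defect["priority"] = Priority.P3  # set after discussion, e.g. in triage
```

Keeping priority unset until it has been agreed makes it easy to find the defects that still need a triage decision.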
So you have a bunch of open defects. Now what? Well, you’re already actively communicating these to the team and have them visible on your team whiteboard or within your defect management system (a.k.a. bug tracking tool). Great. However, as another old saying goes, ‘you can take a horse to water but you can’t make it drink’. You need to ensure open defects are being ‘managed’. There is sometimes a danger of allowing these to pile up, and I personally like to keep on top of defects before they get on top of me. One such method is scheduling a ‘defect/bug triage’ session. These don’t have to occur on a daily basis, but use your judgement and get people together as and when you feel is necessary. Maybe have representatives from different areas of the team present, e.g. Product, Dev, UX&D. Start by ordering your open defects by severity and go through each one (starting with those which do not already have a priority), striving to gain agreement as to whether they should remain open or not. If they are to remain open, assign a priority and agree on the next steps. Maybe something needs testing further to aid decision making, or maybe there is enough detail for a developer to investigate and hopefully go away and fix, etc.
Now you should have a distilled set of open defects. They’ve been prioritised. You can smell the fixes in the air. Subsequently, the defects which have now been fixed should have their status set to reflect that they’re ready for retest. In an ideal world, the fix will be successful. Your retest has passed and the defect can now be closed. However, defects will not always pass retest first time round. They get reopened and the developer needs to take another look. There have been times when, despite a concerted effort to fix a defect, the problem refuses to go away. This would be a perfect time for another triage session. Explore alternatives: for example, maybe there is a workaround for the user, or maybe a minor design change could render the problem null and void. Or perhaps the effort to fix something far outweighs the value, and so you may end up agreeing to mark something as a ‘Will Not Fix’. This happens from time to time. At least you (or rather the team) tried, eh?
Defects can be valid and sometimes they can be invalid. Maybe the tester has misunderstood the expected result or maybe the requirements have changed but nobody thought to tell the tester who raised the defect. This can happen. In which case these defects are marked as ‘Invalid’. No big deal.
So a defect can be open, it can be invalid, it can be fixed and ready to retest, it can be closed, it can be reopened, or it can simply be a ‘will not fix’.
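That life cycle can be sketched as a set of allowed state transitions. The state names follow the ones above; the transition map itself is my own reading of the flow, not a standard:

```python
# Allowed state transitions for a defect; an assumed map, for illustration.
TRANSITIONS = {
    "open":            {"invalid", "will_not_fix", "ready_to_retest"},
    "ready_to_retest": {"closed", "reopened"},
    "reopened":        {"ready_to_retest", "will_not_fix"},
    "invalid":         set(),   # terminal
    "closed":          set(),   # terminal
    "will_not_fix":    set(),   # terminal
}

def move(current: str, target: str) -> str:
    """Return the new state, or raise if the transition is not allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move defect from {current!r} to {target!r}")
    return target

# Walking one defect through the cycle described above:
state = "open"
state = move(state, "ready_to_retest")  # developer delivers a fix
state = move(state, "reopened")         # retest fails
state = move(state, "ready_to_retest")  # second fix attempt
state = move(state, "closed")           # retest passes
```

Encoding the transitions like this (as some tracking tools let you do via workflow configuration) stops a defect jumping straight from open to closed without a retest.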
What about blocking defects, I hear you cry. There can be defects which prevent any further testing activity from taking place and are then considered to be blocking the test team. You may also find that a fix for a defect is itself blocked, waiting on something else, e.g. a third-party component being updated, before it can be retested.
How old is the defect? I’ve seen instances of defect backlogs being allowed to accumulate lots of ageing open defects. This bothers my OCD somewhat. I like things to be kept clean and tidy. I would argue anything older than, say, 6 months (just a ballpark figure) should be closed off, since you need to ask yourself whether you’re ever realistically going to fix them.
I think of defects as a form of currency. Granted, I probably wouldn’t be able to buy anything with them but they are valuable nonetheless. By surfacing these, informing others that they exist, by fixing them and slowly but surely building confidence in your product or system – can you really put a price on that?