Can you program a Manual Tester?

I’ve blogged about test automation before but I never touched on exactly how this is put into action. I’ll get out of jail early and say this is all based on what I’ve seen/heard/experienced and it won’t be the same for everyone.

As I've said before, I was given a choice back in the day – did I want to become a developer or a tester – and it never really occurred to me that you could be both (in a manner of speaking). The elusive hybrid individual I refer to goes by all sorts of names out there in the big wide world, e.g. Developer-in-Test, Automation Tester, QA Automation Engineer, etc. These folks usually get swept up in all things test automation, whether that's setting up frameworks, writing tests, maintaining the environment and so on. It can quickly become all-consuming. Many companies seem to be recruiting heavily for these roles of late (let's call them Unicorns), but you have to wonder whether their alleged needs are at best short-sighted and at worst run the risk of diluting the true skill of manual testing. Like I've said before, a healthy mix of both manual and automated testing can be fairly powerful when done right.

Depending on the particular tester's background, they may have had prior experience of writing code. This would (I guess) make the transition over to writing automated tests that bit easier, since from what I've seen automated tests usually require the ability to write code. Moreover, in addition to writing the code, there needs to be an appreciation of whether the code is doing what you want it to do. Is it actually testing whatever it is you're wishing to test? The word 'assertion' is often used to describe this aspect of behaviour. Plus, has the automated test been written as well as it could be? Will it run fast? Will it give reliable results? Will you trust those results?
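To make the assertion point a little more concrete, here's a minimal sketch of an automated check (I've used Python with a pytest-style test purely for illustration; the function under test and its values are made up for this example):

    # A minimal illustration of an automated check with an assertion.
    # The function under test and the expected value are invented for this sketch.

    def calculate_basket_total(prices, discount=0.0):
        """Hypothetical function under test: sum the prices and apply a discount."""
        return round(sum(prices) * (1 - discount), 2)

    def test_basket_total_applies_ten_percent_discount():
        # Arrange: the data we intend to exercise
        prices = [10.00, 20.00, 30.00]

        # Act: run the behaviour we care about
        total = calculate_basket_total(prices, discount=0.10)

        # Assert: this line is the bit that actually 'tests' something.
        # Without it, the test would pass even if the code were wrong.
        assert total == 54.00

The syntax matters far less than the question that assert line answers: is the test actually checking the behaviour you care about, or just running the code and calling it a pass?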

But hang on a tick. What if your experienced, highly skilled, highly motivated, love-what-they-do, defect-finding geniuses of testers don't come from a background ridden with setting variables, writing methods and asserting x, y and z? Are these (and I hate to christen them) 'manual testers' now dinosaurs roaming the earth in search of coffee? I think not.

Companies can come to many realisations. Some decide that 'wow', this automation testing business is more trouble than it's worth, and back out of it completely. Others may come to realise they initially went with the wrong tooling or the wrong strategy. Nothing wrong with that I guess, though it could become a costly mistake. But here's the thing: let's say you have a bunch of great 'old fashioned' and 'wise' manual testers who really know their onions. You also have equally great developers. You may have the budget to recruit a Unicorn-in-Test to come and miraculously implement all your automation needs (*your sarcasm radar should be bleeping by now*), but sooner or later there'll be a stumbling block, owing to your manual testers being unable to write automated tests, read and understand them, or maintain and trust them.

So what do you do? I've yet to read a definitive answer to this dilemma online. Some testers may have a genuine desire to learn how to code – but which language do they learn, do they learn in work time, how will they learn, who will support them, etc.? Some may have dabbled in writing code and want to learn more. Others might not wish to re-train to become a developer. Some might feel pressured to learn but have no idea where to start. Maybe it would be better to have your manual testers pair with developers and create the automated tests together. The developer can do what they do best – write code – whilst the tester offers an insight into what needs to be tested, keeping a close eye on coverage and regularly assessing risk.

It's a tough one. I get inundated with recruiters contacting me, and several have suggested companies ask for heaven and earth without really thinking about it. If a job description comes across as overly automation-centric then they may be shooting themselves in the foot, inadvertently losing out on hiring truly great testers who are likely to shy away for fear of not ticking all the many (seemingly never-ending) desired-skills checkboxes.

Ultimately, it should be a team effort, and companies should never really expect a single Unicorn-in-Test to come in and do everything single-handedly. Nor should companies underestimate the very real threat of focusing too much on automation at the expense of manual testing.

Personally, I hate having the word 'manual' as a prefix. 'Manual' testing is so much more than a 'manual' tester manually running a test. Long may they roam planet Earth!

The Curious Case of the Test Case

Welcome to my eleventh blog post. I'd like to turn our attention to test cases for a moment. Of late I've both read and heard many people within and outside the test community ponder whether these are in fact required. Well, this is my personal blog and I'll give you my answer straight. Do you need them? Yes. Of course you do. Why? Allow me to explain.

At the start of the year I joined a new team. As with any team, there are established 'artefacts' that routinely need to be tested. I could have said components. I could have said products. I could have said systems. Whatever. Let's generically refer to an item under test here as an artefact.

These could be legacy, well-established artefacts that continue to be maintained and improved upon. There may be new versions to be upgraded to, which, in turn, may bring new features along with them. If you're lucky the shiny new version may even address a few niggling long-standing bugs that have remained open for longer than originally anticipated.

Equally, you might be asked to test something completely different. Something entirely new. Something you've never clapped eyes on before.

Now here's the rub. As a new team member there were several things I hadn't clapped eyes on before. Whether something had existed for a while or not, it may as well have been brand new to me – since, as far as I was concerned, it was. What is it? What does it do? How do we go about verifying its behaviour? Are there any pre-requisites as far as the test environment or test data, for example, are concerned? I had lots of questions.

In fairness, the team have been great. Very supportive, and parachuting in has given rise to plenty of verbal communication. However, we have a job to do. There's work to be delivered. Time is always of the essence.

Thankfully, when it comes around to testing several of these artefacts, particularly when performing regression testing (since many of these have been around well before my arrival), I have the benefit of being able to reference their respective test suite(s) within our Test Management tool. This has proven to be an invaluable resource. These tests have allowed me to become familiar with expected behaviours for a variety of scenarios and have provided a practical opportunity to exercise these tests with minimal support. For want of a better turn of phrase, they've enabled me to hit the ground running.

Before we go any further, I am not saying you must always have a test case with definitive steps to follow. I've found a successful test strategy employs a proverbial smorgasbord of different elements. There has been many a time I've found a bug that was never conceived at the requirement gathering stage nor found as a direct result of running a specific test case. That said, I've also identified many bugs as a direct consequence of executing a set of test cases – I often find these either fail outright, or you find something out of the ordinary along the way. The test happened to instruct you to perform x, y and z, which as it happens passes, but you also found something unexpected happens if you slightly modify the steps. In which case, you might be the first to realise this, and you might choose to update the test case so it would always stand to be caught by others running the same test again in future.

I mentioned 'others' there. By others I mean your fellow compadres, comrades, colleagues, team members. If we're each left to our own devices to regression test an artefact in a purely exploratory manner, there's a chance the test approach may fall foul of inconsistent test coverage. Somebody might simply forget to regression test a particular scenario or aspect of functionality. For my money, if you are at least executing the same set of test cases then this goes a long way towards mitigating any such gaps in your coverage. That said, exploratory testing is very powerful. You or somebody else may find something worth fixing as a direct consequence of not following any pre-defined steps. In my experience you need both.

I'll be coming on to granularity next, but just to recap – I would recommend you look to maintain a test suite, but this must not blind you into thinking all you have to do is run those tests and bingo. Testing, as we all know, is an art form. A skilled tester will always be on the lookout for other scenarios to explore along the way as part of their test execution cycle. The balancing act is ensuring you verify the high-value stuff first and foremost. Don't waste too much time disappearing down a rabbit hole trying to break something – and supposing you do, there's a real chance the steps to reproduce are so far out there on the edge of the universe that nobody cares.

So granularity then. I've opened some test cases, looked at the convoluted pre-requisites, glanced at the one billion steps, the way they're written, the language used, etc, and sighed for about ten minutes. Not a good sign. Test cases ought to be easy to digest. Easy to understand. Preferably easy to run, but that's always in the eye of the beholder (only easy if you know how, though a well-written test case should make it easy to learn from). Then I've had test cases at the other end of the spectrum that literally contain one step, or duplicate something you already did in the previous test case. It can be infuriating. So if you are not careful you could end up with a test suite containing a few enormous test cases which take forever to plough through, or one with hundreds of tiny 'baby' test cases which only serve to inflate the number of individual tests to be run and throw up challenges for how to maintain them.

Stop and think before putting your test suite together. Could you consolidate a few tests where it makes sense to do so? Remember you can still do this and test exactly the same sorts of things that would normally be executed individually, but you'd stand to verify them in a much more efficient manner. This in turn makes it easier to manage from a maintenance perspective, e.g. you may only need to edit one or two test cases as opposed to several. Care needs to be taken not to end up with too much in a single test. Use your judgement and common sense when deciding.

Personally, I like using bullet points. These have a nice way of making it clear what to do without feeling like you are shackled to them. You should still have that sense of freedom to explore further. Add notes of interest. Any 'gotchas' to be on the lookout for. Nothing must detract from the whole point of running the test in the first place. It must verify expected behaviour and flush out as many issues as it possibly can. Some of these might require fixing. Some might not. Surface the information. Make your recommendations known.
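If it helps to picture what I mean, here's a rough sketch of that kind of bullet-point test case expressed as a simple Python structure – the artefact, steps and gotchas are all invented for illustration, and the layout is just one of many you could choose:

    # A hypothetical, bullet-style test case captured as a plain Python structure.
    # Everything here (artefact, steps, gotchas) is invented for illustration.

    test_case = {
        "title": "Upgrade from previous version keeps saved user data",
        "pre_requisites": [
            "Previous build installed",
            "At least one saved record exists",
        ],
        "steps": [  # bullet points, not a rigid script
            "Install the new build over the top (do not uninstall first)",
            "Launch the app and open the previously saved record",
            "Spot-check that settings/preferences survived the upgrade",
        ],
        "gotchas": [
            "A previous release silently reset preferences to defaults",
        ],
        "freedom_to_explore": "Vary the data saved before upgrading if time allows",
    }

    if __name__ == "__main__":
        # Print it back in a readable, bullet-ish form
        print(test_case["title"])
        for section in ("pre_requisites", "steps", "gotchas"):
            print(f"{section}:")
            for item in test_case[section]:
                print(f"  • {item}")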

Lastly, I find test cases are a good way of radiating to the wider team what we intend to cover as part of our regression test effort. It's proven to be of enormous interest to the development team to have visibility of this, and to offer their support when things invariably change and we are all looking to mitigate any perceived risk before release.

Curiosity Killed the Tester

Hello again. I’ve been unusually busy these past few months plus I’ve been waiting for inspiration to strike and boy oh boy it certainly did.

It won’t be the first time you’ll have read something about this but I wanted to share my take on it for you – intrigued? Read on.

So. Testing eh? We love it. I know fellow testers love it. We understand it. We continue to learn about it. We strive to improve our approaches. We read books. We attend seminars and conferences and such like. We want to make the world a better place.

These days – and this is a fairly broad interpretation and heavily context-dependent – a concrete set of requirements, a clearly defined set of acceptance criteria, and a wholly unambiguous, complete understanding of exactly what to expect as far as behaviour is concerned (that's software behaviour) are the stuff dreams are made of.

While this may be true – and even if it isn't – testers should never, ever be afraid of asking questions. Thinking back to my days as a trainee, it was always drummed into us that 'testers never assume'. This is not an excuse to flick your common-sense switch off, however.

With the Agile manifesto ringing loudly in everybody's ears nowadays, individuals and interactions are the order of the day. Talk to people. Talk to yourself (don't go mad though). Get up from that coffee-cup-strewn desk of yours and wander over to speak to a colleague. Ask that question. Collaborate. Those who remember Bob Hoskins will recall his BT advert catchphrase "It's good to talk."

I like talking. I was always told off at school for talking too much, in fact. I soon learned I needed to concentrate whilst working, so I left my talking for later. As a tester (fast forward many years) I still need to concentrate when working on something. As much as I'd love to talk about last night's "I'm a Celebrity Get Me Out Of Here", I have a job to do and invariably a deadline to hit.

However, as we were saying earlier, testers need to ask those questions. Sending an e-mail can sometimes be necessary, but when the person you need to ask is sat in front of you (or next to you, or behind you), go and build those relationships and speak to them. Testing is a service after all. You'd be demonstrating to colleagues how passionate you feel about the work you do. Though I have to say not every team member understands Testing. I guess I don't know everything about Business Analysis, Development, Product Management, etc, though when somebody from a different discipline asks me a question I respectfully and politely try to provide an answer. I want to help. We're all trying to deliver, aren't we? I understand that they are a professional at what they do, the same as I am a professional at what I do. There's a mutual respect (or there should be). Moreover, it's just nice to be nice – isn't it?

Unfortunately, not everybody in a team will be willing to just be nice (maybe they're having a bad day), or to appreciate that your question is in relation to the work you are doing (they are probably oblivious to why it's relevant, or just fail to show any willingness to understand why you are asking). They may decide to just give you the bum's rush and send you on your way with a flea in your ear.

Bum's rush? Flea in your ear? To give you the brush-off. To not be helpful or understanding. To do or say anything to get rid of you. This flies in the face of team working. Not only will this make you think twice about approaching them in the future, it sows the seeds for a total communication breakdown and a lack of team spirit.

A classic example for you. You're in the middle of test execution. Maybe days, or even weeks, down the line. Then all of a sudden you notice something different. You haven't seen this behaviour before. Why is this happening? There could be a host of reasons. So, like any good Vulcan, you apply logic. Maybe something has changed within the code base? Let's face it, how often have you had to deal with 'changes' you were not made aware of in the middle of your testing, owing to non-existent release notes? So who should you ask? Maybe the Lead Developer, but they're not around. You see the Project Manager sat in front of you. So you wander over and ask in person. Sounds like a sensible course of action.

What if the PM immediately replies with "It's not a defect!" in a less-than-friendly tone? Yet you never said it was. At this point in time you're simply trying to establish expected behaviour and determine whether or not a change has been implemented in a recent build. "Why?" "What difference does it make?" the PM may ask, failing to understand the testing implications of a potential code change that we were not made aware of. You wander back to your desk bemused.

The example above could have been so different had the PM handled it differently. The response could have been “Oh that’s odd I wasn’t aware of any changes for this. I’ll check for you.” In turn you would have thanked them for their help and continued with your investigation into other possible causes.

Testers need a thick skin. A level of resilience. It can be so frustrating.

So yes, absolutely, speak to people. Ask those questions. Just be mindful of the fact that not everybody understands or wants to understand why you are asking.

I’ll leave you with something else we learned at school…play nicely.

Mobile Defects Ring a Bell

Hello! It’s been a while. I’ve been rather busy of late but whilst I’m waiting for the next piece of code to break, I thought I’d revisit my software testing blog.

It has recently struck me that I’ve been testing within the mobile arena for the last 5 and a half years. Don’t ask me where time goes but they do say it flies when you’re having fun and I must say I continue to enjoy mobile testing.

During this time I've tested many, many wonderful (and not so wonderful) native mobile applications across iOS, Android and Fire OS. I've also tested mobile web offerings, most recently including HTML5-based games.

Invariably, a solid test approach will include testing across different devices, running different operating system versions, featuring different screen densities, different methods of connectivity (Wi-Fi, Cellular, Offline, Airplane Mode), different chipsets, navigation, looking at installation and upgrade paths, permissions, saving data, storage, security (e.g. DRM downloads), changing orientation, multi-tasking, controls, gestures, interrupt handling, performance, stability, external playback support (e.g. AirPlay, Chromecast), launching, relaunching, accessibility support, usability, layouts, statistics, gameplay, etc. The list goes on.
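When the combinations start to multiply like that, it can help to sketch out the coverage matrix rather than hold it all in your head. Here's a minimal Python sketch of the idea – the devices, operating system versions and connectivity modes below are placeholders, not a recommendation of what to cover:

    # Minimal sketch of a mobile coverage matrix; the devices, OS versions and
    # connectivity modes are placeholders, not a recommendation of what to cover.
    from itertools import product

    devices = ["iPhone 6s", "iPad 2", "Nexus 5", "Fire HD 8"]
    os_versions = ["iOS 9.3.2", "Android 6.0", "Fire OS 5"]
    connectivity = ["Wi-Fi", "Cellular", "Offline", "Airplane Mode"]

    def is_valid(device, os_name):
        """Rule-of-thumb filter: not every device/OS pairing makes sense."""
        if device.startswith(("iPhone", "iPad")):
            return os_name.startswith("iOS")
        if device.startswith("Fire"):
            return os_name.startswith("Fire")
        return os_name.startswith("Android")

    matrix = [
        (device, os_name, conn)
        for device, os_name, conn in product(devices, os_versions, connectivity)
        if is_valid(device, os_name)
    ]

    for combo in matrix:
        print(" / ".join(combo))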

Despite all these different factors to consider when performing your mobile testing, a mobile-related defect can be distilled into a common set of criteria. I often champion the use of a defect template in order to promote ease of use and consistency. There is nothing worse, when triaging defects, than finding a whole swathe of information (often crucial information) missing. When people start to assume which build version was used, or confuse the steps to reproduce, it generally causes lots of misunderstandings (Chinese whispers – not sure if that's politically correct or not) as well as ambiguity. So it's better to capture all the relevant information, carefully balancing the need to be detailed and informative with being as concise as possible.

So let’s try to list some of the useful fields to include within a mobile related defect.

  • Title – I like to follow some sort of naming convention. For example, if it's an issue concerning a crash I'll prefix the title with 'Crash – …'
  • Label – This depends on your defect management tool weapon of choice, but classifying defects by prefixing the title and/or adding a meaningful label may help when revisiting the particulars in future or when reporting. Again, with a crash you might label them with 'app-crash' and these can be reported on for stakeholders.
  • Device – Self explanatory but record which model & generation of model you used e.g. iPad 2
  • Operating System – Record the name & version of O/S you used e.g. iOS 9.3.2
  • Build Version – Record the version of build you have tested. This is vital.
  • Connectivity – Wi-Fi SSID? Cellular connection? Offline? Airplane Mode enabled? Transition from one to the other?
  • Steps to reproduce – I like to sequentially number each step. The reader should be able to understand exactly what you did (that’s assuming you even know what you did!).
  • Expected Result – Try to avoid saying it should do this, etc. Say it WILL do something. Expected behaviour should be clearly defined and understood.
  • Actual Result – Your big chance to explain what the hell happened. Try to include whether you could recover functionality and how or whether you crashed and burned.
  • Frequency – Does this happen every single time? Or is it once in a blue moon? Indicating how prevalent something is often has an influence on the likelihood of developer time being allocated to investigate and/or fix a problem.
  • Live Y/N – Another useful one. Is this already in live? If yes, then has the audience been making any complaints about this? If not then it’s likely to be a lesser priority to fix if at all.
  • Screenshots – OMG. A picture can say a thousand words. If you have the ability to provide exhibit A and evidence that something has screwed up, then it will not only serve to prove you haven't lost your mind and aren't talking cobblers, but it often cements the nature of a problem (particularly with stakeholders) and can influence decision making e.g. That looks terrible. We need to fix this.
  • Video – Sometimes it isn’t always possible to illustrate something by way of a single screenshot e.g. Video playback is juddering, etc. In these instances consider providing a short video (remember to clean your fingernails beforehand).

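If your tool of choice can't enforce these as mandatory fields, a lightweight template shared with the team still helps. Here's a rough sketch of the fields above expressed in Python – the field names and example values are illustrative only and not tied to any particular tracking tool:

    # Rough sketch of the defect fields above as a reusable template; the field
    # names and example values are illustrative, not tied to any particular tool.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class MobileDefect:
        title: str                     # e.g. prefixed "Crash - ..."
        labels: List[str]              # e.g. ["app-crash"]
        device: str                    # e.g. "iPad 2"
        operating_system: str          # e.g. "iOS 9.3.2"
        build_version: str             # vital: which build was under test
        connectivity: str              # Wi-Fi / Cellular / Offline / Airplane Mode
        steps_to_reproduce: List[str]  # numbered, in order
        expected_result: str           # what it WILL do
        actual_result: str             # what actually happened
        frequency: str                 # e.g. "every time", "1 in 5"
        in_live: bool                  # is the problem already out there in live?
        screenshots: List[str] = field(default_factory=list)  # paths or links
        video: Optional[str] = None    # link to a short capture, if needed

    # Example usage with made-up details:
    defect = MobileDefect(
        title="Crash - app closes when rotating the settings screen",
        labels=["app-crash"],
        device="iPad 2",
        operating_system="iOS 9.3.2",
        build_version="2.4.1 (build 318)",
        connectivity="Wi-Fi",
        steps_to_reproduce=[
            "1. Launch the app",
            "2. Open Settings",
            "3. Rotate the device to landscape",
        ],
        expected_result="The settings screen will rotate and remain usable",
        actual_result="The app crashes to the home screen",
        frequency="Every time",
        in_live=False,
    )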
Right, well the above should serve you and your team well. It's not exhaustive, obviously, but if you start to routinely provide the above each time you capture a defect then it will pay dividends. Trust me. We're all human (unless you're some cyborg tester sent from the future), which means you can't remember everything. Not everybody can be co-located together, so logging these on a shared system helps visibility. Moreover, the number of times somebody has come to me asking questions about an issue I raised weeks, if not months, ago is reason alone to have something to refer to, or else you'll find you simply cannot remember all the relevant details….now….erm….what was I saying?

Back to the Bug

I'm going for as many blog posts as I can muster whilst I have some time to spare. My idea for this post is nothing new, but it's one I can relate to. I'd love to have been Marty McFly (or maybe Biff Tannen sometimes, as he had a cool car), but testing and time travel can share some commonalities. For example, have you ever felt like you've had the same conversation amongst your team, or maybe the same conversation across different software development teams? Or perhaps you've seen a defect whose symptoms are scarily similar to one you have encountered before – can you remember what the solution was? What was the cause? Did you spot this again by sheer coincidence, or have you put preventative measures in place to ensure a particularly nasty-looking bug would be identified should it ever come back to life?

Yes, that's right ladies and gerbils, welcome to the world of 'Defect Prevention'. As much as we like seeing defects fixed and patting one another on the back for a job well done after release, have you taken some time to reflect on how these came to..ahem..pass (or fail would be more appropriate in this context)? It's not a witch hunt though. Nobody should be pointing fingers here. This is about discovering what led to the error being introduced and seeing if there are ways and means of preventing such circumstances from recurring in future.

Maybe it was human error. Somebody somewhere screwed up. Yep, it happens. Maybe it was genuinely something unforeseen. Again, this happens. However, it is important to try to learn from these situations as best you can. For example, perhaps you'd bitten off more than you could chew when sprint planning? Maybe you hadn't accounted for resource being spread so thinly due to 'noise', and what little resource you had available was coding away like headless chickens, mistakes were made, or there hadn't been time to code review ahead of test. So perhaps think about ways of shielding your resource from such 'noise' and put measures in place to ensure time is allotted for code reviews. Or maybe you need to assess levels of risk more closely in future when tampering with the code base, particularly when there's been a lot of refactoring going on in parallel to other 'changes'. Maybe stripping out oodles of code mid-sprint wasn't such a good idea after all, and/or implementing a shed load of significant new features in one go wasn't the best approach. Could you have done this at a more suitable time or at a more manageable velocity? Sometimes unforeseen errors can happen when there are changes going on elsewhere (within the mystical back-end, or higher up in the snowy regions of the stack) that you had not been made aware of. So communication problems (or just a general lack of communication) can see the wheels come off your product or system. Could you perhaps better manage, or certainly look to improve, communication with your external dependencies in future? The list goes on..and on..and on.

I'm a big fan of continuous improvement. No, not because this often overused (and underrated) term just sounds good either. It's just common sense. Retrospectives are the perfect time for development teams to come together and share meaningful feedback. No mud-slinging. No getting carried away with the jumbo pens and colourful post-it notes. Simply focusing on learning from your previous experiences, good and bad, is just sensible. Have those conversations with one another in a positive (hey, we're all just trying to do a job here and work as part of a team) kind of way. This is much more likely to effect positive change. The flip side is to unwittingly encounter the same issues again and again. Around and around we go.

Back to the Bug though. That is the title of my blog post, I guess. Say you've come across a real doozy. It turns out to make quite the impact. It's a real big deal. One the whole team is going to have to deal with. Heads are being scratched, but there is light at the end of the tunnel. 'Let's try to make sure this sort of thing doesn't bite us again in future' should be something you and the rest of the team are thinking. Should this way of thinking only apply to the Showstoppers though? Well, I would like to think that once you get into the habit of not re-inventing the wheel every time and not feeling like you are firefighting to get across the line, it should become second nature. Often, something fairly quick and easy is all that is needed to save you having to suffer the same fate again.

I’m a realist though. You’re never going to get it right every time. Nor can you hand on heart say something won’t happen again. However, if it does and you have the solution readily available, then maybe it won’t bite you quite as hard this time around.

Remember that doozy of a bug we spoke about? Suppose that was only spotted at the eleventh hour. Could you maybe run a test to try to capture this particular problem earlier in future? You can try, I guess. At least you've tried. Yoda would say "do or do not, there is no try", but we're not going to mix Star Wars metaphors with Back to the Future, are we? Scared? You will be..you will be.

A Bug’s Life

My seventh blog post! I don’t think I’ve written about bugs enough. In my inaugural blog post I touched upon how these can generally be perceived but not about what makes a good bug report (if there is such a thing) and the typical life cycle a bug report finds itself subjected to once it has been raised.

So first things first, what is a bug? What is a defect? What is an issue? In my experience, terminology differs across teams and even across different organisations. Oh yes, we could babble on about definitions, but if you are going to stick with your team's common vernacular, is there any point?

The get-out-of-jail answer is: well, why not use whatever language works best for the team? Hmm. There's a part of me that wants to champion best practice and, as politely and as carefully as possible, avoid bruising egos by subtly influencing (dare I even say, educating) others in the correct use of terminology. Let's not get too sidetracked here. However, remember you are the tester. You are the one who has been on the training courses, read the books, and got the t-shirt. A lot of people outside of the test team 'think' they know about testing or understand everything there is to know about testing, but are often quite misinformed. They could have been a PM for donkey's years and think they know it all – not always. I recall a developer once exclaiming that they could do a tester's job. Oh really. The irony was they might as well have tried, since their coding abilities left a lot to be desired (think lots of bugs). Anyway, I digress.

A defect, simply put, is a problem which prevents a particular aspect of the software from performing a particular function. A defect can be considered as something which deviates from the expected result and/or the original requirements. I find that some teams would rather use the word 'bug' than 'defect', or that they are used interchangeably. Strictly speaking they are two different things. Defects can be caused by a variety of things. For example, a defect could be caused by a mistake within the code. Such mistakes are referred to as 'errors' or 'bugs'. Or perhaps a defect has been caused by an 'error' in the design documentation. Whatever you call them, nobody should be looking to apportion blame. Though some developers can be rather defensive when a defect is raised. After all, we are only human. Though it's worth remembering we are also professionals, and I like to think every facet of a software development team is working together towards a common goal. In contrast to the defensive developers, I've had the pleasure of working with developers who are delighted the test team have spotted something which requires a fix before release. The upshot being the release stands a better chance of being a success and we all end up basking in the glory bestowed upon us by stakeholders. One team. A cliché maybe, but it's so true.

So something is not as it should be. What do you do? Well, as Bob Hoskins once said in an old BT commercial, "it's good to talk", so providing there's somebody around to talk to about what you've observed, think about mentioning this to the developer who worked on that particular feature. In the interests of a balanced argument, this may not always be practical, let alone possible. Again, in my experience I've been in situations where you try to demo something or speak to a developer about something you've seen and they bite your head off because they're in the middle of something or are heading off to a meeting shortly. Or they see you coming and they hide under their desk. No matter how hard you try to collaborate or talk things through, there will be instances where you'll need to record something you've seen somewhere. You can't remember everything in your head (remember, we're only human), so making a note of this in a notepad, on a whiteboard or within your defect management system (a.k.a. bug tracking tool) is inevitable. Lest we forget, having these recorded will pay dividends should you need to reference them again in future, or rely upon them when something goes wrong in live and you have evidence to show it was identified during testing but was not fixed ahead of release.

What sort of information do you need to capture in your defect/bug report? Well, I always start with a meaningful, descriptive title – something concise if possible. In terms of contents, think about including relevant information such as the following (where appropriate): which build version you are logging the defect against; what piece of hardware you were using; which operating system and version; which environment; what data; what steps you followed; network connectivity information (e.g. Wi-Fi, Cellular, Broadband, Offline, etc); the expected result; the actual result; whether it is reproducible; if it is not 100% reproducible, how frequently it happens; whether the issue is currently affecting the live environment; screenshots; crash logs; video; any recommendations you may have; links to related defects; etc. Listen to developers and try to provide as much information as possible to help identify a cause and fathom how to fix the defect. Depending on your weapon of choice, you may be able to force certain information to be captured using mandatory fields.

I need to mention priority and severity. It is a good idea to rate defects for both of these respectively. However, I would argue that it should be the tester who indicates how severe the defect is, while priority should be agreed with those responsible for the product or system. It could well be the case that a defect is not going to be prioritised for fixing at all. Which is fine (although at times disappointing), but at least you have done your job by informing people of what could happen. If the powers that be decide not to spend time and effort fixing the defect then so be it. Sometimes you just have to suck it up. At least you have a record of it though, right? Right. By testing and adequately reporting your results, stakeholders can make informed judgements and decisions. Priority levels can typically be indicated on a numerical scale. For example, a P1 would be the highest priority and maybe a P4 the lowest in some organisations. So using this as an example, a P1 would be considered a 'Showstopper', a P2 a 'High', a P3 a 'Normal', and a P4 a 'Low' priority defect. Then you'd have another scale for severity levels (you get the idea).

So you have a bunch of open defects. Now what? Well, you're already actively communicating these to the team and have them visible on your team whiteboard or within your defect management system (a.k.a. bug tracking tool). Great. However, as another old saying goes, 'you can take a horse to water but you can't make it drink'. You need to ensure open defects are being 'managed'. There is sometimes a danger of allowing these to pile up, and I personally like to keep on top of defects before they start to get on top of you. One such method is scheduling a 'defect/bug triage' session. These don't have to occur on a daily basis, but use your judgement and get people together as and when you feel it is necessary. Maybe have representatives from different areas of the team present, e.g. Product, Dev, UX&D. Start by ordering your open defects by severity, go through each one (dealing first and foremost with those which do not already have a priority), and strive to gain agreement as to whether they should remain open or not. If they are to remain open, assign a priority and agree the next steps. Maybe something needs testing further to aid decision making, or maybe there is enough detail for a developer to investigate and hopefully go away and fix it, etc.
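To make that ordering step concrete, here's a tiny Python sketch of pulling out the un-prioritised defects and sorting them by severity ahead of a triage session – the severity labels and the defects themselves are invented for illustration, and your own tool will have its own fields and export format:

    # Tiny sketch of preparing a triage list; severity labels and the defects
    # themselves are invented, and your tracking tool will have its own fields.
    SEVERITY_ORDER = {"Showstopper": 0, "High": 1, "Normal": 2, "Low": 3}

    open_defects = [
        {"id": "DEF-101", "severity": "Normal", "priority": None},
        {"id": "DEF-102", "severity": "Showstopper", "priority": None},
        {"id": "DEF-103", "severity": "High", "priority": "P2"},  # already prioritised
        {"id": "DEF-104", "severity": "Low", "priority": None},
    ]

    # Triage the ones without a priority first, most severe at the top.
    to_triage = sorted(
        (d for d in open_defects if d["priority"] is None),
        key=lambda d: SEVERITY_ORDER[d["severity"]],
    )

    for defect in to_triage:
        print(defect["id"], defect["severity"])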

Now you should have a distilled set of open defects. They've been prioritised. You can smell the fixes in the air. Subsequently, the defects which have now been fixed should have their status set to reflect they're ready for retest. In an ideal world, the fix will be successful. Your retest has passed and the defect can now be closed. Often, though, defects will not pass retest the first time round. They get reopened and the developer needs to take another look. There have been times when, despite a concerted effort to fix a defect, the problem refuses to go away. This would be a perfect time for another triage session. Explore alternatives, for example: maybe there is a workaround for the user, or maybe a minor design change could render the problem null and void. Or perhaps the effort to fix something far outweighs the value, and so you may end up agreeing to mark it as a 'Will Not Fix'. This happens from time to time. At least you (or rather the team) tried, eh?

Defects can be valid and sometimes they can be invalid. Maybe the tester has misunderstood the expected result or maybe the requirements have changed but nobody thought to tell the tester who raised the defect. This can happen. In which case these defects are marked as ‘Invalid’. No big deal.

So a defect can be open, it can be invalid, it can be fixed and ready to retest, it can be closed, it can be reopened, or it can simply be a will not fix.

What about blocking defects, I hear you cry. There can be defects which prevent any further testing activity from taking place and are then considered to be blocking the test team. You may also find a defect fix is itself blocked and you have to wait for something else, e.g. a third-party component being updated, before it can be retested.
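For what it's worth, the lifecycle described above can be boiled down to a handful of states and the moves allowed between them. Here's a rough Python sketch – the state names mirror the ones used in this post, and the transitions are my own simplification rather than any formal standard:

    # A simplified defect lifecycle; the state names mirror this post and the
    # allowed transitions are my own rough interpretation, not a formal standard.
    LIFECYCLE = {
        "Open": ["Fixed - Ready to Retest", "Invalid", "Will Not Fix", "Blocked"],
        "Blocked": ["Open"],  # e.g. waiting on a third-party component
        "Fixed - Ready to Retest": ["Closed", "Reopened"],  # retest passes or fails
        "Reopened": ["Fixed - Ready to Retest", "Will Not Fix"],
        "Invalid": [],        # terminal
        "Will Not Fix": [],   # terminal
        "Closed": [],         # terminal
    }

    def can_move(current, target):
        """Check whether a defect may move from one status to another."""
        return target in LIFECYCLE.get(current, [])

    # Example: a fix that fails retest goes back around the loop.
    assert can_move("Fixed - Ready to Retest", "Reopened")
    assert not can_move("Closed", "Open")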

How old is the defect? I've seen instances of defect backlogs being allowed to accumulate lots of ageing open defects. This bothers my OCD somewhat. I like things to be kept clean and tidy. I would argue anything older than, say, 6 months (just a ballpark figure) should be closed off, since you need to ask yourself whether you're ever realistically going to fix them.

I think of defects as a form of currency. Granted, I probably wouldn’t be able to buy anything with them but they are valuable nonetheless. By surfacing these, informing others that they exist, by fixing them and slowly but surely building confidence in your product or system – can you really put a price on that?

The Old Regression Two Step

Welcome to my sixth blog post. If you’ve got this far and have been reading my previous blogs with intent, I want to thank you for sticking with me.

Slow, slow, quick, quick, slow. Which those folk dancers among us will find all too familiar. Not to be confused with the hardcore dance variant, you understand.

Testers, or better still folk-dancing testers, will undoubtedly have experienced working within several agile sprints. Innovating incrementally. Delivering working software to the audience on a regular basis. Depending on your 'vertical slicing' (sigh – you know – using your invisible cake slicer), some releases will contain a seemingly never-ending series of sprints, particularly when you are constantly rolling tickets over into the next sprint. Yet the word 'sprint' makes me think 'fast', or at least a heightened 'pace'. 0-60mph in 3.4 secs. Daley Thompson realising he's left the gas on. You get the idea.

Some sprints can run like clockwork. Others, not so much. Some can be more arduous (sigh – or 'challenging' if you want to put a positive spin on it) and feel painfully slow. You know what I'm talking about – right? So we testers are anxiously wanting to forge ahead and get on with testing new and exciting features, etc, but we're being blocked by something. Maybe a key dependency is still outstanding, or somebody somewhere is proverbially dragging their heels and we're waiting on them to finish something before we can proceed. Or we can test aspects of 'x', but 'y' and 'z' are still being developed. Urgh. So by the time we get around to testing 'y' and 'z', does this mean our test effort for 'x' will have been invalidated and we need to test it again? Or maybe you tested 'x' when you were told it was ready to test, only to find out later (usually at the eleventh hour) that the requirements changed or a last-minute UI change was made and it will need testing again. Around and around we go.

I've never been to a dance class myself but 'slow, slow, quick, quick, slow' does remind me of testing software. You have a few sprints which take time and effort with one thing and another…slow, slow…then all of a sudden there's a big push for testing to be completed, including a comprehensive round of regression at the end, and we're expected to find every single bug in there in double-quick time before a deadline which has got to be hit no matter what…quick, quick…so we release into live and we're back down to slow again. And breathe. Before we do it all over again. Sound familiar?

I'm not a control freak per se, but testing as an activity needs careful planning and control. You cannot predict the future, and so, with the best will in the world, you are going to need to implement measures of control throughout the entire software development process. Think of a ship at sea when a storm hits. You're still going to need to ensure the ship maintains its course and heading as much as possible and ultimately reaches the desired destination in a timely manner. Whatever you do, don't go under. That would be bad.

Moreover, planning need not be a document-heavy or time-consuming process, nor should it be. Being 'agile' does not mean you can simply dispense with test planning. That would be just silly. Take the time to set expectations with the rest of the team. Ensure you have made it clear what needs to be tested and why, outline your dependencies, give an indication of how much resource you will need to complete this test effort, and estimate how long all of this is going to take.

Influencing skills are very important to effect positive change, or even just to gain 'buy-in' from the rest of the team. The more you open up the mystery that is the world of software testing, the more the team will understand and empathise with your situation. You may even start to hear team members saying "well, we still need to do this and we also need to think about how this affects the test team" or "we'll need to ensure this is fully done ahead of starting test execution so they will be testing what we intend to ship", etc. Pretty amazing, eh?

If all of this helps to avoid the quick, quick mad rush at the end and avoids the regression testing window being squeezed then surely this is a good thing. It might not look as good on the dance floor but the test manager will hopefully applaud your choreography. Encore! Encore!