Put that in your Pipeline and Test it baby

Hello there. It’s been too long, hasn’t it?

Nearly everyone seems to want shorter release cycles.

In a somewhat rare flash of inspiration, I wanted to blog about pipelines. Specifically, delivery and deployment pipelines. I guess you may only have a single pipeline, or perhaps multiple pipelines running in parallel.

You’ve more than likely heard of phrases such as ‘Continuous Integration’, ‘Continuous Delivery’, and ‘Continuous Deployment’. Wait, aren’t delivery/deployment the same thing? Not necessarily.

If I cast my mind back to a previous blog post, Walk Like an Egyptian Agile Tester, I ended on the notion of pushing quality all the way through development. This alludes to having different suites of tests triggered to run throughout the different stages of the development process. Think of a build as something of an artefact which you, I guess, nurture (a bit like a baby) as it steadily develops and grows until such time as it goes out into the big wide world. If it’s a particularly well-behaved and healthy baby then this would be a pain-free and seamless process. However, in my experience babies tend to need lots of love and attention.

Continuous Delivery and Continuous Deployment share many traits with one key difference, usually found towards the end of the development process. Think of the push versus pull model. In a Continuous Delivery world, yes, we want code to always be in a deployable state, but the final trigger is ultimately your decision. So you (as in a person) will decide when to PULL the artefact into production. However, in a Continuous Deployment world the final trigger is a PUSH. It is auto-magically deployed into production without any human intervention. In a well-established pipeline this would be perfectly normal and nothing to write home about. Almost boring, as it happens so often. Just another day.
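
To make the push/pull distinction a little more concrete, here’s a minimal Python sketch (every name in it is hypothetical, not any real deployment API):

```python
# Hypothetical sketch: Continuous Delivery leaves the final trigger to a human
# (PULL), while Continuous Deployment fires it automatically (PUSH).

def deploy_to_production(artefact: str) -> None:
    print(f"Deploying {artefact} to production")  # stand-in for a real deploy

def release(artefact: str, checks_passed: bool, continuous_deployment: bool) -> None:
    if not checks_passed:
        raise RuntimeError(f"{artefact} is not in a deployable state")
    if continuous_deployment:
        deploy_to_production(artefact)  # PUSH: no human intervention
    else:
        print(f"{artefact} is deployable; a human decides when to PULL it in")

release("build-42", checks_passed=True, continuous_deployment=False)
```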

So your pipeline has an artefact which starts life in its infancy. Born from your beloved source code, it’s initially subjected to lots of unit tests following a commit. The commit itself could kick-start the process of compiling, running tests and performing code analysis. If there’s something wrong, fix it immediately (we don’t want a crying baby, do we?). If everything is passing, great! Your artefact can then be moved into your artefact repository, ready for the next stage of the process e.g. integration testing.
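
As a rough sketch of that commit stage in Python (the make targets here are assumptions; substitute whatever your build actually uses):

```python
# Hedged sketch of a commit stage: compile, unit test, analyse, then publish
# the artefact to the repository. Fails fast so problems get fixed immediately.
import subprocess

def run(step: str, cmd: list[str]) -> None:
    print(f"--- {step} ---")
    subprocess.run(cmd, check=True)  # raises on failure: the crying baby

def commit_stage() -> None:
    run("compile", ["make", "build"])
    run("unit tests", ["make", "test"])
    run("static analysis", ["make", "lint"])
    run("publish artefact", ["make", "publish"])  # into the artefact repository

if __name__ == "__main__":
    commit_stage()
```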

I should say that probably no two pipelines are the same (well, they could be, I suppose) but the point is it’s down to you and your team to agree on what this looks like.

Your artefact could go through any number of pipeline stages, being subjected to different levels of testing, passing these and being promoted (automatically pushed or manually pulled) to the next, and so on. All the while edging closer to production readiness. I tend to think of these stages as feedback opportunities and your different levels of testing as proverbial safety nets, which in theory should catch issues as and when they arise. However, as your tests run in environments which more closely resemble live, they tend to take longer to run, which means the rate of feedback slowly decreases the closer you get to release. To put this another way, unit tests should run fast (or as fast as possible), giving rise to fast feedback, while end-to-end tests usually take longer (much longer) to execute and so you’ll have to wait longer to find out if everything is behaving as it should.
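
Here’s one way of picturing that promotion-through-stages idea in Python. A sketch only: the stage names, feedback timings and stub test suites below are all made up for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    suite: Callable[[str], bool]  # returns True if the test suite passes
    feedback: str                 # rough, made-up feedback times

def promote(artefact: str, stages: list[Stage]) -> bool:
    for stage in stages:
        print(f"{stage.name}: feedback in {stage.feedback}")
        if not stage.suite(artefact):
            print(f"  safety net caught an issue at '{stage.name}'")
            return False  # stop promotion; fix before trying again
    print(f"{artefact} is production ready")
    return True

pipeline = [
    Stage("unit tests", lambda a: True, "seconds to minutes"),
    Stage("integration tests", lambda a: True, "minutes"),
    Stage("end-to-end tests", lambda a: True, "tens of minutes or more"),
]
promote("build-42", pipeline)
```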

Other considerations include version control (very important) and environment setup (sometimes easier said than done). CI encourages integrating code early and often, so you’re going to need a handle on versioning your code as well as versioning the CI machinery you rely upon to run it.

In an ideal situation promoted artefacts trigger a medley of automated tasks to soothe some of this pain. Maybe you have a bunch of different environments and wish to quickly spin something up and have it configured to run your tests. Programmatically configuring machines to run tests upon is an aspect of treating infrastructure as code. These can be built from the same version-controlled source code, which enforces consistency. There are lots of tools in this space e.g. Chef, Puppet, etc. These help with configuration, provisioning and monitoring.
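
In the spirit of infrastructure as code, a toy Python sketch (the schema and values below are entirely made up; real tools like Chef or Puppet have their own formats):

```python
# The test environment is described in version-controlled code, so every
# environment built from it comes out the same.
TEST_ENV = {
    "base_image": "ubuntu-22.04",
    "packages": ["openjdk-17", "postgresql-15"],
    "services": {"app": {"port": 8080}, "db": {"port": 5432}},
}

def provision(env: dict) -> None:
    # A real provisioning tool would apply this; here we only describe it.
    print(f"Building from {env['base_image']}")
    for pkg in env["packages"]:
        print(f"  install {pkg}")
    for name, svc in env["services"].items():
        print(f"  start {name} on port {svc['port']}")

provision(TEST_ENV)
```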

Another benefit of having these self-service, push-button deployments is that they could also offer a means of performing a demo for team members. Another great opportunity to gain feedback.

Finally, it wouldn’t be one of my blogs without addressing Automation-in-Testing and those tests which require humans to run them (as much as I hate the term ‘manually’). The notion of a delivery pipeline doesn’t mean it is exclusively for automated tests; it can also lend itself nicely to human-based test effort. For example, you may be progressively writing automated tests and still need to run some of these ‘manually’, in which case a push mechanism might not be the best solution. Instead, pulling artefacts in may prove to be a better option at the appropriate time e.g. if an artefact is severely borked then you’d be as well ceasing manual test effort and pulling in a shiny new one with fixes (no sense flogging a dead horse).

Continuous Integration. Continuous Testing. Continuous Delivery/Deployment. Continue to have the latest working code available and made visible to everyone. This should go some way to avoiding having to throw the ‘baby’ (see what I did there) out with the bath water when things don’t go according to plan.

Walk Like an Egyptian Agile Tester

Happy New Year! It’s 2018 and I felt the urge to start the year as I mean to go on and blog about, you know, software testing.

Increasingly, for better or worse, the world is obsessed by automation. When I say the world I’m probably exaggerating. I mean, my Mum probably couldn’t give two hoots about it but there you go.

I’ve been giving some serious thought to test automation and where it may prove useful, and, when used in a timely, strategic, considered fashion, and after x hours of initial investment to set up, it can eventually become a wonderful, wonderful thing. Speaking of wonders of the world, how about those pyramids eh? The Great Pyramid at Giza is the only one of the seven wonders that is still standing today. Amazing. That must have taken a lot of effort to construct. So with that somewhat tenuous link in mind I’d like to refer you to another (albeit unofficial) wonder of the software testing world, the agile testing pyramid (probably built by Mike Cohn).

Imagine if we had a waterfall-esque type pyramid but it’s upside down, with the sharp pointy bit at the bottom. Careful! It may topple over so don’t get too close (watch those toes if you’re wearing sandals). In this topsy-turvy world we find little in the way of Unit Tests being run at the bottom (if any ~ crazy, right?). Moving upwards we have a middle tier which could perhaps include some level of automation, say to test services for example. At the top we have an unwieldy, great swathe of UI tests that need to be run (and let’s assume usually towards the end of the development phase). It’s usually at this level you start to flush out defects, and if you find too many they could, at best, lead to lots of duct tape being used to patch things up or, worse, torpedo the release altogether. Imagine those stakeholders’ (pharaohs’) faces. Argh, we’re all drowning in technical debt (sinking sand). All in all, this is a means of flushing out defects, but regrettably some may have been realised too late in the day. If only we could have uncovered some of them sooner.

Let’s build the agile testing pyramid and see how that shapes up. We’ll have the sharp pointy bit at the top this time. We need a good, solid foundation at the bottom. Let’s have a nice suite of Unit Tests here and we’ll ensure these are run against every new build generated. A great way of approaching these is to follow a Test Driven Development (TDD) style. The clue is in the name of course: start with a failing test and write the code necessary to make it pass. Refactor where appropriate and move on, ensuring each new test doesn’t break any previous tests.
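
A tiny, hedged illustration of that red-green-refactor rhythm, using Python’s built-in unittest (the function under test is obviously contrived):

```python
import unittest

def add(a: int, b: int) -> int:
    # Written only after the test below was seen to fail first (the 'red' step)
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)  # the 'green' step once add() exists

if __name__ == "__main__":
    unittest.main()
```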

As we move towards the middle tier, here we could have far more automated tests, but at an integration / story level perhaps. Some people refer to these as automated acceptance tests or integration tests. You may of course decide to test against an API directly (if there is one). I think of this middle tier as the logical layer, and if your tests are passing here, and regularly passing as part of routine regression testing, then you should have a warm fuzzy feeling inside. If they don’t, then please fix them. Fix them now!
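
For instance, a service-level check might look something like this sketch, which assumes a hypothetical HTTP service with a /health endpoint on localhost; swap in your own API:

```python
import unittest
import urllib.request

class TestServiceHealth(unittest.TestCase):
    def test_health_endpoint_responds_ok(self):
        # Hits the API directly, below the UI, so it stays fast and stable
        with urllib.request.urlopen("http://localhost:8080/health") as resp:
            self.assertEqual(resp.status, 200)

if __name__ == "__main__":
    unittest.main()
```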

Importantly, I need to remind you the middle tier sits below the UI level, which has been pushed to the sharp pointy bit at the top. Of course UI is important. It is user facing, after all, but you’d be unwise to solely focus your automation efforts here (though nobody is saying you can’t automate the UI). However, it’s often somewhat brittle, since UI changes frequently, and you may find yourself constantly picking your automated tests back up after they’ve fallen over. Maintenance could become a burden, as any changes in the UI would necessitate changes in your tests. If we were to have more automated tests running below the UI level then we’d probably find verifying the logic itself much more valuable, and arguably (hopefully) tests would be less likely to fall over for no good reason, alleviating maintenance and galvanising all-important trust in your automated test results.

Supplement your automated tests with manual exploratory testing. Sometimes you may want to hold back from writing your automated acceptance tests until the code is mature enough (or until you’ve agreed on the desired behaviour). Automation, as we know, is not infallible, so remember to include the human element to give you the confidence the software is behaving as expected. That’s not to say there’s no human element with automation, since the tests are not going to write themselves, and a responsible tester will become familiar with their level of coverage (collaborating with developers) and help guide the creation of new tests.

The test landscape has changed, people. Shifting sands, if you will. Remember, developers are better placed to write code, but as testers it’s going to be important to rally the team and reinforce the belief that defect prevention is better than cure. Try to push ‘quality’ all the way through development.

Can you program a Manual Tester?

I’ve blogged about test automation before but I never touched on exactly how this is put into action. I’ll get out of jail early and say this is all based on what I’ve seen/heard/experienced and it won’t be the same for everyone.

As I’ve said before, I was given a choice back in the day – did I want to become a developer or a tester – and it never really occurred to me that you could be both (in a manner of speaking). The elusive hybrid individual I refer to can be called all sorts of things out there in the big wide world e.g. a Developer-in-Test, an Automation Tester, QA Automation Engineer, etc. Though these folks usually only ever get involved in, or get swept up in, all things test automation, whether that’s setting up frameworks, writing tests, maintaining the environment, etc. It can quickly become all-consuming. Many companies seem to be recruiting heavily for these roles of late (let’s call them Unicorns) but you have to wonder whether their alleged needs are at best short-sighted and, at worst, run the risk of diluting the true skill of manual testing. Like I’ve said before, a healthy mix of both manual & automation can be fairly powerful when done right.

Depending on the particular tester’s background, they may have had prior experience of writing code. This would (I guess) make the transition to being able to write automated tests that bit easier, since, from what I’ve seen, automated tests often require the ability to write code. Moreover, in addition to writing code, there needs to be an appreciation for knowing whether the code is doing what you want it to do. Is it actually testing whatever it is you’re wishing to test? The word ‘assertion’ is often used to describe this aspect of behaviour. Plus, has the automated test been written as well as it could be? Will it run fast? Will it garner reliable results? Will you trust those results?
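
If you’ve never written one, an assertion is simply the line that turns code that merely runs into code that actually tests something. A minimal, made-up example:

```python
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# Without the assertion this would exercise the code but verify nothing.
result = apply_discount(100.0, 20.0)
assert result == 80.0, f"expected 80.0, got {result}"
```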

But hang on a tick. What if your experienced, highly skilled, highly motivated, love-what-they-do, defect-finding geniuses of testers don’t come from a background ridden with setting variables, writing methods and asserting x, y, and z? Are these (and I hate to christen them) ‘manual testers’ now dinosaurs roaming the earth in search of coffee? I think not.

Companies can come to many realisations. Some decide ‘wow’, this automation testing business is more trouble than it’s worth, and back out of it completely. Others may come to realise they initially went with the wrong tooling or wrong strategy. Nothing wrong with that, I guess, though it could become a costly mistake – but here’s the thing – let’s say you have a bunch of great ‘old fashioned’ and ‘wise’ manual testers who really know their onions. You also have equally great developers. You may have the budget to ‘recruit’ in a Unicorn-in-Test to come and miraculously implement all your automation needs (*your sarcasm radar should be bleeping by now*) but sooner or later there’ll be a stumbling block owing to your manual testers being unable to write, read and understand, maintain and trust automated tests.

So what do you do? I’ve yet to read a definitive answer to this dilemma online. Some may have a genuine desire to learn how to code – but which language do they learn, do they learn in work time, how will they learn, who will support them, etc.? Some may have dabbled in writing code and want to learn more. Others might not wish to re-train to become a developer. Some might feel pressurised to learn but have no idea where to start. Maybe it would be better to have your manual testers pair with developers and create the tests together. The developer can do what they do best – write code – whilst at the same time the tester can offer an insight into what needs to be tested, keeping a close eye on coverage and regularly assessing risk.

It’s a tough one. I get inundated with recruiters contacting me, and several have suggested companies ask for heaven and earth without really thinking about it. If a job description comes across as overly automation-centric then they may be shooting themselves in the foot and inadvertently missing out on truly great testers, who are likely to shy away in fear of not ticking all the many (seemingly never-ending) desired-skills checkboxes.

Ultimately, it should be a team effort, and companies should never really expect a single Unicorn-in-Test to come in and do everything single-handedly. Nor should companies underestimate the very real threat of focussing too much on automation at the expense of manual testing.

Personally, I hate having the word ‘manual’ as a prefix. ‘Manual’ testing is so much more than a ‘manual’ tester manually running a test. Long may they roam planet Earth!

The Curious Case of the Test Case

Welcome to my eleventh blog post. I’d like to turn our attention to test cases for a moment. Of late I’ve both read and heard many people within and outside the test community ponder whether these are in fact required (in this context, think ‘manual’). Well, this is my personal blog and I’ll give you my answer straight. Yes. Of course they are. Why? Allow me to explain.

At the start of the year I joined a new team. As with any team, there are established ‘artefacts’ that routinely need to be tested. I could have said components. I could have said products. I could have said systems. Whatever. Let’s generically refer to any item under test here as an artefact.

These could be legacy, well-established artefacts that continue to be maintained and improved upon. There may be new versions to upgrade to, which in turn may bring new features along. If you’re lucky, the shiny new version may even address a few niggling long-standing bugs that have remained open for longer than originally anticipated.

Equally, you might be asked to test something completely different. Something entirely new. Something of which you’ve never clapped eyes on before.

Now here’s the rub. As a new team member there were several things I hadn’t clapped eyes on before. Whether something had existed for a while or not, it may as well have been brand new to me, since, as far as I was concerned, it was. What is it? What does it do? How do we go about verifying its behaviour? Are there any pre-requisites as far as the test environment or test data, for example, are concerned? I had lots of questions.

In fairness, the team have been great. Very supportive and parachuting in has given rise to plenty of verbal communication. However, we have a job to do. There’s work to be delivered. Time is always of the essence.

Thankfully, when it comes around to testing several of these artefacts, particularly when performing regression testing (since many of these have been around well before my arrival), I have the benefit of being able to reference their respective test suite(s) within our Test Management tool. This has proven to be an invaluable resource. These tests have allowed me to become familiar with expected behaviours for a variety of scenarios and have provided a practical opportunity to exercise these tests with minimal support. For want of a better turn of phrase, they’ve enabled me to hit the ground running.

Before we go any further, I am not saying you must always have a test case with definitive steps to follow. I’ve found a successful test strategy employs a proverbial smorgasbord of different elements. There has been many a time I’ve found a bug that was never conceived at the requirements-gathering stage and wasn’t found as a direct result of running a specific test case. That said, I’ve also identified many bugs as a direct consequence of executing a set of test cases – I often find these either fail outright, or you find something out of the ordinary as a consequence of running a particular test case. The test happened to have instructed you to perform x, y, and z, which as it happens passes, but you also found something unexpected happens if you slightly modify the steps. In which case, you might have been the first to realise this, and you might choose to update the test case so it would always stand to be caught by others running the same test again in future.

I mentioned ‘others’ there. By others I mean your fellow compadres, comrades, colleagues, team members. If everyone is left to their own devices to regression test an artefact in a purely exploratory manner, there’s a chance your test approach may fall foul of inconsistent test coverage. Somebody might simply forget to regression test a particular scenario or aspect of functionality. For my money, if you are at least executing the same set of test cases then this goes a long way to mitigating any such gaps in your coverage. Though exploratory testing is very powerful: you or somebody else may find something worth fixing as a direct consequence of not following any pre-defined steps. In my experience you need both.

I’ll be coming on to granularity next, but just to recap – I would recommend you look to maintain a test suite, but this must not blind you into thinking all you have to do is run those tests and bingo. Testing, as we all know, is an art form. A skilled tester will always be on the lookout for other scenarios to explore along the way as part of their test execution cycle. The balancing act is ensuring you verify the high-value stuff first and foremost. Don’t waste too much time and disappear down a rabbit hole trying to break something; supposing you do, there’s a real chance the steps to reproduce are so far out there on the edge of the universe that nobody cares.

So, granularity then. I’ve opened some test cases, looked at the convoluted pre-requisites, glanced at the one billion steps, the way it’s written, the language used, etc. and sighed for about ten minutes. Not a good sign. Test cases ought to be easy to digest. Easy to understand. Preferably easy to run, but that’s always in the eye of the beholder (only easy if you know how, though a well-written test case should make it easy to learn from). Then I’ve had test cases at the other end of the spectrum that literally contain one step or duplicate something you already did in the previous test case. It can be infuriating. So if you are not careful you could have a test suite with a few enormous test cases which take forever to plough through, or a test suite with hundreds of tiny ‘baby’ test cases which only serve to inflate the number of individual tests needing to be run and also throw up challenges for maintenance.

Stop and think before putting your test suite together. Could you consolidate a few tests where it makes sense to do so? Remember, you can still test exactly the same sorts of things that would normally be executed individually, but you’d stand to verify these in a much more efficient manner. This in turn makes things easier from a maintenance perspective e.g. you may only need to edit one or two test cases as opposed to several. Take care not to end up with too much in a single test. Use your judgement and common sense when deciding.

Personally, I like using bullet points. These have a nice way of making it clear what to do without feeling like you are shackled to them. You should still have that sense of freedom to explore further. Add notes of interest. Any ‘gotchas’ to be on the lookout for. Nothing must detract from the whole point of running the test in the first place: it must verify expected behaviour and flush out as many issues as it possibly can. Some of these might require fixing. Some might not. Surface the information. Make your recommendations known.

Lastly, I find test cases are a good way of radiating to the wider team what we intend to cover as part of our regression test effort. It’s proven to be of enormous interest to the development team to have visibility of this and offer their support when things invariably change and we are all looking to mitigate any perceived risk before release.

Curiosity Killed the Tester

Hello again. I’ve been unusually busy these past few months plus I’ve been waiting for inspiration to strike and boy oh boy it certainly did.

It won’t be the first time you’ll have read something about this but I wanted to share my take on it for you – intrigued? Read on.

So. Testing eh? We love it. I know fellow testers love it. We understand it. We continue to learn about it. We strive to improve our approaches. We read books. We attend seminars and conferences and such like. We want to make the world a better place.

These days – and this is a fairly broad interpretation and heavily context-dependent – a concrete set of requirements, a clearly defined set of acceptance criteria, and a wholly unambiguous, complete understanding of exactly what to expect as far as behaviour is concerned (that’s software behaviour) is the stuff dreams are made of.

While this may be true – and even if it isn’t – testers should never, ever be afraid of asking questions. Thinking back to my days as a trainee, it was always drummed into us that ‘testers never assume’. This is not an excuse to flick your common-sense switch off, however.

With the Agile manifesto ringing loudly in everybody’s ears nowadays, individuals and interactions are the order of the day. Talk to people. Talk to yourself (don’t go mad though). Get up from that coffee-cup-strewn desk of yours and wander over to speak to a fellow colleague. Ask that question. Collaborate. Those who remember Bob Hoskins will recall his BT advert catchphrase “It’s good to talk.”

I like talking. I was always told off at school for talking too much, in fact. I soon learned I needed to concentrate whilst working, so I saved my talking for later. As a tester (fast-forwarding many years) I still need to concentrate when working on something. As much as I’d love to talk about last night’s “I’m a Celebrity Get Me Out Of Here”, I have a job to do and invariably a deadline to hit.

However, like we were saying earlier, testers need to ask those questions. Sending an e-mail can sometimes be necessary, but when the person you need to ask is sat in front of you (or next to you, or behind you) go and build those relationships and speak to them. Testing is a service after all. You’d be demonstrating to colleagues how passionate you feel about the work you do. Though I have to say not every team member understands Testing. I guess I don’t know everything about Business Analysis, Development, Product Management, etc. but when somebody from a different discipline asks me a question I respectfully and politely try to provide an answer. I want to help. We’re all trying to deliver, aren’t we? I understand that they are a professional at what they do, the same as I am a professional at what I do. There’s a mutual respect (or should be). Moreover, it’s just nice to be nice – isn’t it?

Unfortunately, not everybody in a team will be willing to just be nice (maybe they’re having a bad day) or appreciate that your question is in relation to the work you are doing (since they are probably oblivious to why it’s relevant, or just fail to show any willingness to understand why you are asking). They may decide to just give you the bum’s rush and send you on your way with a flea in your ear.

Bum’s rush? Flea in your ear? To give you the brush off. To not be helpful or understanding. To do or say anything to get rid of you. This flies in the face of team working. Not only will this serve to make you think twice about approaching them in the future, it is sowing the seeds for a total communication breakdown and lack of team spirit.

A classic example for you. You’re in the middle of test execution. Maybe days, weeks, down the line. Then all of a sudden you notice something different. You haven’t seen this behaviour before. Why is this happening? There could be a host of reasons. So, like any good Vulcan, you apply logic. Maybe something has changed within the code base? Let’s face it, how often have you had to deal with ‘changes’ you were not made aware of in the middle of your testing, owing to non-existent release notes? So who should you ask? Maybe the Lead Developer, but they’re not around. You see the Project Manager sat in front of you. So you wander over and ask in person. Sounds like a sensible course of action.

What if the PM immediately replies with “It’s not a defect!” in a less than friendly tone? Yet you never said it was. At this point in time you’re trying to establish expected behaviour and determine whether or not a change had been implemented in a recent build. “Why?” “What difference does it make?” the PM may ask, failing to understand the testing implications of a potential code change that we were not made aware of. You wander back to your desk bemused.

The example above could have been so different had the PM handled it differently. The response could have been “Oh that’s odd I wasn’t aware of any changes for this. I’ll check for you.” In turn you would have thanked them for their help and continued with your investigation into other possible causes.

Testers need a thick skin. A level of resilience. It can be so frustrating.

So yes, absolutely, speak to people. Ask those questions. Just be mindful of the fact that not everybody understands or wants to understand why you are asking.

I’ll leave you with something else we learned at school…play nicely.

Mobile Defects Ring a Bell

Hello! It’s been a while. I’ve been rather busy of late but whilst I’m waiting for the next piece of code to break, I thought I’d revisit my software testing blog.

It has recently struck me that I’ve been testing within the mobile arena for the last five and a half years. Don’t ask me where the time goes, but they do say it flies when you’re having fun, and I must say I continue to enjoy mobile testing.

During this time I’ve tested many, many wonderful (and not so wonderful) native mobile applications across iOS, Android and Fire OS. I’ve also tested mobile web offerings, including, most recently, HTML5-based games.

Invariably, a solid test approach will include testing across different devices, running different versions of operating system, featuring different screen densities, different methods of connectivity (Wi-Fi, Cellular, Offline, Airplane Mode), different chipsets, navigation, looking at installation and upgrade paths, permissions, saving data, storage, security (e.g. DRM downloads), changing orientation, multi-tasking, controls, gestures, interrupt handling, performance, stability, external playback support (e.g. AirPlay, Chromecast), launching, relaunching, accessibility support, usability, layouts, statistics, gameplay, etc. The list goes on.

Despite all these different factors to consider when performing your mobile testing, a mobile-related defect can be distilled down to a common set of criteria. I often champion the use of a defect template in order to promote ease of use and consistency. There is nothing worse, when triaging defects, than finding a whole swathe of information (often crucial information) missing. When people start to assume which build version has been used, or confuse the steps to reproduce, this generally causes lots of misunderstandings (Chinese whispers – not sure if this is politically correct or not) as well as ambiguity. So it’s better to capture all the relevant information, carefully balancing the need to be detailed, informative and as concise as possible.

So let’s try to list some of the useful fields to include within a mobile-related defect (there’s a rough sketch of these fields as a data structure after the list).

  • Title – I like to follow some sort of naming convention. For example, if it’s an issue concerning a crash I’ll prefix the title with ‘Crash – …’
  • Label – This depends on your defect management tool weapon of choice but classifying defects by either pre-fixing the title and/or adding a meaningful label may help when revisiting the particulars in future or when reporting. Again, with a crash you might label them with ‘app-crash’ and these can be reported on for stakeholders.
  • Device – Self explanatory but record which model & generation of model you used e.g. iPad 2
  • Operating System – Record the name & version of O/S you used e.g. iOS 9.3.2
  • Build Version – Record the version of build you have tested. This is vital.
  • Connectivity – Wi-Fi SSID? Cellular connection? Offline? Airplane Mode enabled? Transition from one to the other?
  • Steps to reproduce – I like to sequentially number each step. The reader should be able to understand exactly what you did (that’s assuming you even know what you did!).
  • Expected Result – Try to avoid saying it should do this, etc. Say it WILL do something. Expected behaviour should be clearly defined and understood.
  • Actual Result – Your big chance to explain what the hell happened. Try to include whether you could recover functionality and how or whether you crashed and burned.
  • Frequency – Does this happen every single time? Or is it once in a blue moon? Indicating how prevalent something is often has an influence on the likelihood of developer time being allocated to investigate and/or fix a problem.
  • Live Y/N – Another useful one. Is this already in live? If yes, then has the audience been making any complaints about this? If not then it’s likely to be a lesser priority to fix if at all.
  • Screenshots – OMG. A picture can say a thousand words. If you have the ability to provide exhibit A and evidence that something has screwed up, it will not only serve to prove you haven’t lost your mind and are talking cobblers, but it often cements the nature of a problem (particularly with stakeholders) and can influence decision making e.g. That looks terrible. We need to fix this.
  • Video – Sometimes it isn’t always possible to illustrate something by way of a single screenshot e.g. Video playback is juddering, etc. In these instances consider providing a short video (remember to clean your fingernails beforehand).
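
As promised, here’s a rough sketch of the template above expressed as a simple data structure in Python. The field names mirror the list; none of this is a real defect-tracker API, and the example values are either from the list above or made up (the build version is hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class MobileDefect:
    title: str                     # e.g. prefixed "Crash - ..."
    labels: list                   # e.g. ["app-crash"]
    device: str                    # e.g. "iPad 2"
    operating_system: str          # e.g. "iOS 9.3.2"
    build_version: str             # vital!
    connectivity: str              # e.g. "Wi-Fi", "Cellular", "Offline"
    steps_to_reproduce: list       # numbered, in order
    expected_result: str           # say what it WILL do
    actual_result: str             # what actually happened
    frequency: str                 # "every time" or "once in a blue moon"
    in_live: bool                  # is this already in live?
    attachments: list = field(default_factory=list)  # screenshots, video

bug = MobileDefect(
    title="Crash - app dies when rotating during playback",
    labels=["app-crash"],
    device="iPad 2",
    operating_system="iOS 9.3.2",
    build_version="2.4.1 (hypothetical)",
    connectivity="Wi-Fi",
    steps_to_reproduce=["1. Start video playback", "2. Rotate to landscape"],
    expected_result="Playback will continue in landscape",
    actual_result="App crashes to the home screen",
    frequency="every time",
    in_live=False,
)
```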

Right, well, the above should serve you and your team well. It’s not exhaustive, obviously, but if you start to routinely provide the above each time you capture a defect then it will pay dividends. Trust me. We’re all human (unless you’re some cyborg tester sent from the future), and humans can’t remember everything. Not everybody can be co-located, so logging these on a shared system helps visibility. Moreover, the number of times somebody has come to me asking questions about an issue I raised weeks, if not months, ago is reason alone to have something to refer to, or else you’ll find you simply cannot remember all the relevant details…now…erm…what was I saying?

I’m experienced enough to realise you needn’t be overly process-driven, so perhaps consider only documenting things as and when appropriate. Notwithstanding that, there’s merit in verbally discussing an observation with a member of the team – sometimes a defect can be turned around much faster by having a conversation.

Back to the Bug

I’m going for as many blog posts as I can muster whilst I have some time to spare. My idea for this post is nothing new but it’s one I can relate to. I’d love to have been Marty McFly (or maybe Biff Tannen sometimes as he had a cool car) but testing and time travel can share some commonalities. For example, have you ever felt like you’ve had the same conversation amongst your team or maybe the same conversation across different software development teams? Or perhaps you’ve seen a defect before or the symptoms are scarily similar to one you have encountered before – can you remember what the solution was? What was the cause? Did you spot this again by sheer coincidence or have you put preventative measures in place to ensure a particularly nasty looking bug would be identified should it ever come back to life?

Yes, that’s right ladies and gerbils, welcome to the world of ‘Defect Prevention’. As much as we like seeing defects fixed and patting one another on the back for a job well done after release, have you taken some time to reflect on how these came to…ahem…pass (or fail would be more appropriate in this context)? It’s not a witch hunt though. Nobody should be pointing fingers here. This is about discovering what led to the error being introduced and seeing if there are ways and means of mitigating such circumstances reoccurring in future.

Maybe it was human error. Somebody somewhere screwed up. Yep, it happens. Maybe it was genuinely something unforeseen. Again, this happens. However, it is important to try to learn from these situations as best you can.

For example, it could have been the case that you’d bitten off more than you could chew when sprint planning. Maybe you hadn’t accounted for resource being spread so thinly due to ‘noise’, and what little resource you had available was coding away like headless chickens, so mistakes were made, or there hadn’t been time to code review ahead of test. So perhaps think about ways of shielding your resource from such ‘noise’ and put measures in place to ensure time is allotted for code reviews. Or you need to more closely assess levels of risk in future when tampering with the code base, particularly when there’s been a lot of refactoring going on in parallel to other ‘changes’. Maybe stripping out oodles of code mid-sprint wasn’t such a good idea after all, and/or implementing a shed load of significant new features in one go wasn’t the best approach. Could you have done this at a more suitable time or at a more manageable velocity?

Sometimes unforeseen errors can happen when there are changes going on elsewhere (within the mystical back-end or higher up in the snowy regions of the stack) that you had not been made aware of. So communication problems (or just a general lack of communication) can give rise to wheels coming off your product or system. Could you perhaps better manage, or certainly look to improve, this communication with your external dependencies in future? The list goes on…and on…and on.

I’m a big fan of continuous improvement. No, not because this often overused and underrated term just sounds good either. It’s just common sense. Retrospectives are one of the perfect times for development teams to come together and share meaningful feedback. No mud-slinging. No getting carried away with the jumbo pens and colourful post-it notes. Simply focusing on learning from your previous experiences, good and bad, is just sensible. Have those conversations with one another in a positive (hey, we’re all just trying to do a job here and work as part of a team) kind of way. This is much more likely to effect positive change. The flip side is to unwittingly encounter the same issues again and again. Around and around we go.

Back to the Bug though. That is the title of my blog post, after all. Say you’ve come across a real doozy. It turns out to make quite the impact. It’s a real big deal. One the whole team is going to have to deal with. Heads are being scratched, but there is light at the end of the tunnel. ‘Let’s try to make sure this sort of thing doesn’t bite us again in future’ should be something you and the rest of the team are thinking. Should this way of thinking only apply to the Showstoppers though? Well, I would like to think that once you get into the habit of not re-inventing the wheel every time, and not feeling like you are firefighting to get across the line, it should become second nature. Often, something fairly quick and easy is all that is needed to save you having to suffer the same fate again.

I’m a realist though. You’re never going to get it right every time. Nor can you hand on heart say something won’t happen again. However, if it does and you have the solution readily available, then maybe it won’t bite you quite as hard this time around.

Remember that doozy of a bug we spoke about? Supposing that was only spotted at the eleventh hour. Could you maybe run a test to capture this particular problem earlier in future? You can try, I guess. At least you’ve tried. Yoda would say “do or do not, there is no try”, but we’re not going to mix Star Wars metaphors with Back to the Future, are we? Scared? You will be…you will be.