Hello again. This is my fifth blog post and I wanted to talk about builds. A tester without a build is a bit like a demolition expert without something to demolish. Not that we are in the business of destruction, of course, though I have been known to pull a functional specification apart with my bare hands. Metaphorically speaking, of course. I wouldn't want to risk cutting a finger on a sharp staple or anything. After all, we testers are only human and not the robotic machines some would have us think we are. Beep. Whirr.

So where shall we start with builds then? Maybe the frequency of them is worth a mention. Daily builds are a by-product of continuous integration, I guess you could say, particularly within an agile environment. I don't necessarily have a problem with this, providing each check-in is suitably tested to stand a fighting chance of identifying problems as early as possible. The worry would be a flurry of code check-ins causing a big old mess. Thankfully, it's good practice to develop new features, architectural changes, etc. on a separate 'branch' of code, rather than on 'trunk' (or 'master' in Git parlance). Get it tested and working properly on the branch to give you the confidence required ahead of merging, as the sketch below illustrates. Version control systems are great. Not infallible, but still pretty neat.
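
To make that branch-then-merge idea concrete, here's a minimal sketch of a pre-merge gate in Python. It assumes git and pytest are installed and that pytest is your test runner; the branch name is purely illustrative, and in practice your CI server would do this for you.

```python
# pre_merge_check.py -- minimal sketch of "test on the branch before merging".
# Assumes git and pytest are available; the branch name below is illustrative.
import subprocess
import sys

def branch_is_green(branch: str) -> bool:
    """Check the feature branch out and run the full test suite against it."""
    subprocess.run(["git", "checkout", branch], check=True)
    result = subprocess.run(["pytest", "-q"])  # exit code 0 means all tests passed
    return result.returncode == 0

if __name__ == "__main__":
    branch = sys.argv[1] if len(sys.argv) > 1 else "feature/shiny-new-thing"
    if branch_is_green(branch):
        print(f"{branch} is green -- merge away")
    else:
        print(f"{branch} has failures -- fix before merging")
        sys.exit(1)
```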

As a tester, I am all too familiar with being very careful about which particular build version I am testing against. I cannot stress how important this is. You wouldn't want to expend a whole bunch of time and effort testing away only to find you've been using the wrong build. If I had a penny for every time I've overheard a developer use an expletive because they had no sooner compiled a build than realised something was not included, and so had to create another build version. If you're very lucky, the developer will make this known to you. In other instances, you need to keep your eyes and ears open. Think Superman fine-tuning his super-hearing ability. So when testing, make it very clear which build version you are testing, if only to prompt somebody to tell you whether this is in fact the correct one to use. I always record the build version I've tested when test reporting and always specify the build version when logging a bug report, along the lines of the sketch below. This is bread-and-butter stuff for any good tester.
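
If you use pytest, one low-effort way to bake the build version into every test report is an autouse fixture. The BUILD_VERSION environment variable is an assumption here; your CI tool may call it BUILD_NUMBER, GIT_COMMIT or something else entirely.

```python
# conftest.py -- sketch: stamp the build version under test onto every result.
# BUILD_VERSION is an assumed environment variable; rename to suit your CI.
import os

import pytest

BUILD_VERSION = os.environ.get("BUILD_VERSION", "unknown")

@pytest.fixture(autouse=True)
def record_build_version(record_property):
    # record_property writes a <property> entry into the JUnit XML report,
    # so every test result carries the exact build it ran against.
    record_property("build_version", BUILD_VERSION)
```

Run with `pytest --junitxml=report.xml` and the version appears against every result, so nobody has to guess which build you tested.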

Typically, with each passing sprint, a tester will verify a whole heap of features. Not too many, mind, but a reasonable amount to plough through for the given sprint window, e.g. 2-3 weeks, with the intention of releasing working software on a regular basis. This will inevitably culminate in a shiny final build version, or rather a 'Release Candidate'. Something you'd be prepared to ship. But wait, perhaps we need to ensure this is going to behave as expected prior to sending it out to the masses. Maybe a code change here or a code change there along the way (all aboard the release train... choo choo!) has affected something which was previously working. This would be known as a regression defect, and regression testing is not something to be taken lightly.

Stakeholders need a certain level of confidence prior to making a decision whether to release something into the live environment. For example, you may have a huge audience with high expectations, since your product or system is renowned for its reliability and performance. You have a brand reputation and an image to uphold. Maybe something going wrong in live will have monetary implications, e.g. fines. Maybe it's something new to market and any failure would deter users from returning in future. In an ideal world, regression testing is something you perform as a matter of routine. This needn't always be scripted of course, but I prefer a balance. So, for example, I would verify a new feature is working first and foremost. This may entail testing a series of scenarios for that feature, ensuring the acceptance criteria are satisfied. Then, over and above the user journeys described, I'd start to think outside the box and test around the 'edges', trying to capture aspects of behaviour which haven't necessarily been considered upfront. Finally, before closing the ticket, I'd spend a little time ensuring everything else is working as normal. Making sure pre-existing behaviour has been unaffected. Checking it is all still hanging together. That sort of thing. The sketch below shows one way of keeping those two kinds of check separate.
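
By way of illustration, here's a sketch of that balance using pytest markers. The discount function and its figures are entirely made up; the point is simply separating 'new feature' checks from 'pre-existing behaviour' checks so each can be run on demand.

```python
# Sketch: new-feature checks vs. pinned pre-existing behaviour.
# discount_total and its figures are hypothetical stand-ins.
import pytest

def discount_total(total: float, code: str) -> float:
    """Hypothetical production function under test."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

def test_save10_applies_ten_percent():
    # New-feature acceptance criterion for this sprint.
    assert discount_total(100.0, "SAVE10") == 90.0

@pytest.mark.regression
def test_no_code_leaves_total_unchanged():
    # Pre-existing behaviour, pinned so a later change can't quietly break it.
    assert discount_total(100.0, "") == 100.0
```

`pytest -m regression` then runs just the regression pass (register the marker in pytest.ini to keep pytest quiet about it).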

Circling back to the Release Candidate I mentioned before, then. You've tested all the new features and you're reasonably happy everything is in good shape. The sensible option would be to conduct a risk-based regression test execution phase against the Release Candidate build. Putting the cherry on top, as it were, before releasing to the masses. It's good practice to talk. Talk to your developers. Pair up if necessary. Maybe request some release notes from them or, if these aren't forthcoming (at least in written form), get together and pick their brains. Discuss the areas of code that have changed. Ask them where they perceive risk around these code changes and the impact this could have on areas of behaviour (whether functional or non-functional). Make notes. Share your experiences and recall past defects, issues or problems you've faced previously. Is there any way these can be avoided and/or improved upon this time around in the interests of continuous improvement?
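
One way to turn those conversations into something actionable is a simple map from changed areas of code to the suites that exercise them. Everything below, the module paths and the test files alike, is hypothetical; yours would come straight out of the release notes and developer chats.

```python
# Sketch: derive a risk-based regression scope from what actually changed.
# The area-to-suite mapping is hypothetical -- build yours from release notes.
RISK_MAP = {
    "payments/": ["tests/regression/test_checkout.py",
                  "tests/regression/test_refunds.py"],
    "auth/":     ["tests/regression/test_login.py"],
}

def regression_scope(changed_paths):
    """Return the de-duplicated, ordered set of suites to run for this RC."""
    suites = set()
    for path in changed_paths:
        for area, tests in RISK_MAP.items():
            if path.startswith(area):
                suites.update(tests)
    return sorted(suites)

print(regression_scope(["payments/gateway.py", "auth/session.py"]))
```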

Having had these conversations and read any supporting material, the tester should be able to formulate a plan of sorts. This all sounds terribly formal, but it really is only as complicated as you and your team wish to make it. It's worth spending time assessing perceived risk and understanding what needs to be tested before you even touch the Release Candidate. Perhaps, whilst this planning is underway, somebody else could be running some smoke tests against the RC in parallel, just to uncover any obvious issues early. Then you have understood what your regression test scope is and everybody knows what they have to do.
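
Those parallel smoke tests needn't be anything grand. Something like the sketch below, poking a handful of endpoints, is often enough to catch a dead-on-arrival RC. The base URL and endpoints are placeholders for your own system.

```python
# Sketch: quick smoke check against the Release Candidate environment.
# BASE_URL and the endpoints are placeholders.
import urllib.request

BASE_URL = "http://rc.example.internal"

def is_up(path: str) -> bool:
    try:
        with urllib.request.urlopen(f"{BASE_URL}{path}", timeout=5) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, HTTPError and timeouts
        return False

for endpoint in ("/health", "/login", "/home"):
    print(f"{endpoint}: {'OK' if is_up(endpoint) else 'FAILED'}")
```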

The starting pistol fires and they're off! The risk-based regression test execution phase is underway. Oh wait. You've found a problem. Looks serious. It could be a... gulp... showstopper. You demo the issue to the team. It looks to be reproducible. You have a definitive set of steps to reproduce. The PM looks worried (even more worried than usual). The product owner has a frown on their usually calm-looking face. "We need to fix this, guys," they say. "This can't go live." Silence. The team now faces a dilemma of sorts. Do you throw on the brakes and stop regression testing in the knowledge there is a priority fix winging its way in another release candidate, or do you continue with your regression testing on the off-chance something else is broken and requires fixing? Hmm. Experience has taught me that it very much depends on (i) whether you can continue testing at all and/or (ii) how much test execution is remaining. There will be times where you have to down tools and wait for the next rubbish candidate... sorry... release candidate to become available. Then there will be times when you may as well finish this round of regression testing to see what else you uncover, in the knowledge that a further round of re-testing and regression testing will be necessary. Regardless, it is a team decision, as you are all responsible for the quality of the software you release. There have even been instances of pushing something to live in the knowledge something is broken, with the intention of issuing a 'patch fix' shortly after. Notwithstanding safety-critical software, perhaps. There, that would be a bad idea.

The retrospective should be the place to provide feedback and, as a team, come up with ways of avoiding such problems in future. Maybe include an automated or manual test to capture the showstopper defect earlier if it ever decides to return, or at least to ensure it remains in a fixed state, along the lines of the pinned test sketched below. Maybe look at ways of preventing such defects arising, if this is at all possible. Could you have realistically caught this defect earlier? Sometimes this is easier said than done, however, if you consider the defect in question may only have been introduced in the latter stages of the development cycle. It could have been human error or it could genuinely have been unforeseeable. This is why we test, right!?
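
Pinning a fixed showstopper can be as simple as a test named after the bug, reproducing the exact failing steps. The bug ID and the parse_order function here are hypothetical.

```python
# Sketch: a pinned test so the fixed showstopper can never silently return.
# BUG-1234 and parse_order are hypothetical names.
def parse_order(payload: dict) -> dict:
    """Hypothetical function where the showstopper lived: a missing
    'items' key used to crash the parser."""
    return {"items": payload.get("items", [])}

def test_bug_1234_missing_items_key_does_not_crash():
    # Reproduces the original steps-to-reproduce; fails loudly on regression.
    assert parse_order({}) == {"items": []}
```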

Whatever happens, you don't want to fall into the seemingly never-ending loop of release candidates. Been there. Done that. Not got a t-shirt, mind you. Ooh, now there's an idea!

 
