Happy New Year! It’s 2018 and I felt the urge to start the year as I mean to go on and blog about, you know, software testing.
Increasingly, for better or worse, the world is obsessed by automation. When I say the world I’m probably exaggerating. I mean, my Mum probably couldn’t give two hoots about it but there you go.
I’ve been giving some serious thought to test automation and where it may prove useful, and, when used in a timely, strategic, considered fashion and after x hours of initial investment to set up, it can eventually become a wonderful, wonderful thing. Speaking of wonders of the world, how about those pyramids eh? The Great Pyramid at Giza is the only one of the seven wonders that is still standing today. Amazing. That must have taken a lot of effort to construct. So with that somewhat tenuous link in mind I’d like to refer you to another (albeit unofficial) wonder of the software testing world, the agile testing pyramid (probably built by Mike Cohn).
Imagine if we had a waterfall-esque pyramid but it’s upside down, with the sharp pointy bit at the bottom. Careful! It may topple over so don’t get too close (watch those toes if you’re wearing sandals). In this topsy-turvy world we find little in the way of Unit Tests being run at the bottom (if any ~ crazy right?). Moving upwards we have a middle tier which could perhaps include some level of automation, say to test services for example. At the top we have an unwieldy, great swathe of UI tests that need to be run (and let’s assume usually towards the end of the development phase). It’s usually at this level you start to flush out defects, and if you find too many they could, at best, lead to lots of duct tape being used to patch things up or, at worst, torpedo the release altogether. Imagine those stakeholders’ (pharaohs’) faces. Argh, we’re all drowning in technical debt (sinking sand). All in all, this is a means of flushing out defects, but regrettably some may have been realised too late in the day. If only we could have uncovered some of them sooner.
Let’s build the agile testing pyramid and see how that shapes up. We’ll have the sharp pointy bit at the top this time. We need a good, solid foundation at the bottom. Let’s have a nice suite of Unit Tests here and we’ll ensure these are run against every new build generated. A great way of approaching these is to follow a Test Driven Development (TDD) style. The clue is in the name of course: start with a failing test and write the code necessary to make it pass. Refactor where appropriate and move on, ensuring each new test doesn’t break any previous tests.
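To make the TDD rhythm concrete, here’s a minimal sketch in Python. The function and its rules are entirely hypothetical (they’re not from any real codebase); the point is the order of events, with each test written and failing before the code that makes it pass.

```python
# Hypothetical example for illustration: a discount calculator driven
# out test-first. Each test below was written (and failed) before the
# production code existed.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Step 1: this test came first and failed, driving out the happy path.
def test_ten_percent_off():
    assert apply_discount(100.0, 10) == 90.0

# Step 2: a second failing test drove out the input validation.
def test_rejects_silly_percentages():
    try:
        apply_discount(100.0, 150)
        assert False, "expected a ValueError"
    except ValueError:
        pass  # the behaviour we wanted

test_ten_percent_off()
test_rejects_silly_percentages()
```

Run these against every build and any refactoring that breaks earlier behaviour shows up immediately, which is the whole point of that solid foundation.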
As we move towards the middle tier, here we could have far more automated tests, but at an integrated / story level perhaps. Some people refer to these as automated acceptance tests or integration tests. You may of course decide to test against an API directly (if there is one). I think of this middle tier as the logical layer, and if your tests are passing here, and keep passing as part of routine regression testing, then you should have a warm fuzzy feeling inside. If they don’t, then please fix them. Fix them now!
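As a sketch of what a story-level test at this logical layer might look like, here’s a hypothetical basket service (the service and the story are invented for illustration). The test exercises the behaviour the story promises through the service’s public interface, with no UI anywhere in sight.

```python
# Hypothetical service under test -- in real life this might sit behind
# an API; here it's in-memory to keep the sketch self-contained.
class BasketService:
    def __init__(self):
        self._items = {}  # item name -> (unit_price, quantity)

    def add(self, name: str, unit_price: float) -> None:
        price, qty = self._items.get(name, (unit_price, 0))
        self._items[name] = (price, qty + 1)

    def total(self) -> float:
        return round(sum(p * q for p, q in self._items.values()), 2)

# Story-level acceptance test: "adding the same item twice increases
# the quantity, and the basket total reflects it".
def test_adding_same_item_twice_bumps_the_total():
    basket = BasketService()
    basket.add("tea", 2.50)
    basket.add("tea", 2.50)
    assert basket.total() == 5.00

test_adding_same_item_twice_bumps_the_total()
```

Because the test talks to the service rather than clicking through screens, it keeps passing however often the front end is restyled.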
Importantly, I need to remind you the middle tier sits below the UI level, which has been pushed up to the sharp pointy bit at the top. Of course the UI is important. It’s user facing, etc., but you’d be unwise to focus your automation efforts solely here (though nobody is saying you can’t automate the UI). However, it’s often somewhat brittle, since the UI changes frequently, and you may find yourself constantly picking your automated tests back up after they’ve fallen over. Maintenance could become a burden, as any change in the UI would necessitate changes in your tests. If we had more automated tests running below the UI level, we’d probably find verifying the logic itself much more valuable, and arguably (hopefully) it’d be less likely for tests to fall over for no good reason, alleviating maintenance and galvanising all-important trust in your automated test results.
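One practical way to test below the UI is simply to keep the logic out of it. Here’s a hypothetical sketch (the password rules are made up for illustration): the rules live in a plain function, and the UI layer merely displays whatever it returns, so renaming a button or restyling the form can’t break these tests.

```python
# Hypothetical validation logic, deliberately UI-free. The form on top
# of this just shows the messages; these tests never touch a browser.
def password_problems(password: str) -> list[str]:
    """Return a list of human-readable problems (empty means valid)."""
    problems = []
    if len(password) < 8:
        problems.append("too short")
    if not any(c.isdigit() for c in password):
        problems.append("needs a digit")
    return problems

assert password_problems("abc") == ["too short", "needs a digit"]
assert password_problems("longenough1") == []
```

The same checks driven through the UI would need a browser, selectors, and waits, and would break every time the form changed; down here they run in milliseconds and only fail when the logic actually changes.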
Supplement your automated tests with manual exploratory testing. Sometimes, you may want to hold back from writing your automated acceptance tests until the code is mature enough (or until you’ve agreed on the desired behaviour). Automation, as we know, is not infallible, so remember to include the human element to give you the confidence the software is behaving as expected. That’s not to say there’s no human element with automation, since the tests aren’t going to write themselves, and a responsible tester will become familiar with their level of coverage (collaborating with developers) and help guide the creation of new tests.
The test landscape has changed, people. Shifting sands, if you will. Remember, developers are better placed to write code, but as testers it’s going to be important to rally the team and reinforce the belief that defect prevention is better than cure. Try to push ‘quality’ all the way through development.