Hello there. It’s been too long, hasn’t it?

Nearly everyone seems to want shorter release cycles.

In a somewhat rare flash of inspiration, I wanted to blog about pipelines; specifically, delivery and deployment pipelines. You may have a single pipeline, or perhaps several running in parallel.

You’ve more than likely heard phrases such as ‘Continuous Integration’, ‘Continuous Delivery’, and ‘Continuous Deployment’. Wait, aren’t delivery and deployment the same thing? Not necessarily.

If I cast my mind back to a previous blog post, Walk Like an Agile Egyptian Tester, I ended on the notion of pushing quality all the way through development. This alludes to having different suites of tests triggered throughout the different stages of the development process. Think of a build as an artefact which you nurture (a bit like a baby) as it steadily develops and grows until it goes out into the big wide world. If it’s a particularly well-behaved and healthy baby, this will be a pain-free and seamless process. However, in my experience babies tend to need lots of love and attention.

Continuous Delivery and Continuous Deployment share many traits, with one key difference usually found towards the end of the development process. Think of a push versus pull model. In a Continuous Delivery world, yes, we want code to always be in a deployable state, but the final trigger is ultimately your decision. So you (as in a person) decide when to PULL the artefact into production. In a Continuous Deployment world, however, the final trigger is a PUSH: the artefact is auto-magically deployed into production without any human intervention. In a well-established pipeline this is perfectly normal and nothing to write home about. Almost boring, as it happens so often. Just another day.
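To make the push/pull distinction concrete, here’s a minimal Python sketch of that final gate. The mode names and `human_approved` flag are my own illustration, not taken from any particular tool:

```python
# Sketch of the final pipeline gate: Continuous Delivery needs a
# human to pull the trigger, Continuous Deployment pushes
# automatically once every earlier stage has passed.

def should_deploy(all_stages_passed, mode, human_approved=False):
    """Decide whether an artefact goes to production."""
    if not all_stages_passed:
        return False           # never ship a failing build
    if mode == "deployment":
        return True            # PUSH: no human intervention
    if mode == "delivery":
        return human_approved  # PULL: a person decides when
    raise ValueError(f"unknown mode: {mode}")
```

Either way, the code is always in a deployable state; the only question is who (or what) pulls the trigger.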

So your pipeline has an artefact which starts life in its infancy. Born from your beloved source code, it’s initially subjected to lots of unit tests following a commit. The commit itself could kick-start the process of compiling, running tests, and performing code analysis. If there’s something wrong, fix it immediately (we don’t want a crying baby, do we?). If everything is passing, great! Your artefact can then be moved into your artefact repository, ready for the next stage of the process, e.g. integration.
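That commit stage can be sketched as a simple fail-fast loop. The step names and functions below are illustrative placeholders, not a real CI tool’s API:

```python
# Sketch of a commit stage: compile, unit tests, static analysis.
# Stop at the first failure so the team can fix it immediately.

def run_commit_stage(steps):
    """Run each (name, step) pair in order; a step returns True on
    success. Halt and report the first failure."""
    for name, step in steps:
        if not step():
            return f"FAILED at {name} - fix it before moving on"
    return "PASSED - promote artefact to the repository"

result = run_commit_stage([
    ("compile", lambda: True),
    ("unit tests", lambda: True),
    ("code analysis", lambda: True),
])
```

Only when the whole stage passes does the artefact earn its place in the repository.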

I should say that no two pipelines are likely to be the same (well, they could be, I suppose), but the point is it’s down to you and your team to agree on what yours looks like.

Your artefact could go through any number of pipeline stages, being subjected to different levels of testing, passing these, and being promoted (automatically pushed or manually pulled) to the next, and so on, all the while edging closer to production readiness. I tend to think of these stages as feedback opportunities, and your different levels of testing as proverbial safety nets, which in theory should catch issues as and when they arise. However, as your tests run in environments that more closely resemble live, they tend to take longer to run, which means the rate of feedback slowly decreases the closer you get to release. To put this another way: unit tests should run fast (or as fast as possible), giving fast feedback, while end-to-end tests usually take longer (much longer) to execute, so you’ll have to wait longer to find out if everything is behaving as it should.
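This trade-off is why pipelines usually run the cheapest safety nets first. A small sketch, with made-up durations purely for illustration:

```python
# Sketch of the feedback trade-off: order stages so the fastest
# suites run first and most failures surface quickly. Durations are
# invented illustrative numbers, not benchmarks.

suites = [
    ("end-to-end tests", 1800),   # seconds; slowest, closest to live
    ("unit tests", 30),
    ("integration tests", 300),
]

# Cheapest safety nets first.
pipeline = sorted(suites, key=lambda s: s[1])

def time_to_feedback(failing_suite):
    """Seconds before a failure in the given suite is reported,
    assuming stages run one after another."""
    elapsed = 0
    for name, duration in pipeline:
        elapsed += duration
        if name == failing_suite:
            return elapsed
    raise ValueError(f"unknown suite: {failing_suite}")
```

A unit-test failure surfaces in seconds; an end-to-end failure only after every earlier stage has run, which is exactly the slowing feedback described above.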

Other considerations include version control (very important) and environment setup (sometimes easier said than done). CI encourages integrating code early and often, so you’re going to need a handle on versioning your code, as well as versioning the CI machinery you rely on to run it.

In an ideal situation, promoted artefacts trigger a medley of automated tasks to soothe some of this pain. Maybe you have a bunch of different environments and wish to quickly spin something up and have it configured to run your tests. Programmatically configuring the machines your tests run on is one aspect of treating infrastructure as code: environments can be built from the same version-controlled source, which enforces consistency. There are lots of tools in this space, e.g. Chef, Puppet, etc., which help with configuration, provisioning, and monitoring.
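The essence of infrastructure as code is that every environment is derived from the same version-controlled description. The config format below is invented purely for illustration; real tools like Chef and Puppet have their own DSLs:

```python
# Sketch of infrastructure as code: test and staging environments are
# both derived from one shared, version-controlled base definition,
# so they stay consistent by construction.

BASE_CONFIG = {
    "os": "ubuntu-22.04",
    "runtime": "python-3.12",
    "packages": ["postgresql-client", "curl"],
}

def provision(env_name, overrides=None):
    """Produce an environment definition from the shared base config,
    with optional per-environment overrides."""
    machine = dict(BASE_CONFIG, name=env_name)
    machine.update(overrides or {})
    return machine

test_env = provision("test")
staging_env = provision("staging")
```

Because both environments come from one source, a drift between test and staging becomes a code change you can review, rather than a surprise you discover at release time.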

Another benefit of having these self-service, push-button deployments is that they also offer a means of performing a demo for team members. Another great opportunity to gain feedback.

Finally, it wouldn’t be one of my blogs without addressing Automation in Testing and those tests which require humans to run them (as much as I hate the term ‘manually’). The notion of a delivery pipeline doesn’t mean it is exclusively for automated tests; it can also lend itself nicely to human test effort. For example, you may be progressively writing automated tests and still need to run some of them ‘manually’, in which case a push mechanism might not be the best solution. Instead, pulling artefacts in at the appropriate time may prove a better option, e.g. if an artefact is severely borked then you’d be as well ceasing manual test effort and pulling in a shiny new one with the fixes (no sense flogging a dead horse).

Continuous Integration. Continuous Testing. Continuous Delivery/Deployment. Continue to have the latest working code available and visible to everyone. This should go some way towards avoiding having to throw the ‘baby’ (see what I did there?) out with the bathwater when things don’t go according to plan.
