Quality shouldn’t be testing


The comic above makes a good point. It is common to expect code to include executable tests that demonstrate that it works as expected. Even if you deliver on time, have a team full of experienced engineers and otherwise describe yourself as putting quality first, when faced with a code base without tests I’ll be thinking:

  • You delivered on time but there are no tests – what other corners have been cut?
  • You have experienced engineers who don’t write tests – your definition of experienced is different from mine.
  • You say you put quality first but you don’t measure it – how do you know you are delivering a quality product?

From my perspective, reputation and credibility are eroded if I can’t see your quality assurance process in the code. Having worked on solutions, some that had automated tests and some that didn’t, I have first-hand experience of the compromises that are made when some form of automated testing is not there.


There is a good way to test and a bad way to test. Often, you see tests that simply test that the code does what the code does. What use is that? Instead, the tests should represent the requirements and therefore assert that the code meets the needs of the business. So while it can be painful to discipline yourself to write tests first, it often results in a solution that better fits the requirements. It takes time and effort to get there.

Developers and testers have different skills; they think differently.


The roles should complement each other. Testers can show developers how to think critically about their solutions, whereas developers can help testers with the coding skills that are increasingly required to build reliable and repeatable test suites.

Writing code, testing it, finding bugs and fixing them is a natural cycle of software development. It is said that people learn more from mistakes, so perhaps creating, understanding and then fixing bugs makes a software team better.


Rather than focusing on stamping out bugs, the focus should be on implementing a series of practices that increase the quality of the team’s output, which includes more than automated testing. The team needs to understand how to write good code, continuously improve it through refactoring and find ways to continuously measure the quality of their output. That means not only test coverage, but other metrics such as the cycle time to deliver a story to the definition of done, and code analytics like cyclomatic complexity and method or function size. Thinking about it, it is not about testing code; instead we should be focusing on measuring quality metrics.


AngularJS, Protractor and TeamCity – Part2


In my last post, I covered the basics of creating a Protractor test suite to test an AngularJS application. The next step for all self-respecting developers is having the suite run as part of a CI/CD pipeline. I promised to outline how you might do this with TeamCity. Before I do that, I want to cover some of the stages that you might miss.

Writing a Protractor test suite boils down to writing JavaScript. As with any development effort this might result in good code, or it might result in a ball of mud. Whilst the success of the test suite is dependent on the quality of the tests, good automated suites are also dependent on the quality of the code that makes them up. The automated test suite will run regularly, so it needs to be reliable and efficient. The software will evolve, it will change, so the tests will need to change too. Your test code needs to be simple and easy to maintain. The software principles that apply to production code, such as SOLID, should also apply to your test code.

Testing Patterns

A typical pattern used when writing automated end-to-end tests is Page Objects. The idea is that the complexity of driving a particular page is removed from the test itself and encapsulated in a page object. The page object has intimate knowledge of a page in the application and exposes common features that the tests will need. The pattern reduces the amount of repeated code and provides an obvious point of reuse. The page object evolves at the same rate as the underlying application functionality. Done correctly, the tests themselves become decoupled from changes in the application.
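As a sketch of the idea, here is a page object for the calculator page used in the Protractor tutorial. The `element` and `by` finders are injected so the class can be exercised without a browser; in a real suite you would use Protractor’s globals directly, and the method names are illustrative.

```javascript
// Page Object sketch for the Protractor tutorial's calculator page.
// element/by are injected so the class can be shown without a browser.
class CalculatorPage {
    constructor(element, by) {
        this.first = element(by.model("first"));
        this.second = element(by.model("second"));
        this.goButton = element(by.id("gobutton"));
    }

    // One intent-level method hides the three DOM interactions that every
    // test would otherwise have to repeat.
    add(a, b) {
        this.first.sendKeys(a);
        this.second.sendKeys(b);
        this.goButton.click();
    }
}
```

A test then reads `page.add(1, 2)` rather than three separate element interactions, so when the page changes only the page object needs updating.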

As the page object and the underlying page end up in step it becomes natural for the developer and the tester to work closely. The developer helps the tester understand the capability of the page. The tester then builds a page object to expose this functionality to tests. Done correctly this collaboration drives out an application that is easy to test. As new functionality is created the tester can influence the developer to write the page in a way that is easy for them. One common area is the use of selectors to find elements within the page’s DOM.

Tests written after a page is finished often result in complex and/or unreliable selectors. Complex XPath expressions or custom logic to search the DOM tree for the correct element is common. Tests written like that become hard to maintain. On the other hand, if the tester and developer work together and importantly, the developer understands how their work will be tested, then pages that are easy to test are the result.
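To illustrate the contrast, here are two hypothetical ways of locating the same element. The stub `by` object simply records the selector strategy so the example is self-contained; in a real suite these would be Protractor’s `by.xpath` and `by.css` locators.

```javascript
// Stub locator factory so the contrast can be shown without a browser.
const by = {
    xpath: expr => ({ strategy: "xpath", expr }),
    css: expr => ({ strategy: "css", expr })
};

// Brittle: encodes the page layout, so any structural change breaks the test.
const brittle = by.xpath("//div[3]/table//tr[2]/td/span");

// Robust: relies on a stable hook the developer added specifically for tests.
const robust = by.css("[data-test='order-total']");
```

The robust selector only breaks if the developer deliberately removes the hook, which is exactly the kind of change a collaborating tester would hear about in advance.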

Testing the Tests

Writing test code starts to look and feel like writing production code, so similar tooling should be in place. Don’t let your testers struggle with basic editors while your developers are using fully featured IDEs with syntax highlighting and IntelliSense. Testers working with Protractor should use a decent editor such as VSCode, Sublime or Atom with a complement of JavaScript plugins. In particular, use linting tools to ensure that the JavaScript written for tests follows similar style guidelines to the main application’s AngularJS code.

You should also ensure that it is easy to test your tests. This link demonstrates how to set up VSCode to run a test specification by hitting F5. The article also advertises that you are able to debug Protractor tests. Whilst I have managed to hit breakpoints in my tests, the debugging experience has fallen well short of my expectations. I have not had the time to investigate further.

TeamCity Integration

The final step is to have your test suite running as part of your CI/CD pipeline. Protractor is an end-to-end testing framework, and end-to-end tests may take some time to execute. Large test suites may take several hours. CI builds should take minutes, not hours, so it is advisable to run these types of test suite overnight. Daily feedback is better than no feedback at all. If you have a deployment build that deploys to a specific environment each night, you could tack execution of the test suite onto the end of that. If you deploy to an environment less regularly, you can still run the suite each night.

TeamCity happens to offer comprehensive statistical analysis of repeat test runs. It provides trends that show if particular tests are getting slower and it will also highlight tests that are flaky – i.e. ones that sometimes work and sometimes fail even if nothing has changed.

The simplest way to integrate your suite into TeamCity is to use the TeamCity Jasmine reporter. You can add the reporter to your conf.js like this. Don’t forget to add it to your package.json file.

exports.config = {
    framework: "jasmine2",
    capabilities: {
        browserName: "chrome"
    },
    onPrepare() {
        let jasmineReporters = require("jasmine-reporters");
        jasmine.getEnv().addReporter(new jasmineReporters.TeamCityReporter());
    }
};

In the last post, I created a Gulp task to execute the suite. The final step is to add a build step to execute it.


I have an npm install step that ensures all the dependencies required by the tests are downloaded. I’m then executing gulp e2e to run the tests.

You can see that this is working on the build summary, which shows a count of passing and failing tests.

As your test suite is outputting results just the way TeamCity likes them, you can drill into your tests and get all sorts of useful stats, like this.


AngularJS, Protractor and TeamCity


The go-to solution for creating automated end-to-end tests when building AngularJS applications is Protractor. In this post, I am not going to explain how to write tests using Protractor. Instead, I’m going to cover some of the things I discovered when trying to integrate a suite of Protractor tests into a TeamCity-based CI/CD pipeline.

Protractor Basics

The following article covers the basics of Protractor. It gave me the understanding I needed to work out where all the moving parts were and therefore how best to integrate into a CI/CD pipeline.


Protractor is a Node.js module, so I’ll assume that you have Node.js installed on your development machine. Once you have that, you can get Protractor using

npm install protractor -g

Protractor relies on drivers for the various web browsers. It will execute your E2E test scenarios directly in the browser so this bit is important. The drivers in turn use Selenium, a common testing framework.

That is a lot of moving parts, I hear you say. Luckily it is relatively simple to get this up and running. Once you have Protractor, ensure you have all the latest webdriver bits

webdriver-manager update

This ensures that all the latest webdriver and Selenium versions are downloaded and configured correctly. The final step is to start everything up. You can do that with

webdriver-manager start

You may hit your first hurdle at this point. Selenium is a Java-based solution and as such requires Java to be installed on your development machine. So if you have a problem at this stage, go off to Oracle’s site, download Java and install it. You’ll need the latest JDK.

In order to continue, you’ll need an example test spec, spec.js. I used this one, which is part of the Protractor tutorial.

describe('Protractor Demo App', function() {
    var firstNumber = element(by.model('first'));
    var secondNumber = element(by.model('second'));
    var goButton = element(by.id('gobutton'));
    var latestResult = element(by.binding('latest'));
    var history = element.all(by.repeater('result in memory'));

    function add(a, b) {
        firstNumber.sendKeys(a);
        secondNumber.sendKeys(b);
        goButton.click();
    }

    beforeEach(function() {
        browser.get('http://juliemr.github.io/protractor-demo/');
    });

    it('should have a history', function() {
        add(1, 2);
        add(3, 4);

        expect(history.count()).toEqual(2);

        add(5, 6);

        expect(history.count()).toEqual(3); // This is wrong!
    });
});

You also need a configuration file, conf.js. This is the one used in the tutorial.

exports.config = {
    framework: 'jasmine',
    seleniumAddress: 'http://localhost:4444/wd/hub',
    specs: ['spec.js']
};

To run this, Selenium must be running locally, exposing an endpoint on port 4444. But if you remove the line starting seleniumAddress… guess what… an instance is started for you automatically. With the modified conf.js file you should be able to run your test with the following command.

protractor conf.js

This should work if:

  1. You have installed Protractor globally.
  2. You have ensured that the web drivers and Selenium are updated.
  3. All the versions of Protractor, your browser and its particular web driver are aligned.

When I was trying to make an old implementation of a Protractor test suite CI/CD ready, it turned out that it was using an old version of Protractor, so updating the web drivers downloaded old versions. The machine I was testing on happened to have an old version of Chrome, so all seemed to work. This subsequently failed when other people tried because they were using the latest version of Chrome. An upgrade to Protractor fixed that but then caused problems for people stuck on older versions.

You might think: why are people on old versions of Chrome? Blame centralised management of software in big organisations.

So the lesson I learned here was to always start with the latest versions, even when picking up older implementations. The versions of the framework and the browser matter.

Making this CI/CD Friendlier

With the CI/CD pipeline I was working with, I didn’t want to have an instance of the webdrivers and Selenium running constantly. Neither did I want to have to manually start up the web driver on each build. As Gulp was being used for other purposes, I happened upon gulp-angular-protractor, which provides a wrapper for all of these things. Effectively it updates the webdrivers, starts them for you and then clears them down when you have finished. This meant that I could have a single Gulp task that executed my test suite.

A good build is self-contained and minimises dependencies on what is installed on the build server. The packages I need are installed locally. In order to create a package.json file, install the required packages like this

npm install protractor --save-dev
npm install gulp-angular-protractor --save-dev
npm install gulp --save-dev

Although the final step installs gulp locally, it also needs to be installed globally too. I have no idea why, but it does. You can then create a Gulp file like this one

let gulp = require("gulp");
let gulpProtractorAngular = require("gulp-angular-protractor");

gulp.task("runtests", callback => {
    gulp.src(["./spec.js"])
        .pipe(gulpProtractorAngular({
            configFile: "protractor.conf.js",
            debug: false,
            autoStartStopServer: true
        }))
        .on("error", e => {
            console.log(e);
        })
        .on("end", callback);
});

Let’s break this down.

First, I’m bringing in gulp and gulp-angular-protractor so I can use them later. Next, I’m creating a Gulp task called “runtests”. The rest basically says execute the config file with spec.js as the test spec file.

At this point, you have a basic spec and conf file that can be executed from a Gulp task. This is enough to start building some real test specs and checking them out. Come back next time to see what you need to do to integrate this properly with TeamCity.

Chaos Driven Development – Agile Anti Patterns


I guess if you are reading this you will have some understanding of Test Driven Development, or TDD. Whether you buy into it or not, the basic concept is to incrementally build up a suite of tests that act as a framework in which to build your code. The test suite acts as an executable specification that determines whether the code fulfils its purpose. It can also be used to ensure that only the code that is needed is written. If code is written that is not making a failing test pass or improving the overall quality of the code base, then does this code have value?

I like to think of the test suite acting as pressure that drives a solution forwards in a given direction. A test suite is not the only type of pressure that can be applied to development, there are other types. The one I’d like to focus on here is chaos.


  • Chaos: complete disorder and confusion

It is not really chaos itself that is the pressure, it is the emotional effects that result from it. Okay, some people thrive in chaos but for the rest of us, it generates worry and concern. Some people switch on the heroics but others become stressed or are unable to act through the fear of doing something wrong. Fear Driven Development is a related concept.

With TDD the developer is motivated to write code that turns the light green. This often (although not always) generates code that is simple, logical and easy to maintain. On the other hand, when people are under pressure due to chaos, the motivation is to remove the pressure at all costs. This leads to short-term thinking; designs and good intentions go out of the window, and “good enough” and “that will do” become the norm. Unfortunately, the short-term thinking creeps out to other stakeholders, and suddenly the developers who are hacking at code to alleviate the pressure get the kudos, being rewarded for their heroics.

With TDD, people often forget the refactoring bit. In Chaos Driven Development you never get the chance to improve things. You are already a hero, and you’ll be needed to go into battle fighting the next fire, which is even more likely because there is never enough time to fix things properly.

| Test Driven Development | Chaos Driven Development |
| --- | --- |
| Can simplify the code through Red – Green – Refactor. | Can generate spaghetti code by patching the symptom rather than understanding the root cause. |
| Testing is built in and automated. | “It worked on my machine!” |
| Regression safety. | Praying a fix hasn’t introduced another problem, and “that was working last week!” |
| Code ownership through collaboration. | “I’ve got this” and “X is leaving, what do we do now!” |

Chaos Driven Development is unsustainable. It leads to stress and low morale, which leads to people leaving. When someone leaves, particularly if they are considered a hero, it is just another fire to put out, which adds pressure to a team that might already be at breaking point. The problem is only going to get worse – the fire will start to burn out of control. And when that happens, often the only option is to build some fire breaks around it and let it burn out.

When is a bug not a bug?


Testing is a very different skill set from software development. Testers look at the problems that software developers are solving through a different lens. They are analytical and ruthless in finding the problems hidden within the most finely crafted software.

In traditional software delivery methodologies, testing follows development, just as night follows day. To development teams, testing is a hindrance, and to testers, software developers are people who cut corners and put the team under pressure to release sub-standard software. The two roles work against each other, which can make for a stressful working environment.

In Agile delivery methodologies, rather than fighting each other, the two roles work together. Testers help developers build code that is easier to test and developers show testers at first hand just how complex software development can be. The roles combine to produce high quality automated testing solutions.

As I said above, testers are analytical. They are driven to find problems, and when they do find them they want to document what they have found. They describe how to reproduce the issue and its severity, based on a set of clear classifications, and then build up a log of all discovered issues so they can determine whether they have been fixed, or whether they reoccur in the future. What they like to do is find and then document bugs.

In Agile delivery things are different. A user story is either done or it isn’t. Testers have a valuable role in determining whether a story is done. Done could mean that the story is released to Live, or more likely that it has been deployed to a pre-production environment, is working correctly and is waiting to be deployed into production by another team. When the story is started, the developer and tester define the acceptance criteria for the story. These are the criteria that define whether the story is done. They can be turned into automated acceptance tests, or they could simply be a set of tasks that are checked manually. Development on the story continues until the acceptance criteria are met. If the tester finds that the criteria are not met, the developers undertake more tasks until they do.

Where is the documentation that ensures that issues found during development are not reintroduced?

These are the acceptance criteria for the story, and this is why it is useful to build them up and automate them over time. Once they are in the acceptance test suite, a successful run gives everyone confidence that no issues have been introduced. If it fails, you can usually pinpoint the problem by identifying which test has failed. So in my opinion, an issue found by a tester in a story that is in progress is not a bug; it simply represents more work.

Does this mean that there are never any bugs?

This approach greatly reduces the number of problems getting into live and also helps you avoid reintroducing problems. However, that is not to say that your software is problem free. Customers may report problems, or exploratory testing might find issues not covered by acceptance testing. Finally, automated error logging may identify problems that neither your customers nor your testers can see.

These are bugs, and they should enter your backlog in the same way as any other work. Once a bug is identified, it is up to the tester to provide a severity, from the product being completely broken and inaccessible, through the product not working as designed but with workarounds, to simple cosmetic issues. Severity should not be confused with priority. They can be related, but not always. For example, a typo is a low-severity bug, but it is high priority in the backlog if the typo is in your best customer’s name!

In traditional software development, long lists of bugs were sometimes treated as a badge of honour by the testing team, demonstrating its effectiveness. Test exit reports, which include counts of bugs by severity, become a bargaining chip between customer and supplier as to whether a product can go live. In Agile software development, low bug counts are worn as a badge of honour by the whole team, indicating the quality of the software they are delivering.



The Automated Unit Testing Journey


Test Driven Development, or TDD, is very common in modern software teams but is not universal. Many career developers are only just stepping, blinking, into the light of what it takes to be a successful developer in today’s software projects. There are many stones yet to be turned that will surface more developers who are still to go on the TDD journey. This post is about the things you might need to do to walk the path to TDD enlightenment.

If you want a team to do TDD, the very last thing you should do is mandate it. It is a bit like telling your child to clean their room: you know that by telling them to do something, it will be the very last thing that they do. No one likes being told they have to do something. At best, the team will go through the motions, make token efforts and blame you when every task takes twice as long because “we have to do TDD”. At worst, you will have downright dissent, where every discussion becomes an argument about why TDD is a bad idea.

If you want a team to do TDD, they have to want to do it. You have to create a movement.

Take the first step

You’ll often work with codebases that have little or no test coverage. Spend time reviewing the code on your own or with other developers. Look for the core business logic and the parts of the system that are commonly used. Start adding tests around these areas yourself. Depending on your role, the type of project and the organisation you work for, you might have to do this in your own time.

Find your first followers

Writing tests yourself will only go so far. If you are not careful, your helpfulness will become a hindrance – for example, when someone starts to refactor the code base and finds that lots of your tests no longer compile. If you can’t create a following, you are just this mad guy causing lots of problems. So be public about what you are doing and why. Pair with other programmers and show them how things work. If you do create a problem, drop everything and sort it out.

Move to the tipping point

Once you have a few followers publicly demonstrating what they are doing, more people will join you. Use that time to make the process smooth. Is it easy to run the test suite, and does it execute quickly? Fix the problems you find so it becomes more difficult for people not to join in.

Cross the tipping point

Soon you’ll have more people involved than resisting. The momentum your following generates eventually makes it harder for people not to follow. This includes ensuring that test suites run as part of a continuous integration setup and that the code is refactored to make it easily testable. The codebase will become simpler and easier to maintain. Over time, clear design patterns will emerge… patterns that are easier to test.

The following

Soon the advantages of unit testing will be clear to the team and those interacting with them. The team will be more confident maintaining the code, and many existing code issues will have been identified and resolved. Perhaps the tests will have stopped mistakes finding their way into live.

And here is the rub, I haven’t written about TDD yet. Code that is more testable tends to make developers think about how they will test code as and (hopefully) before they write it. Your following will self-organise around the best way to provide automated testing and this may lead to TDD.

I never tell or even ask teams to do TDD. I don’t evangelise that red, green, refactor is the one true way. I have seen enough Test Induced Damage from teams that have done this religiously without really understanding what they were aiming at. What I ask is for the team to be good engineers, provide some level of test coverage for regression confidence and start a journey towards regular, stable and reliable releases. If TDD emerges from this, then all well and good, but TDD is simply a tool or a technique; it is not the goal.

This post and the idea of building a movement around automated unit testing is inspired by this TED talk. It is one of my favourites.