The Rule of Failed Integration Build


What should you do if an integration build fails? By failing I mean that either there is a compilation error or an automated test fails. The general rule in most of the teams I have worked with is that this situation must be treated with the highest priority. The developer who caused the problem is responsible for fixing it and must act immediately. He's blocking the whole team.

That sounds logical. So if you take a look at the history of an integration build, it should be mostly green. If there is a yellow or red build, meaning a failed test or a compilation error, it should be immediately followed by a nice green fixing build. But what is the reality?

My empirical observation is that the more integration tests a project has, the less stable its builds are. Nothing more, nothing less. I don't want to blame integration tests again, even if I'd like to. There are two obvious approaches to dealing with this. The stupid one is not to write integration tests at all. You hear me? That's not a solution.

The better one is to minimize them. Define a minimal set of integration tests that completely covers all integration aspects of the tested software. For everything else, write unit tests.

For example, say I want to check whether my application integrates correctly with the Twitter REST API. I am not going to write a big integration test that involves logging into my web app, clicking an HTML link, and going through all the layers until at some point it finally calls Twitter. Instead, I will write a few tests for each interface method of the component that connects my app to Twitter. That serves the purpose perfectly. It doesn't test anything else, which means it also doesn't fail for any reason other than a broken integration with Twitter.
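To make that concrete, here is a minimal sketch of what such a focused integration test could look like in Java with JUnit 5. The TwitterGateway interface, its methods and the test names are hypothetical placeholders for whatever component wraps the Twitter REST API in your application; the only point is that the test exercises that one component and nothing else.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertNotNull;

import java.util.List;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

// Hypothetical interface of the component that wraps the Twitter REST API
// for the rest of the application.
interface TwitterGateway {
    List<String> latestTweets(String userName, int count);
    String postStatus(String text);
}

class TwitterGatewayIntegrationTest {

    private TwitterGateway gateway;

    @BeforeEach
    void setUp() {
        gateway = createGateway();
    }

    // In the real project this would return the production implementation that
    // talks to the Twitter REST API; it is the only application-specific piece.
    private TwitterGateway createGateway() {
        throw new UnsupportedOperationException("plug in your Twitter-facing component here");
    }

    @Test
    void latestTweetsReturnsSomething() {
        List<String> tweets = gateway.latestTweets("twitterapi", 5);

        assertNotNull(tweets);
        assertFalse(tweets.isEmpty());
    }

    @Test
    void postStatusReturnsIdOfCreatedTweet() {
        String id = gateway.postStatus("integration check " + System.currentTimeMillis());

        assertNotNull(id);
    }
}

Because these tests go through nothing but the Twitter-facing component, a red build here points straight at the Twitter integration, while every other concern stays covered by fast unit tests.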

That's how I like it: a minimal, complete set of integration tests.
