Test Driven Integration
Tuesday, June 22, 2010
By Philip Zembrod
In an earlier post on trying out TDD I wrote about how my mindset while coding changed from fear of bugs in the new code to eager anticipation of seeing the new code run through and eventually pass the already written tests. Today I want to tell about integrating components by writing integration tests first.

In a new project we decided to follow TDD from the start. We happily created components, “testing feature after feature into existence” (a phrase I love; I picked it up from a colleague), reaching a test coverage of around 90% from the start. When it came to integrating the components into a product, the obvious choice was to do that test-driven, too. So how did that go?
What I would have done traditionally was select a large enough set of components that, once integrated, should make up something I could play with. Since at least a minimal UI would be needed, plus something that does visible or useful things, preferably both, this something would likely have been largish, integrating quite a few components. While playing around and trying things out, I'd soon be debugging, because of course it wouldn't work on the first attempt. The not-too-small number of integrated components would make tracking down the cause of failures hard, and anticipating all this while coding, I'd have met the well-known fearful mindset again, slowing me down, as I described in my initial TDD post.
How did TDI change this game for me? I realized that with my unit test toolbox, which can test any single component, I can also test an integration of two components, regardless of whether they have a UI or do anything visible. That was the key to a truly incremental process of small steps.
First, write the test for two components, run it, and see it fail, to make sure the integration code I'm about to write is actually executed by the test. Then write that bit of code, run the test, and see it succeed. If it still fails, fix what's broken and repeat. Finding what's broken in this mode is usually easy enough, because the increments are small. If the test failure doesn't make obvious what's wrong, adding some verifications or some logging does the trick. A debugger should never be needed; automated tests are, after all, a bit like recorded debugging sessions that you can replay any time in the future.
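To make this concrete, here is a minimal sketch of such a two-component integration test in JUnit 4. The component names are invented, since the post doesn't show its actual code; the shape is what matters: wire two real components together, fake only the outer boundary, and assert on the observable result.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical components standing in for the post's real ones.
    class ReportFormatter {
        String format(String user, int failures) {
            return "Hello " + user + ", " + failures + " tests failed.";
        }
    }

    interface MailTransport {
        void send(String recipient, String body);
    }

    class EmailNotifier {
        private final MailTransport transport;
        EmailNotifier(MailTransport transport) { this.transport = transport; }
        void notify(String recipient, String body) { transport.send(recipient, body); }
    }

    public class FormatterNotifierIntegrationTest {
        // A fake transport records the mail instead of sending it,
        // so the test needs neither a network nor a UI.
        static class RecordingTransport implements MailTransport {
            String lastRecipient;
            String lastBody;
            public void send(String recipient, String body) {
                lastRecipient = recipient;
                lastBody = body;
            }
        }

        @Test
        public void formattedReportReachesTheRecipient() {
            RecordingTransport transport = new RecordingTransport();
            EmailNotifier notifier = new EmailNotifier(transport);
            ReportFormatter formatter = new ReportFormatter();

            // The integration under test: formatter output flows into the notifier.
            notifier.notify("alice@example.com", formatter.format("alice", 2));

            assertEquals("alice@example.com", transport.lastRecipient);
            assertEquals("Hello alice, 2 tests failed.", transport.lastBody);
        }
    }

The first run fails until the wiring between the two components is written; then it passes, which is exactly the red-green loop described above.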
Repeating this for the 3rd component added, then the 4th, and so on, I could watch my product grow, with new passing tests every day. Small steps, low risk in each, no fear of debugging; instead, continuous progress. Every day this roaring thrill: It works, it works! Something's running that didn't run yesterday. And tomorrow morning I'll start another test that will most likely run by tomorrow evening. Or already at lunchtime. Imagine what kind of motivation and acceleration this would give you. Better yet, try it out for yourself. I hope you'll be as amazed and excited as I am.
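How those daily new tests might accumulate can be sketched as a JUnit 4 suite (class names invented): each newly integrated component contributes one test class, and running the suite replays every integration step taken so far.

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // One integration test class per integration step; rerunning the suite
    // re-verifies all steps. Class names are invented for illustration.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        FormatterNotifierIntegrationTest.class
        // NotifierSchedulerIntegrationTest.class  // added with the 3rd component, and so on
    })
    public class IntegrationSuite {}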
What are the benefits? As with plain TDD, the most striking effect of TDI for me is the fun factor: the dread of debugging is replaced by eagerness to write the next test, so as to be able to write and run the next piece of code.
The process is also much more systematic. Once you have specified your expectations at each level of integration, you'll continue to verify them in the future just by running the tests. Compare that to how reproducible, thorough, and lasting your verification of the integration would be if you had done it manually.
And if you wrote an integration test for every function or feature that you cared about during integration, then you can make sure each of them is in shape at any time by just running the tests. I suspect one can't appreciate the level of confidence in the code that this creates until one has experienced it. I find it amazing. I dare you to try it yourself!
P.S. On top of this come all the other usual benefits of well-tested code that would probably be redundant to enumerate here, so I won’t. ;-)


Labels: Philip Zembrod
It's not very clear from this post whether the author ever includes the UI in the integration tests or not.
Personally I think that's where the Presentation Model pattern really shines: it allows you to strip the thin view layer from your fully integrated application and do automated integration tests by invoking the presentation model(s).
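For illustration, a minimal sketch of that idea, with all names invented: the view would do nothing but bind to this model, so an automated test can drive the model directly and never touch the UI.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical backend component.
    interface AuthService {
        boolean check(String user, String password);
    }

    // The presentation model holds all UI state and logic; the (omitted)
    // thin view layer would merely bind widgets to it.
    class LoginPresentationModel {
        private final AuthService auth;
        private String status = "";

        LoginPresentationModel(AuthService auth) { this.auth = auth; }

        void loginClicked(String user, String password) {
            status = auth.check(user, password) ? "Welcome " + user : "Login failed";
        }

        String getStatus() { return status; }
    }

    public class LoginPresentationModelTest {
        @Test
        public void rejectedLoginShowsFailureMessage() {
            AuthService alwaysReject = new AuthService() {
                public boolean check(String user, String password) { return false; }
            };
            LoginPresentationModel model = new LoginPresentationModel(alwaysReject);

            model.loginClicked("bob", "wrong");

            assertEquals("Login failed", model.getStatus());
        }
    }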
In the Growing Object-Oriented Software book, which Misko recommended, the proposed technique is to start with a higher-level test (integration/acceptance/functional) and only then go to the lower level of unit tests and classes. This way, you are sure not to write unit tests and classes that YAGN, because the current failing acceptance test gives you the big picture.
@Giorgio That sounds like good advice for an existing application that has little to no test coverage, especially one that is in maintenance mode, as there are likely to be large chunks that will never change. However, the application discussed sounds as if it was new, and it was written with TDD to begin with, so the risk of ending up with unit tests that YAGN is low, as there's already a lot of coverage.
I'm with @Wim on this one, as testing through the UI is fraught with peril:
http://blog.objectmentor.com/articles/2009/09/29/ruining-your-test-automation-strategy
http://blog.objectmentor.com/articles/2010/01/04/ui-test-automation-tools-are-snake-oil
-Mark
@Mark Roddy: The approach in the GOOS book is not presented for "an existing application that has little to no test coverage", but for greenfield projects. The end-to-end tests are the first lines written in the project.
The goal is to solve all the integration problems (which are usually high-risk) at the beginning of the project. I also perceive that it helps in driving the design of the lower-level code so that all pieces will integrate well together.
If you start by writing the lower-level components, then when you get to the point of integrating them, it's possible that their design makes it hard to integrate them and the design must be changed, because the code was not designed for integrability from day one. Just as TDD makes the code designed for testability, because the tests are written first, starting with end-to-end tests makes the code designed for integrability, because the code is integrated first.
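A sketch of that outside-in style, with all names invented: the project's very first test exercises a walking skeleton end to end, and the skeleton classes below exist only to make it pass.

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // Walking-skeleton classes: the thinnest slice that lets one request
    // pass through the whole (still nearly empty) application.
    class Request {
        final String command;
        Request(String command) { this.command = command; }
    }

    class Response {
        private final boolean ok;
        Response(boolean ok) { this.ok = ok; }
        boolean isOk() { return ok; }
    }

    class Application {
        static Application startWithDefaults() { return new Application(); }
        Response handle(Request request) {
            return new Response("ping".equals(request.command));
        }
        void stop() {}
    }

    public class EndToEndSmokeTest {
        @Test
        public void skeletonProcessesOneRequestEndToEnd() {
            Application app = Application.startWithDefaults();
            assertTrue(app.handle(new Request("ping")).isOk());
            app.stop();
        }
    }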
I think that we could avoid all this complication using ATDD (Acceptance TDD) + TDD.
A good resource is the already mentioned book, Growing Object-Oriented Software, Guided by Tests.
There is also another book, Test Driven: TDD and Acceptance TDD for Java Developers, by Lasse Koskela.
Other resources:
Using Customer Tests to Drive Development
http://www.methodsandtools.com/archive/archive.php?id=23
SlideShare (ATDD)
http://www.slideshare.net/nashjain/acceptance-test-driven-development-350264
TDD with Acceptance Tests and Unit Tests
http://blog.objectmentor.com/articles/2007/10/17/tdd-with-acceptance-tests-and-unit-tests
Thanks for the comments and pointers.
@Wim Coenen: The project I wrote about contains very little GUI, which is tested through the model, not the UI. However, the tests do cover command-line invocations by the user and emails sent to the user. Depending on your point of view, you might consider those part of the UI.
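Covering a command-line invocation might look like this sketch (the Tool class and its injectable output stream are invented for illustration): the test runs the entry point against a captured stream instead of a real terminal.

    import static org.junit.Assert.assertTrue;

    import java.io.ByteArrayOutputStream;
    import java.io.PrintStream;

    import org.junit.Test;

    // Hypothetical command-line tool with a testable entry point; the real
    // main(String[]) would simply delegate to run(args, System.out).
    class Tool {
        static void run(String[] args, PrintStream out) {
            out.println("processed " + args.length + " argument(s)");
        }
    }

    public class CommandLineInvocationTest {
        @Test
        public void reportsNumberOfArguments() {
            ByteArrayOutputStream captured = new ByteArrayOutputStream();
            Tool.run(new String[] {"--verbose"}, new PrintStream(captured));
            assertTrue(captured.toString().contains("processed 1 argument(s)"));
        }
    }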