My current project is the first I have been involved in where we are really doing TDD in a meaningful way, but with no veteran agilist on the team, we are feeling our way along as we go. Coming to the end of our first full sprint, things seem to be going pretty well. Yet they don't seem to be going like any TDD I have read about. So I thought I would blog about what we are doing and see what feedback I get.

To set the stage, this is an ASP.NET MVC 2.0 application written in C#. The website sits in a DMZ, and the application and data layers are hosted on another set of machines behind a firewall.

In our effort to do true TDD, we start with a user story. From this we write one or more BDD stories using NBehave that execute against a controller in the web application. These tests are not unit tests of the controller, however, but integration tests. We are only mocking two sets of classes: 1) we use the types in the System.Web.Abstractions assembly (or our own abstractions where necessary) to decouple our tests from ASP.NET and the .NET Framework; 2) as we go, we define the methods needed on the interface to the application layer and mock the application layer via that interface. Thus our BDD stories test the whole of our web application code (excluding views) as an integrated whole.
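To make the second kind of mocking concrete, here is a minimal sketch of the arrangement. The names (IOrderService, OrderController, FakeOrderService) are made up for illustration, not our actual types: the controller depends only on an application-layer interface, so a BDD story can exercise the whole web-side code path with a hand-rolled fake standing in for the application layer.

```csharp
using System;

// Hypothetical application-layer interface; in practice, methods are
// added to it incrementally as the BDD stories demand new operations.
public interface IOrderService
{
    decimal GetOrderTotal(int orderId);
}

// Simplified stand-in for an MVC controller: it depends only on the
// interface, never on a concrete application-layer class.
public class OrderController
{
    private readonly IOrderService _orders;

    public OrderController(IOrderService orders)
    {
        _orders = orders;
    }

    public string Summary(int orderId)
    {
        return "Order " + orderId + " total: " + _orders.GetOrderTotal(orderId);
    }
}

// Hand-rolled fake standing in for the real application layer; a
// mocking library would serve the same purpose.
public class FakeOrderService : IOrderService
{
    public decimal GetOrderTotal(int orderId)
    {
        return 42.50m;
    }
}

public static class Program
{
    public static void Main()
    {
        // A BDD story would drive the controller like this, with the
        // application layer mocked out behind its interface.
        var controller = new OrderController(new FakeOrderService());
        Console.WriteLine(controller.Summary(7));
    }
}
```

Because nothing in the controller references a concrete service, the same test doubles can later be swapped for the real application layer without touching the controller code.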

Once we have these tests passing, we do a similar thing with our application layer: 1) we write BDD tests against the interfaces defined in the previous exercise; 2) we write implementations that make the tests pass and, in the process, define and mock the interfaces needed for the data access layer and peripheral services.
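The same pattern repeats one layer down. In this sketch (again with invented names — IOrderRepository, OrderTotaler, and the fake are illustrations), the application-layer class is test-driven against an in-memory fake of the data-access interface it defines, so no real database is touched at this stage:

```csharp
using System;
using System.Linq;

// Hypothetical data-access interface, defined (and mocked) while the
// application layer is being test-driven.
public interface IOrderRepository
{
    decimal[] GetLineItemPrices(int orderId);
}

// Application-layer implementation written to make the BDD tests pass.
public class OrderTotaler
{
    private readonly IOrderRepository _repository;

    public OrderTotaler(IOrderRepository repository)
    {
        _repository = repository;
    }

    public decimal GetOrderTotal(int orderId)
    {
        // Sum the line items fetched through the data-access interface.
        return _repository.GetLineItemPrices(orderId).Sum();
    }
}

// In-memory fake used by the application-layer BDD tests.
public class FakeOrderRepository : IOrderRepository
{
    public decimal[] GetLineItemPrices(int orderId)
    {
        return new[] { 10.00m, 12.50m, 20.00m };
    }
}

public static class Program
{
    public static void Main()
    {
        var totaler = new OrderTotaler(new FakeOrderRepository());
        Console.WriteLine(totaler.GetOrderTotal(1)); // sums the faked line items
    }
}
```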

Penultimately, we implement the peripheral services and write integration tests against their interface where practical.

Lastly, we go back to the web layer and implement the views. (Above, we only stubbed out these views to the degree needed to allow us to write our controller tests.) We wait until the end to do the views because the views have to be human-tested anyway, and this allows us to do end-to-end system tests while exercising the view under development.

We do this iteratively in tight vertical slices of functionality, and refactor between iterations. In the end we have a nice little workflow that starts with testing the controller interface and ends with a reasonably rigorous, albeit manual, system test.

On the whole, this seems to be pretty solid TDD. Our automated tests are BDD tests that map directly to needed application behavior. We only write code to satisfy the tests, and we can refactor at will inside the respective layers without having to modify the tests, knowing we have full functional coverage.

But note the lack of low-level unit tests. It is not the case that we don't have any. We have a fair number, but they are not driving our TDD effort. The higher-level BDD tests are doing that.
We write unit tests when we have a complicated piece of code that we want to test in isolation to make sure it handles all the edge cases. (We are using NUnit's new data-driven test cases for this sort of thing.) But we also write data-driven NBehave tests at the integration layer, so that we know the application as a whole handles a representative set of those edge cases.
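For readers unfamiliar with the data-driven style, here is what such a test looks like using NUnit's [TestCase] attribute (the calculator, its threshold rule, and the values are invented for the example; each attribute row runs as its own test case):

```csharp
using NUnit.Framework;

// Hypothetical edge-case-heavy class worth unit testing in isolation.
public class DiscountCalculator
{
    // Invented rule: orders of 100.00 or more earn a flat 5.00 discount.
    public decimal DiscountFor(decimal subtotal)
    {
        return subtotal >= 100m ? 5m : 0m;
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    // Each [TestCase] row runs as a separate test, so the boundary
    // values around the 100.00 threshold are all covered explicitly.
    // (double is used because decimal is not a valid attribute argument.)
    [TestCase(0.00, 0.00)]
    [TestCase(99.99, 0.00)]
    [TestCase(100.00, 5.00)]
    [TestCase(250.00, 5.00)]
    public void Discount_handles_threshold_boundaries(double subtotal, double expected)
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual((decimal)expected, calculator.DiscountFor((decimal)subtotal));
    }
}
```

A handful of attribute rows like these replaces a pile of near-identical test methods, which is what makes it practical to enumerate edge cases exhaustively at the unit level.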

The problem we have encountered with the lower-level tests is their fragility. When we refactor, classes get broken up or combined, and the tests against them break and have to be retooled. The higher-level abstractions and interfaces are more stable, so the integration tests against them remain valid in the face of the lower-level refactorings.

The only concern I really have going forward with our approach is the possibility of the tests taking too long to run at some point. Right now we have about 260 tests that run in just under 20 seconds. Given that some of these tests are data-driven tests that actually run hundreds of times, the total count of test executions is actually over 1K. Considering this, if we do run into speed issues, the first line of attack may be to curtail some of the more extreme cases of data-driven unit testing. Next on the block would be to omit the database integration tests, which account for about one quarter of total execution time at this point.
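If the database tests do become the bottleneck, one way to omit them without deleting anything is NUnit's category feature: tag the slow fixtures and exclude that category from the everyday run. A sketch, with a made-up fixture name:

```csharp
using NUnit.Framework;

// Tagging the slow fixtures lets the fast everyday run skip them,
// e.g. via the console runner's /exclude:Database option, while a
// nightly or pre-commit run still executes everything.
[TestFixture, Category("Database")]
public class CustomerRepositoryIntegrationTests
{
    [Test]
    public void Saves_and_reloads_a_customer()
    {
        // ...talks to the real database...
    }
}
```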

In the end, though, this is working for now, but I would like to hear your thoughts on this and hear how you do it. Happy Coding!