Test Driven Development without Unit Tests?
My current project is the first I have been involved in where we are really doing TDD in a meaningful way, but with no veteran agilist on the team, we are feeling our way along as we go. Coming to the end of our first full sprint, things seem to be going pretty well. Yet they don’t seem to be going like any TDD I have read about. So I thought I would blog about what we are doing and see what feedback I get.
To set the stage, this is an ASP.NET MVC 2.0 application written in C#. The website sits in a DMZ, and the application and data layers are hosted on another set of machines behind a firewall.
In our effort to do true TDD, we start with a user story. From this we write one or more BDD stories using NBehave that execute against a controller in the web application. These tests are not unit tests of the controller, however, but integration tests. We are only mocking two sets of classes: 1) We use the types in the System.Web.Abstractions assembly (or our own abstractions where necessary) to decouple our tests from ASP.NET and the .NET Framework. 2) As we go, we define the methods needed on the interface to the application layer and mock the application layer via that interface. Thus our BDD stories test the whole of our web application code (excluding views) as an integrated whole.
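I can’t post real project code, but here is a stripped-down sketch of the shape of one of these controller-level stories, written as a plain NUnit fixture rather than in NBehave’s story syntax. Every name in it (Order, IOrderService, OrderController) is invented for illustration; the point is that the controller is exercised for real while the application layer behind it is a hand-rolled mock.

```csharp
using System.Web.Mvc;
using NUnit.Framework;

// Invented for illustration -- not our real domain model.
public class Order { public int Id { get; set; } }

// The application-layer interface grows method by method
// as the stories demand it.
public interface IOrderService
{
    Order GetOrder(int id);
}

// A hand-rolled mock of the application layer.
public class StubOrderService : IOrderService
{
    private readonly Order _order;
    public StubOrderService(Order order) { _order = order; }
    public Order GetOrder(int id) { return _order; }
}

// The real controller under test, taking the app layer by interface.
public class OrderController : Controller
{
    private readonly IOrderService _service;
    public OrderController(IOrderService service) { _service = service; }
    public ActionResult Details(int id) { return View(_service.GetOrder(id)); }
}

[TestFixture]
public class ViewOrderStory
{
    [Test]
    public void Customer_views_an_existing_order()
    {
        // Given an order exists in the (mocked) application layer
        var controller = new OrderController(
            new StubOrderService(new Order { Id = 42 }));

        // When the customer requests the order's details
        var result = (ViewResult)controller.Details(42);

        // Then that order is handed to the view
        Assert.AreEqual(42, ((Order)result.ViewData.Model).Id);
    }
}
```

In the real tests the given/when/then structure lives in NBehave scenario text rather than comments, but the mechanics are the same.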
Once we have these tests passing, we do a similar thing with our application layer. 1) We write BDD tests against the interfaces defined in the previous exercise. 2) We write implementations that make the tests pass and in the process define and mock the interfaces needed for the data access layer and peripheral services.
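The application-layer pass has the same shape one level down: the test drives out a data-access interface and mocks it. Again, all the names here (OrderService, IOrderRepository) are made up for illustration, not taken from our project.

```csharp
using NUnit.Framework;

// Invented for illustration.
public class Order { public int Id { get; set; } }

// The data-access interface is *defined by* this layer's tests
// and implemented later, when we get to the peripheral services.
public interface IOrderRepository
{
    Order FindById(int id);
}

// The real application-layer implementation under test.
public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) { _repository = repository; }
    public Order GetOrder(int id) { return _repository.FindById(id); }
}

// A hand-rolled mock of the data access layer.
public class StubOrderRepository : IOrderRepository
{
    private readonly Order _order;
    public StubOrderRepository(Order order) { _order = order; }
    public Order FindById(int id) { return _order; }
}

[TestFixture]
public class GetOrderStory
{
    [Test]
    public void Returns_the_order_the_repository_finds()
    {
        var service = new OrderService(
            new StubOrderRepository(new Order { Id = 7 }));

        Assert.AreEqual(7, service.GetOrder(7).Id);
    }
}
```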
Penultimately, we implement the peripheral services and write integration tests against their interface where practical.
Lastly, we go back to the web layer and implement the views. (Above, we only stub out these views to the degree needed to allow us to write our controller tests.) We wait until the end to do the views because the views have to be human tested anyway, and this allows us to do end-to-end system tests while testing out the view under development.
We do this iteratively in tight vertical slices of functionality, and refactor in between iterations. In the end we have a nice little workflow that starts at testing the controller interface and ends with a reasonably rigorous, albeit manual, system test.
On the whole, this seems to be pretty solid TDD. Our automated tests are BDD tests that map directly to needed application behavior. We only write code to satisfy the tests, and we can refactor at will inside the respective layers without having to modify the tests, knowing we have full functional coverage.
But note the lack of low-level unit tests. It is not the case that we don’t have any. We have a fair number, but they are not driving our TDD effort; the higher-level BDD tests are doing that.
We write unit tests when we have a complicated piece of code that we want to test in isolation to make sure it handles all the edge cases. (We are using NUnit’s new data-driven test cases for this sort of thing.) But we also write data-driven NBehave tests at the integration layer so that we know the application as a whole handles a representative set of those edge cases.
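For the data-driven cases we lean on the [TestCase] attribute that NUnit 2.5 introduced: one test method, one attribute per edge case. A small sketch of the idea, with the calculator invented for illustration:

```csharp
using System;
using NUnit.Framework;

// Invented for illustration -- a small piece of logic with edge cases
// worth pinning down in isolation.
public static class DiscountCalculator
{
    public static decimal Apply(decimal price, int percent)
    {
        if (percent < 0 || percent > 100)
            throw new ArgumentOutOfRangeException("percent");
        return price - (price * percent / 100m);
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    // One test body, many cases; NUnit converts the int arguments
    // to the decimal parameters.
    [TestCase(100, 0, 100)]
    [TestCase(100, 100, 0)]
    [TestCase(80, 25, 60)]
    public void Applies_percentage_discounts(
        decimal price, int percent, decimal expected)
    {
        Assert.AreEqual(expected, DiscountCalculator.Apply(price, percent));
    }
}
```

A representative subset of the same cases then gets repeated as data-driven NBehave scenarios at the integration layer.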
The problem we have encountered with the lower-level tests is their fragility. When we refactor, classes get broken up or combined, and the tests against them break and have to be retooled. The higher-level abstractions and interfaces are more stable, and therefore the integration tests against them remain valid in the face of the lower-level refactorings.
The only concern I really have going forward with our approach is the possibility of the tests taking too long to run at some point. Right now we have about 260 tests that run in just under 20 seconds. Given that some of these tests are data-driven tests that actually run hundreds of times, the total count of test executions is actually over 1K. Considering this, if we do run into speed issues, the first line of attack may be to curtail some of the more extreme cases of data-driven unit testing. Next on the block would be to omit the database integration tests, which account for about one quarter of total execution time at this point.
In the end, though, this is working for now, but I would like to hear your thoughts on this and hear how you do it. Happy Coding!
January 28th, 2010 at 11:05 am
Great article. I would like to try this myself.
Would you be able to share your skeleton testing and application layers with 1 or 2 vertical slices of functionality (with sensitive business info stripped)? If that’s not possible, it would be useful if you could share code snippets for these vertical slices or provide links to the TDD/BDD resources you are using (URLs, open source projects/tools, books, etc.).
Thanks a lot.
Rad
January 29th, 2010 at 6:50 am
Rad,
I can’t post any real code from my current project, but I will see what I can do to get some examples up.
TDD is a lot like programming itself. You don’t find many tutorials online for how to start, and the ones you do find are really simple. There are basic concepts, with advanced techniques and best practices on top of that. There are even unit testing patterns and the like.
I started by reading the best books I could find. I have a list of my favorites. The list doesn’t contain any titles explicitly on TDD, but many of them discuss it. From there I read blogs (see my sidebar for my favorites) to keep up with current developments and see how others are doing things.
Mostly though, I have acquired my testing skills by trial and error. In my experience, writing good tests is a lot like doing integrals in calculus. There is no formula for how to do it and you won’t learn much by watching others. You need to do a lot of it yourself and develop a feel for what works in a given situation.
BDD is even worse in that I have found very, very little written on it at the practical level. So I am pretty much making things up as I go and figuring out what works and what doesn’t. I will definitely try to find the time to post my lessons learned as I go.
–Ken
April 17th, 2010 at 2:30 pm
Thanks for a great article. My partner and I are the pioneers in our company for starting TDD. We are very new to TDD. We believe in it, but we are still fighting to get a true grasp of it. We started the other way around, writing all of these low-level unit tests and trying to do TDD from the bottom up, but we keep running into the problems you mentioned: tests that are constantly changing. It seems like TDD is not worth it when the tests make it HARDER to refactor. It seems to me that the lower down you go, the more brittle the tests become.
So I started working the other way around, driving functionality from tests for my topmost abstraction layer. I am able to start with a few tests that work well, and I am able to keep my red/green speed up, usually working on boundary cases and the like, but eventually I hit the test where the real work needs to be done, and I find myself writing a bunch of code. The test still covers it, but I am starting to write functions and classes, refactor, and all. Do you think that is OK? I was thinking that maybe I needed to start writing tests on the lower-level abstraction, but I have a hard time making that transition.