Test First & Code First
This post really hit a chord. I'm a firm believer in the value of tests; they're essential to having healthy software. But I often find myself in situations where a new piece of functionality just isn't testable. It's not testable because I've no idea how it's going to look until I write it.
Let's take a recent scenario: I wanted to add a rule that checks that some variables are within a specific range. So I created a test that looks like this:
public void AtLeast5OccurancesInMonth()
{
    Calendar calendar = // .. create calendar ..
    Rule rule = NumberOfOccurances.Min(5, Period.Month, Severity.Error);
    List<RuleResult> results = new List<RuleResult>();
    rule.Apply(calendar, results);
    Assert.AreEqual(3, results.Count);
}
When I get to the point where I write the rule, I find that I don't have any way to implement this rule because I've no way to get a month from a calendar. So to write this test I needed to have a way to get months from a calendar.
I suppose I could've written a set of tests for getting months from calendars first. But I would've lost the state of mind I was in, so I just added the ability to get a collection of months from a calendar (which does require some logic) and wrote the code to make the test pass. Later on, I went ahead and wrote the tests for the months property, of course.
A TDD purist would note that I probably should've just stubbed the Apply method to return 3 RuleResults, and advanced from there. In my opinion, such baby steps are maddening. Your mileage may vary.
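For the curious, that baby-step stub might look something like the sketch below. RuleResult and Calendar are only placeholders standing in for the types from the example above, and NumberOfOccurancesStub is a name I made up; the point is just that Apply returns exactly what the test expects, with no real logic behind it.

```csharp
using System.Collections.Generic;

// Placeholder types standing in for the real ones from the example.
public class RuleResult { }
public class Calendar { }

public class NumberOfOccurancesStub
{
    // Fake it till you make it: satisfy the test's expectation of
    // three results without implementing any rule logic yet.
    public void Apply(Calendar calendar, List<RuleResult> results)
    {
        for (int i = 0; i < 3; i++)
            results.Add(new RuleResult());
    }
}
```

From here the purist would replace the canned loop with real logic, one failing test at a time.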
As an aside, while I was writing this post I got an email on the TDD mailing list on just this subject. I'm going to reproduce my answer here in full:
I can tell you that I've had /several/ cases where I knew that I wanted to add new functionality to my code, and I wrote a test, ran it, and it worked!
I was absolutely astounded, and started a debug fest that lasted thirty minutes to discover that yes, the code already does what I want.
I just had a moment like that two days ago, when I expected a result of 1 and got 28, and I realized that the code was doing the expected thing, but that my understanding of the system was flawed. You can check out the post about the first surprise I got here.
In any complex system, you're going to get into situations where you can't really understand all the interactions in the system. Maybe you want to get X when A, B & C are in a certain state. Unless you've a really good memory, you probably can't recall off the top of your head all the code paths that a test may exercise. Add in inheritance, polymorphism and a host of other GoF patterns, and you can very easily have functionality in the software that you don't realize you have.
I often get to the point where I'm 90% done, and I need to add the final 40%, and I write a test in order to debug it and see what is going on. At a minimum, you can see how and why your test fails.
Of course, after saying all of that, I must confess that I sometimes write code first and tests later. But I really believe that once you're beyond the 50% point, you should precede each step with a test. Before that, it depends on the level of complexity in the system.
Well, what is my stance on this subject? The answer is that it depends :-)
When I'm just starting to write a system, I often write a little code, and then write the tests for it. I may experiment with a lot of different implementations until I decide that this or that is the correct way to go. Because of the very fluidity (is that a word? Apparently it is; hard language, English) of the project in the opening stages, I don't invest any time in writing tests that I'll end up throwing away after a couple of days. I just write the code to have something to start working on. When I'm starting to write the actual production code, I write enough until I get too nervous to continue. This usually starts the moment I write code that relies on code that I haven't tested yet.
My algorithm for it is very simple:
while (true)
{
    WriteCode();
    while (IsNervousAboutCode())
    {
        WriteTests();
    }
}
Not very sophisticated, but it works for me right now. When I get to the point where I'm building functionality on top of functionality, I revert to the good ole test-first pattern, especially since then I get real value from designing the way the code will look and act while I'm writing the tests.
I find that at the beginning of a project, writing the tests first is a problem for me, since I honestly don't know what I'll do next. I've a list of features, and I've 0 lines of code. What do I do then? I start writing the domain objects that I can see right now and write tests at the interaction points.
Something that I've found very useful is writing the stupid tests. The stupid tests are tests that exercise functionality you know is going to work: creating an object and asserting that the properties are correct, for instance. I keep catching stupid mistakes there that get past the compiler. This is one of the most common mistakes I make:
public Person(string name)
{
    this._name = Name;
}
Can you spot the bug? The tests will catch it and make sure that I won't ever break it.
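Here's what such a stupid test might look like, against a hypothetical Person class of my own invention. The buggy constructor assigns the Name property (which just reads the still-null field) instead of the name parameter, so it compiles cleanly, and only the test catches it.

```csharp
// Hypothetical Person class with the constructor written correctly;
// swapping the assignment to "this._name = Name;" compiles fine but
// leaves the field null, which is exactly what the stupid test catches.
public class Person
{
    private string _name;
    public string Name { get { return _name; } }

    public Person(string name)
    {
        this._name = name; // the bug: capitalizing "name" here still compiles
    }
}

[Test]
public void NameIsSetByConstructor()
{
    Person person = new Person("Oren");
    Assert.AreEqual("Oren", person.Name);
}
```

Thirty seconds to write, and it guards that property assignment forever.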
It seems that I'm making judgement calls based on gut feeling. So it may not be very helpful for the readers :-)