The Algebra of TDD

I hated algebra in high school. It's not that I wasn't good at it; I just didn't like it. Encyclopedia Brown summed up my attitude about math back then (I don't remember which story): doing math is like swatting flies -- not so much hard as tedious.

That's how I felt in high school. But as time passed, my attitude toward math began to change. The turning point came in college when, as an encore to taking formal logic, I decided to enroll in the follow-on course on symbolic logic. Although I didn't know it at the time, I was essentially learning Boolean algebra, taking the first steps toward my eventual second major, laying the foundation for my career as a programmer -- and awakening a love for the logic and beauty of mathematics that continues to this day.

So, while it would seem ironic to my high school algebra teachers, it's really not too surprising that I recently found myself using algebra to explain how I approach Test Driven Development (TDD).

Pairing with those new to TDD, I am often asked: "Why did you choose to write the test like that?" Until recently, I hadn't been able to answer this to my (or their) satisfaction. I would attribute it to experience, or profess that it just seemed right, and generally sound convincingly unconvincing.

But I am fortunate to work with people who aren't content with such evasive answers, and while trying to express my thinking, I suddenly saw the test and the code as a pair of algebraic equations that are in turn equal to each other. Running with the metaphor, I ended up with the following formulae:

x = y
y = a + g + R
x = i + n + A

Where:

x is the production code
y is the test
a is the asserts in the test
g is the glue code in the test
i is the interface of the production code
n is the new implementation
R is the real code (in fixture or production, mocked or otherwise)
A is other APIs or methods

And now I will use them to describe Test Driven Development:

The typical TDD approach is to start with a or g, include whatever R you need, and, having thus crafted y, solve for x (since x is now defined by y, all that's left is to go code it). For example, let's say I am writing a class to do math operations for a calculator. A common starting point is to write an assert:

(this is a)
Assert.AreEqual(42, obj.Result);

Then add the glue:

(this is g)
public void NumbersCanBeAddedTogether() {
    Calculator obj = new Calculator();
    obj.Enter(40);
    obj.Plus();
    obj.Enter(2);
    obj.Total();
    Assert.AreEqual(42, obj.Result);
}

(We have no Mocks or other classes here, so we can say R = 0)
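(For contrast, a non-zero R might look like the sketch below: suppose the Calculator wrote each total to a display, and the fixture supplied a hand-rolled fake. The IDisplay interface and FakeDisplay stub are entirely hypothetical, invented here for illustration.)

// Hypothetical collaborator -- not part of the example above.
public interface IDisplay {
    void Show(int value);
}

// A hand-rolled fake standing in for real display hardware.
// Wiring this into the test fixture is what would make R non-zero.
public class FakeDisplay : IDisplay {
    public int LastShown { get; private set; }
    public void Show(int value) { LastShown = value; }
}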

Now we substitute our test for y in x = y and solve for x (i.e. go implement Enter, Plus, and Total).
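(Solving for x here might yield something like the sketch below. The test pins down the interface i; the internals -- the two fields and how they combine -- are just one guess at n.)

public class Calculator {
    private int accumulator;  // left operand, captured when Plus() is pressed
    private int current;      // the number most recently entered

    public int Result { get; private set; }

    public void Enter(int number) {
        current = number;
    }

    public void Plus() {
        // Remember the left operand; the next Enter() supplies the right one.
        accumulator = current;
    }

    public void Total() {
        Result = accumulator + current;
    }
}

Just enough to turn the test green -- the point being that once y is written, x follows.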

This is where the new TDDer starts peppering me with questions:

"How'd you know to assert against result?" "Why'd you use Enter() Plus() and Total() instead of result = obj.Add(n, n2)?" "Why call it Total() and not Equals()?"

And so forth.
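(For the curious: the road not taken in that second question would give a test shaped more like the sketch below -- a perfectly reasonable, stateless design, just not one that mirrors pressing calculator buttons. The Add signature is my guess at that alternative.)

public void NumbersCanBeAddedTogether() {
    Calculator obj = new Calculator();
    int result = obj.Add(40, 2);  // one call replaces Enter/Plus/Enter/Total
    Assert.AreEqual(42, result);
}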

Or, in the metaphor: "How'd you know what to put in for a, g, and R?"

What I have realized (and what led me to using algebra as an explanation) is that I often don't solve for x. Instead, I assume (at least some elements of) x and solve for y -- even though I still write y first.

There are certainly times when I have no idea what I want my code to look like, and so I write (and rewrite) asserts trying to figure out what my class's interface will look like. But more often, I already have a pretty good idea of how I want my interface to look, or what APIs I want to use, and I am instead trying to craft a test that matches this mental design and proves it out in code.

So while it looks like I am defining y and solving for x (i.e. being deductive), I actually approach the problem inductively (assuming x and solving for y), like this:

(me in my head): "Hmmm, I think I'd like my Calculator to mirror how a calculator's actually used. People generally input a number, then the operator, then the second number and ask for their result, so let's start there..."

And out comes the value for "a".

Then comes: "So now I want to go back and mirror the rest of the calculator behavior... Oh, I know..."

And voila! I have expressed "g".

Next, me says: "You know, totaling is generally done by hitting Equals, but that word has a special meaning in this programming language, so instead of calling that method Equals(), let's call it Total()."
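(The clash is real if we assume the language is C#: every class inherits Equals(object) from System.Object, so a Calculator.Equals() would sit confusingly alongside it. A quick sketch:)

public class Calculator {
    // Compiles fine, but callers now see both this Equals() and the
    // inherited System.Object.Equals(object) -- easy to misread.
    public void Equals() { /* total the pending operation */ }
}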

and so forth.

And while my logic professor might suggest that the above looks a lot like an "inductive proof," my XP colleagues are probably either remonstrating with me for doing too much up-front design or for not communicating all of this to my pair as I go. But the fact is, until recently I didn't even realize I was doing it. I just sorta saw x and solved for y.

So, by putting this metaphor into words, I now realize that I use both deductive and inductive approaches when working test-first. I have also begun to question whether we put too much emphasis on the deductive approach (defining y to solve for x) when teaching TDD to others. But -- just as induction comes after deduction in my formal logic book -- perhaps it's better to learn to define y and solve for x first, so that we don't forget to balance the overall equation: solving only x = i + n + A + R and skipping x = y altogether is something that Encyclopedia Brown, my algebra teachers, my logic professor, and my XP colleagues would all agree is a bad idea.