Thursday, March 7, 2013

The Only Way to Go Fast is to Start Well

As happens from time to time, there's a cross-blog debate stirring in response to something somebody posted. It's not quite as widespread as The Software Craftsmanship Debate from a year (or two? has it been that long?) ago, but it's spreading. The catalyst this time was a post by Robert Martin entitled The Start-Up Trap.

The article is somewhat emotionally charged and comes across as being a bit dogmatic, which is Martin's style in general. (I'd be lying if I said I didn't exhibit a similar style.) But looking past the tone, one can find the point pretty plainly... Businesses often delude themselves into believing that they don't need to write their software well. They're not "at that stage" or they need to focus their efforts on other things.

(What's worse, many times it's not the business that actually makes this decision. Many times it's made by the developers on behalf of the business, without the business actually knowing. But that's another issue for another time. The point there is about professional behavior, and Martin has a whole book on that called The Clean Coder.)

Reacting to the dogmatism, various people have written responses to his post that try to focus more on pragmatism. One such post, by Greg Young, caught my attention as being equally emotionally charged. Another, by Jimmy Bogard, boils down to a much more level-headed statement, which I think captures the pragmatism effectively.

And so, to reply to accusations of favoring dogmatism over pragmatism, Martin wrote a response. I feel like this let the dust settle a bit, which is good. After all, we're talking about pragmatism. Do what's in the best interests of the business; that is the bottom line. There is no room for dogma on the bottom line. Or, to put it another way:
One developer's desire to be right does not outweigh the business's desire to reduce operational costs.
But in all of the back-and-forth, I think there's a very important issue to which we have only been alluding. And this issue is very much at the core of the communication (or, in many cases, the miscommunication) between all of the advocates on this debate. That is, if the question is:
When does TDD slow you down?
Then the answer is:
When your code isn't designed to be tested.
This, I think, is the great misconception around whether TDD speeds you up or slows you down. Young's article alludes to this when he describes his business venture and the application they prototyped. Martin validated this when he talked about a small one-off utility he wrote for one-time use (where he "fiddled with it" more than "designed it"). Neither of these applications was meant to be tested.

One can argue about the business sense of whether or not an application should be designed to be tested. Young uses an argument that I feel is very often overused, which makes it difficult to discern when it genuinely applies (and it may well apply in his case; as I said, it's difficult to discern): the argument that "it worked, therefore it must have been correct."

Yes, many businesses succeed with crappy software. (Just look at Microsoft. Sorry, couldn't resist.) So, if they succeeded, how could the software have been bad? Evidence (the success) suggests that it was good, right?

This argument is just that, an argument, because to my knowledge there are no real numbers on either side of the equation. There are no double-blind studies, no usable case studies which aren't affected by tons of other variables. Did the company succeed because of the software, or did they succeed through the blood, sweat, and tears of their tireless employees despite the software? Most importantly, could they have been more successful with better software? How do you define and quantify success without anything else to compare it to?

I'm moving into a tangent. Sorry, I do that sometimes. Back to the main point... TDD only works when it's applied appropriately. It's not a magic wand which always makes all software better. This sounds a lot like the pragmatism being proposed by the various responses to Martin's original post. Which makes sense. But it's only part of the message.

Many times, when somebody says that they don't have time for TDD or that it will slow down their project, it's because their code wasn't designed to be tested, which itself is probably the deeper and more important issue here. The developers who make this argument often find themselves faced with the same difficulties when trying to write tests for their code:

  1. The code needs to be re-designed to accommodate the tests.
  2. The tests are slow to develop and not very effective, not providing any real business value in and of themselves.
Well, when faced with these difficulties, of course you're not going to want to write tests. They're making things harder, and who wants that?
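
To make that first difficulty concrete, here's a minimal sketch of the kind of code I mean. The names (OrderProcessor, Database, and so on) are invented for illustration, not taken from anyone's actual project:

    import java.util.Date;

    // Hypothetical supporting types, stubbed out for this sketch.
    class Database {
        Database(String connectionString) { /* imagine: opens a real connection */ }
        void save(Order order) { /* imagine: writes to the real database */ }
    }

    class Order {
        private final Date expiry;
        Order(Date expiry) { this.expiry = expiry; }
        Date getExpiry() { return expiry; }
    }

    // Written with no thought to testing: every collaborator is
    // constructed right inside the method.
    public class OrderProcessor {
        public void process(Order order) {
            // The concrete Database is built right here, so a test
            // can't substitute a fake for it.
            Database db = new Database("prod-server:5432");

            // "Now" is read from the global clock, so a test of the
            // expiry rule can't control what time it is.
            if (order.getExpiry().before(new Date())) {
                throw new IllegalStateException("Order expired");
            }

            db.save(order);
        }
    }

A test of the expiry rule here has to stand up a real database and race the wall clock, which is exactly difficulties 1 and 2 above.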

The problem here isn't that TDD is slowing you down, it's that your code is slowing you down. (Again, I fully recognize that sometimes you just need to fiddle something together until it works, and the various bits of what we'd call "good architecture" aren't necessarily required.) You're decrying TDD because it doesn't work for you, and you're assuming that TDD is to blame for this.

TDD is not just something to put on a checklist. It's not just a line item added to the project plan.
  • Is code written? Check.
  • Are tests written? Check.
  • I guess we're done, then.
No, this is an oversimplification. If the code is not testable, then what value are the tests going to provide? You're writing tests (or trying to, anyway), but you're not practicing TDD. So to decry something you haven't even tried is a bit misguided. (I'm reminded of religious zealots who decry science with no understanding of the scientific method. You know, speaking of dogmatism.)

One does not simply "write tests" to practice TDD. One must also write testable code. Without that, the tests will indeed appear to be a hindrance. This isn't because writing tests is slow or because TDD is all dogmatism with no pragmatism; it's because the code is already rotting. The misdirected attempt to add tests and claim that it's TDD suffers as a result of that rot.
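
For contrast, here's what the same hypothetical processor might look like once it's designed to be tested: the collaborators are passed in rather than constructed inline, so a test can substitute in-memory fakes. (This reuses the invented Order class from the earlier sketch, and it's one possible shape, not the shape.)

    import java.util.ArrayList;
    import java.util.Date;
    import java.util.List;

    // Abstractions over the two things the earlier version hard-coded.
    interface OrderStore { void save(Order order); }
    interface TimeSource { Date now(); }

    public class TestableOrderProcessor {
        private final OrderStore store;
        private final TimeSource time;

        public TestableOrderProcessor(OrderStore store, TimeSource time) {
            this.store = store;
            this.time = time;
        }

        public void process(Order order) {
            // "Now" comes from the injected time source, so a test can
            // pin it to any fixed instant.
            if (order.getExpiry().before(time.now())) {
                throw new IllegalStateException("Order expired");
            }
            store.save(order);
        }
    }

    // The test runs entirely in memory: no database, no wall clock.
    // (Plain asserts stand in for a test framework like JUnit.)
    class TestableOrderProcessorTest {
        public static void main(String[] args) {
            final List<Order> saved = new ArrayList<Order>();
            OrderStore fakeStore = new OrderStore() {
                public void save(Order order) { saved.add(order); }
            };
            TimeSource fixedTime = new TimeSource() {
                public Date now() { return new Date(1000L); }
            };

            new TestableOrderProcessor(fakeStore, fixedTime)
                .process(new Order(new Date(2000L))); // expires after "now"

            if (saved.size() != 1) {
                throw new AssertionError("order should have been saved");
            }
        }
    }

Notice that the redesign isn't really about the tests at all. The same seams that let the fakes in are the seams that would let you swap the database or change the expiry rule later. That's the sense in which testable code and well-designed code turn out to be the same thing.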

TDD is not alone in this. Any good practice, when one attempts to apply it as little more than a checklist item to an already rotting codebase, will appear to be a hindrance. After all, it's hard to fix something that's broken. It's especially hard when we can't admit that it's broken or don't understand why it's broken. Of course, if we don't fix it, it will only become more broken.

This is where the communication between the opposing sides of the debate tends to break down.
"Writing tests adds work to the project, it slows us down!"
"No, writing tests speeds you up, you just don't know it!"
In between these two statements is the difference in the codebase itself. And since the code is the one and only real source of truth, we should stop arguing and go look at the code. Any time we have this argument, the best thing we can do is sit down over the code and pair program. Maybe one of you is right for a reason the other can't yet see. Maybe the other has a point the first one missed. Discover it in the code.
