Monday, July 2, 2012

No Time To Test?

I recently had a conversation with a colleague and we briefly touched on the subject of TDD. It wasn't the focus of the conversation at all, so I didn't want to derail things when he said this, but I can't get past what was said...
Clients rarely want to pay for the extra time to have tests.
I'm not at all shocked by the statement. Quite the contrary, I'd be pleasantly surprised to hear anything else. But how common the statement is doesn't by itself justify it. If our clients are refusing to pay more for test suites in the software that we write, then it's a failure on our part to properly address the issue.

We all know how it goes. A client has a project they want done; we put together an estimate with a rough breakdown of the work; and inevitably we need to cut a few corners to get the numbers to line up with expectations. What's the first thing to get dropped? "Well, our users can test the software, so we don't need all this testing in the project budget."

When a statement like that is uttered, this is what a developer hears:
We don't need you to prove that your software works. You can just claim that it does.
Seriously, that's exactly what I hear. But of course it never goes down like that. What does end up happening? We all know the story. Bugs are filed, users get impatient, proper QA practices aren't followed, and so on and so on. The time adds up. We spend so much time chasing the "bugs" that we lose the time we needed to build and polish the product itself. The end result is something that barely limps over the finish line, propped up by a team of otherwise skilled and respected people who can maintain the thin veneer of polish just long enough, and it's already in dire need of clean-up work on day one.

So tell me again how we didn't have time to validate our work.

Recently I've been committing a lot of my free time to a side project. It's a small business (which, for the record, in no way competes with my employer) where I've been brought in to help with technology infrastructure and, for the most part, write some software to help streamline operations and reduce internal costs. Since I essentially have carte blanche to define everything about this software, I'm disciplining myself to stick with one central concept... TDD.

I'm not the best at it, and I'll readily admit to that. My unit tests aren't very well-defined, and my code coverage isn't as good as it could be yet. But I'm getting better. (And that's partly the idea, really... to improve my skills in this area so that I can more effectively use them throughout my professional career.) However, even with my comparatively amateur unit testing (and integration testing) skills, I've put together a pretty comprehensive suite on my first pass at the codebase.

And you know what? It honestly makes development go faster.

Sure, I have to put time and effort into the tests. But what's the outcome of that effort? The code itself becomes rock-solid. At any point I can validate the entire domain at the click of a button. Even with full integration tests that actually hit databases and external services, the whole suite takes only a few minutes to run. (Most of that time is spent setting up and tearing down the database for each atomic test.)
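
To make that concrete, here's roughly the shape of one of those atomic integration tests, sketched with NUnit-style attributes. The TestDatabase, EventRepository, Calendar, and Event names are simplified stand-ins for the real infrastructure and domain types; what matters is the per-test set-up and tear-down.

using NUnit.Framework;

[TestFixture]
public class EventRepositoryIntegrationTests
{
    // TestDatabase, EventRepository, Calendar, and Event are simplified
    // stand-ins for the real infrastructure and domain types.
    private TestDatabase _database;
    private EventRepository _repository;

    [SetUp]
    public void SetUp()
    {
        // Each atomic test gets its own freshly built database...
        _database = TestDatabase.CreateClean();
        _repository = new EventRepository(_database.ConnectionString);
    }

    [TearDown]
    public void TearDown()
    {
        // ...and throws it away afterward, so no test depends on another.
        _database.Drop();
    }

    [Test]
    public void SavingAnEventShouldPersistItsParentCalendar()
    {
        var calendar = new Calendar("Community Workshops");
        var saved = _repository.Save(new Event(calendar, "Intro Session"));

        var reloaded = _repository.GetById(saved.Id);
        Assert.That(reloaded.Calendar.Name, Is.EqualTo("Community Workshops"));
    }
}

Rebuilding the database for every single test is where most of those few minutes go, and it's worth every second: each test starts from a known state, so a failure points at the code, not at leftover data.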

What do I need to work on in the code right now? Run the tests and see what fails. What was I doing when I got interrupted yesterday and couldn't get back to it until today? Run the tests and see what fails. Everything passes? Grab the next task/story/etc. The time savings in context switching alone are immense.

Then there are the time savings in adding new features. Once I have my tests, coding the new feature is a breeze. I've already defined how it's supposed to work. If I find that it's not going to work like that, I go back to the tests and adjust. But the point is that once it's done, it's done. So often in our field we joke about the definition of the word "done." Does it mean the developers are handing it off to QA? That QA is handing it off to UAT? That the business has signed off? With TDD it's simple: all green tests = done. I finish coding a feature, press a button to run the full end-to-end automated suite of integration tests, take a break for a couple of minutes, and it's done.

And what's more, the tests are slowly becoming a specification document in and of themselves. Test method names like AnEventShouldHaveAParentCalendar() and AnEventShouldHoldZeroOrMoreSessions() and AnEventShouldHaveNoOverlappingSessions() sound a lot like requirements to me. And I keep adding more of these requirements. Once in a while, when developing in the domain, I'll realize that I've made an assumption and that I need to write another test to capture that assumption. How often does that happen in "real projects"? (Sure, you "document the assumption." But where does that go? What effect does that have? I wrote a test for it. If the test continues to pass, the assumption continues to be true. We'll know the minute it becomes false. It's baked into the system.)
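
Here's a sketch of what that specification-by-test looks like (simplified again, with Event, Calendar, and Session standing in for the real domain types). The method names read straight out of a requirements list:

using System;
using NUnit.Framework;

[TestFixture]
public class EventSpecification
{
    // Event, Calendar, and Session are simplified stand-ins for the real
    // domain types; the point is that the test names are the requirements.

    [Test]
    public void AnEventShouldHaveAParentCalendar()
    {
        var anEvent = new Event(new Calendar("Workshops"), "Intro Session");

        Assert.That(anEvent.Calendar, Is.Not.Null);
    }

    [Test]
    public void AnEventShouldHoldZeroOrMoreSessions()
    {
        var anEvent = new Event(new Calendar("Workshops"), "Intro Session");
        Assert.That(anEvent.Sessions, Is.Empty);

        anEvent.AddSession(new Session(
            new DateTime(2012, 7, 9, 9, 0, 0),
            new DateTime(2012, 7, 9, 10, 0, 0)));
        Assert.That(anEvent.Sessions.Count, Is.EqualTo(1));
    }

    [Test]
    public void AnEventShouldHaveNoOverlappingSessions()
    {
        var anEvent = new Event(new Calendar("Workshops"), "Intro Session");
        anEvent.AddSession(new Session(
            new DateTime(2012, 7, 9, 9, 0, 0),
            new DateTime(2012, 7, 9, 10, 0, 0)));

        // A session that overlaps an existing one should be rejected.
        Assert.Throws<InvalidOperationException>(() =>
            anEvent.AddSession(new Session(
                new DateTime(2012, 7, 9, 9, 30, 0),
                new DateTime(2012, 7, 9, 10, 30, 0))));
    }
}

And because these run with the rest of the suite, this "document" can never quietly drift out of date the way a written spec does.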

Think about it in terms of other professions... Does the aircraft manufacturer not have time to test the airplane in which you're flying? Does the auto mechanic not have time to test the brakes he just fixed? Or, even closer to home for business software, does your accounting department not have time to use double-entry bookkeeping? Are you really paying those accountants to do the same work twice? Yes, yes you are. And for a very good reason. That same reason applies here.

I've been spouting the rhetoric for years, because I've known in my heart of hearts that it must be true. Now on this side project I'm validating my faith. No time to test? Honey, I don't have time not to test. And neither do you if you care at all about the stability and success of your software.
