Sunday, March 10, 2013

Details Are Important

There's a potential for a runtime error, so I should log it...
This is in a catch block, so I also have the additional context of an exception. Let's add another argument to the call and use that overload...
:::sigh:::
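For the record, the kind of call I'm sighing about looks roughly like this. This is a minimal Java sketch assuming an SLF4J-style logger; the class and method names are illustrative, not the actual code in question.

```java
// Illustrative sketch only, not the actual code in question.
// Assumes an SLF4J-style logger, where error(String) and
// error(String, Throwable) are separate overloads.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderProcessor {
    private static final Logger log = LoggerFactory.getLogger(OrderProcessor.class);

    public void process(String orderId) {
        try {
            submit(orderId); // hypothetical call that can fail at runtime
        } catch (RuntimeException ex) {
            // The detail that matters: pass the exception as an extra argument
            // so the overload that records the stack trace is the one that runs.
            log.error("Failed to process order " + orderId, ex);
        }
    }

    private void submit(String orderId) {
        // placeholder for the real work
    }
}
```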

Thursday, March 7, 2013

The Only Way to Go Fast is to Start Well

As happens from time to time, there's a cross-blog debate stirring in response to something somebody posted. It's not quite as widespread as The Software Craftsmanship Debate from a year (or two? has it been that long) ago, but it's spreading. The catalyst this time was a post by Robert Martin entitled The Start-Up Trap.

The article is somewhat emotionally charged and comes across as a bit dogmatic, which is Martin's style in general. (I'd be lying if I said I didn't exhibit a similar style.) But looking past the tone, the point comes through pretty plainly... Businesses often delude themselves into believing that they don't need to write their software well. They're not "at that stage," or they need to focus their efforts on other things.

(What's worse, many times it's not the business which actually makes this decision. Many times it's made by the developers on behalf of the business without the business actually knowing. But that's another issue for another time. The point there is about professional behavior, and Martin has a whole book on that called The Clean Coder.)

Reacting to the dogmatism, there have been various responses to his post which try to focus more on pragmatism. One such post by Greg Young caught my attention as being every bit as emotionally charged as the original. Another by Jimmy Bogard boils down to a much more level-headed statement, which I think really captured the pragmatism effectively.

And so, to reply to accusations of favoring dogmatism over pragmatism, Martin wrote a response. I feel like this settled the dust a bit, which is good. After all, we're talking about pragmatism. Do what's in the best interests of the business; that is the bottom line. There is no room for dogma on the bottom line. Or, to put it another way:
One developer's desire to be right does not outweigh the business' desire to reduce operational costs.
But in all of the back-and-forth, I think there's a very important issue to which we have only been alluding. And this issue is very much at the core of the communication (or, in many cases, the miscommunication) between all of the advocates in this debate. That is, if the question is:
When does TDD slow you down?
Then the answer is:
When your code isn't designed to be tested.
This, I think, is the great misconception around whether TDD speeds you up or slows you down. Young's article alludes to this when he describes his business venture and the application they prototyped. Martin validated this when he talked about a small one-off utility he wrote for one-time use (where he "fiddled with it" more than "designed it"). Neither of these applications was meant to be tested.
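To make that concrete, here's a small, entirely hypothetical Java sketch (none of these names come from the posts in question) of the kind of code that was never meant to be tested. The dependencies are created inline, so there's no seam where a test could substitute anything:

```java
// Hypothetical sketch of code that was never designed to be tested: the
// collaborators are constructed inline, so exercising sendOverdueNotices()
// in a test means hitting a real database and a real mail server.
import java.util.List;

public class InvoiceService {

    public void sendOverdueNotices() {
        InvoiceDatabase db = new InvoiceDatabase("jdbc:postgresql://prod/invoices");
        SmtpMailer mailer = new SmtpMailer("smtp.internal.example.com");

        for (Invoice invoice : db.findOverdue()) {
            mailer.send(invoice.customerEmail(), "Your invoice is overdue");
        }
    }

    // Stand-ins for the real infrastructure classes, just so the sketch compiles.
    static class InvoiceDatabase {
        InvoiceDatabase(String jdbcUrl) { /* would open a real connection */ }
        List<Invoice> findOverdue() { return List.of(); }
    }

    static class SmtpMailer {
        SmtpMailer(String host) { /* would connect to a real server */ }
        void send(String to, String subject) { /* would send real mail */ }
    }

    record Invoice(String customerEmail) { }
}
```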

One can argue about the business sense of whether or not an application should be designed to be tested. Young leans on an argument which I feel is very often overused, making it difficult to discern when it genuinely applies (and it may well have applied in his case; as I said, it's difficult to discern): the argument that "it worked, therefore it must have been correct."

Yes, many businesses succeed with crappy software. (Just look at Microsoft. Sorry, couldn't resist.) So, if they succeeded, how could the software have been bad? Evidence (the success) suggests that it was good, right?

This argument is just that, an argument, because to my knowledge there are no real numbers on either side of the equation. There are no double-blind studies, no usable case studies which aren't affected by tons of other variables. Did the company succeed because of the software, or did they succeed through the blood, sweat, and tears of their tireless employees despite the software? Most importantly, could they have been more successful with better software? How do you define and quantify success without anything else to compare it to?

I'm moving into a tangent. Sorry, I do that sometimes. Back to the main point... TDD only works when it's applied appropriately. It's not a magic wand which always makes all software better. This sounds a lot like the pragmatism being proposed by the various responses to Martin's original post. Which makes sense. But it's only part of the message.

Many times, when somebody says that they don't have time for TDD or that it will slow down their project, it's because their code wasn't designed to be tested, which itself is probably the deeper and more important issue here. The developers who make this argument often find themselves faced with the same difficulties when trying to write tests for their code:

  1. The code needs to be re-designed to accommodate the tests.
  2. The tests are slow to develop and not very effective, not providing any real business value in and of themselves.
Well, when faced with these difficulties, of course you're not going to want to write tests. They're making things harder and who wants that?

The problem here isn't that TDD is slowing you down, it's that your code is slowing you down. (Again, I fully recognize that sometimes you just need to fiddle something together until it works, and the various bits of what we'd call "good architecture" aren't necessarily required.) You're decrying TDD because it doesn't work for you, and you're assuming that TDD is to blame for this.

TDD is not just something to put on a checklist. It's not just a line item added to the project plan.
  • Is code written? Check.
  • Are tests written? Check.
  • I guess we're done, then.
No, this is an oversimplification. If the code is not testable, then what value are the tests going to provide? You're writing tests (or trying to, anyway), but you're not practicing TDD. So decrying something you haven't even tried is a bit misguided. (I'm reminded of religious zealots who decry science with no understanding of the scientific method. You know, speaking of dogmatism.)

One does not simply "write tests" to practice TDD. One must also write testable code. Without that, the tests will indeed appear to be a hindrance. This isn't because writing tests is slow or because TDD is all dogmatism with no pragmatism, this is because the code is already rotting. The misdirected attempt to add tests and claim that it's TDD is suffering as a result of that rot.
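As a counterpoint to the earlier sketch, here's what that same hypothetical service looks like when it's written to be testable, along with the kind of test that design makes possible. This is a sketch assuming JUnit 5; all of the names are mine, invented for illustration:

```java
// The same hypothetical service, reworked to be testable: the dependencies
// come in through the constructor as interfaces, so a test can hand it
// in-memory fakes and assert on behavior without any infrastructure.
// (JUnit 5 is assumed for the test.)
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

interface InvoiceRepository {
    List<Invoice> findOverdue();
}

interface Notifier {
    void send(String to, String subject);
}

record Invoice(String customerEmail) { }

class InvoiceService {
    private final InvoiceRepository invoices;
    private final Notifier notifier;

    InvoiceService(InvoiceRepository invoices, Notifier notifier) {
        this.invoices = invoices;
        this.notifier = notifier;
    }

    void sendOverdueNotices() {
        for (Invoice invoice : invoices.findOverdue()) {
            notifier.send(invoice.customerEmail(), "Your invoice is overdue");
        }
    }
}

class InvoiceServiceTest {

    @Test
    void notifiesEveryOverdueCustomer() {
        // Fast, deterministic fakes in place of the database and mail server.
        InvoiceRepository fakeRepository = () -> List.of(
                new Invoice("a@example.com"), new Invoice("b@example.com"));
        List<String> sentTo = new ArrayList<>();
        Notifier fakeNotifier = (to, subject) -> sentTo.add(to);

        new InvoiceService(fakeRepository, fakeNotifier).sendOverdueNotices();

        assertEquals(List.of("a@example.com", "b@example.com"), sentTo);
    }
}
```

The test runs in milliseconds and fails for exactly one reason, which is the whole point: the design gave it a seam to work with.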

TDD is not alone in this. Any good practice, when one attempts to apply it as little more than a checklist item to an already rotting codebase, will appear to be a hindrance. After all, it's hard to fix something that's broken. It's especially hard when we can't admit that it's broken or don't understand why it's broken. Of course, if we don't fix it, it will only become more broken.

This is where the communication between the opposing sides of the debate tends to break down.
"Writing tests adds work to the project, it slows us down!"
"No, writing tests speeds you up, you just don't know it!"
In between these two statements is the difference in the codebase itself. And since the code is the one and only real source of truth, we should stop arguing and go look at the code. Any time we have this argument, the best thing we can do is sit down over the code and pair program. Maybe one of you is right for a reason the other hasn't seen. Maybe the other has a point the first one missed. Discover it in the code.

Friday, March 1, 2013

There is no Agile without Delivery

Colleagues and clients alike have been debating the nature of "agile development" lately. Everybody seems to want to "do agile" but for some reason it's just not working. Why is that? We're creating user stories; we're assigning story points; we're arranging work into iterations; and we're holding meetings every day which clearly have the word "Scrum" in the Outlook meeting invite. So what are we missing?

Clearly we're just not doing agile hard enough! Right?

It's a fine effort, but it's missing the point entirely. Calling something "agile" doesn't make it "agile." Let's enumerate a few facts about this very common series of mistakes:
  1. Agile is not a meeting that you have.
  2. Agile is not a project plan that you create.
  3. Agile is not the terminology that you use.
So then what is agile? If it's not the meetings we set up in Outlook and it's not the project artifacts that we create, well, what else is there?

Agile, at its core, is the way in which we deliver the product(s) that we build. And for most enterprises it appears to be a frighteningly fundamental shift in how they expect the product(s) to be delivered. Because, ultimately, the biggest change is the delivery itself.

I'm talking, of course, about continuous delivery. The notion that each and every iteration delivers a complete and working product. Now, naturally, there will be bumps along the way. Nobody is perfect. So you can expect some iterations to fall short. And that's where the learning, adjustments, and... well... agility come into play.

But the words "complete" and "working" are key here. And they represent the fundamental culture shift in the delivery of the software, which proves to be a significant stumbling block when adopting this process within a corporate culture. "Working" isn't difficult to understand; it just means that the product should be close to defect-free and should functionally perform as expected. But "complete"? How is that even possible?

Therein lies the fundamental change... Defining (or, rather, re-defining) the word "complete." In this sense, "complete" changes with each passing iteration. I understand that the business has in their minds an image of what the "complete" solution should look like. What it should do, how it should behave, etc. They may have even spent a lot of time and money documenting every nuance of what the "complete" solution should do. And that's awesome. We can use that.

However, we can't use only that. Let's assume that, for all intents and purposes, it's going to take a team of people a whole year to develop this solution. The business agrees, the developers agree, everybody is optimistic that this is a good estimate. (Let's assume for a moment that accurately estimating an entire year of work is even possible, which I contend it is not, but that's another issue entirely.) Then what? Should the developers just hide away in a cave for a year and eventually emerge with a complete solution that matches the specifications?

Clearly not. This would carry with it the enormous risk that the developers misunderstood the specifications, or that the specifications misrepresented the core business need, and so on. In such a strictly waterfall process, the result would be an entire year of expensive effort wasted on something that didn't meet the needs. No, we want to identify issues early (and often). We want to know when something is wrong as soon as possible so we can correct it with as little impact (and cost) as possible.

After all, no matter how much time and money went into that specification, I guarantee that it will have flaws. There will be vague notions open to interpretation, or logical inconsistencies and mutually exclusive requirements. It may be difficult for the business (or certain members therein) to accept, but their gold standard document is not what's going to be built. It's just a document. And it's a document that was created at the earliest possible point in the Cone of Uncertainty.

(I'm kind of reminded of a quote from Steve Jobs: "Everything around you that you call life was made up by people that were no smarter than you." If you've ever done government contracting or, God help you, military contracting, then you've encountered this before. Tons of time and effort and money were sunk into creating that gold standard specification. It was designed and approved by Top Men. So it must be flawless, right? Hardly.)

So we turn to agile. In agile we want to keep the business involved. After all, this is their project. They're spending a lot of money on this and they should be able to control it. (Not micro-manage it, hopefully. We've all seen that before as well.) If something is going in the wrong direction, they should have the reins in hand to steer the project in the right direction quickly and easily. That, naturally, is agility.

So how do we achieve that level of steering control from the business? (I know! Let's have regular meetings and call it a "Steering Committee"! It has the word "steering" in it, so it's what we need to do, right?) Well, in order to do that we have to give the business the useful information they need. We have to deliver something to them, every step of the way. Based on those deliverables they can track progress and make sure we're still going in the right direction. They can steer our course through that Cone of Uncertainty.

So... What are we delivering?

This is where that fundamental culture shift takes place. Traditionally, we're delivering project plans and status reports. We took that year of work, broke it down into chunks, broke those down into pieces, broke those down into bite-sized morsels, and so on. Then we arranged them into iterations or sprints or whatever we felt like calling them. Each one was assigned a weight, and we distributed the weights across the iterations. This gives us our burn-down chart, which tracks our progress. Now we have numbers to show.

So... What are we delivering?

Can the business really steer us based on just those numbers? They're great numbers, don't get me wrong. (Until they lie, and charts and reports can always be made to lie.) But what information do they give the business? They tell the business if the project is delayed vs. on-schedule, and that's important. But they don't really address the uncertainty, do they? What happens when new information is discovered and adjustments have to be made? What happens when things become blocked and get shifted around and now the bite-sized morsels are out of order and dependencies aren't in place when they need to be? More to the point, what happens when specifications need to change as our collective understanding of the product grows and evolves with the business? Can we reflect that in numbers?

Perhaps, but not accurately. And not in a way that really helps with the uncertainty. We're just delivering documents in response to that original gold standard document. What we're not delivering is an actual product that the business can sink its teeth into and really, truly know what's happening.

We missed a step.

Arranging work items into iterations was a step in the right direction, but it didn't get us where we really needed to be. We also needed to define deliverables for those iterations. Business-visible deliverables. Consider these two iteration outcomes:

  1. In Iteration 4 we completed the data access layer for the domain object repositories.
  2. In Iteration 4 we completed the form for submitting an order to the system.
Perhaps both of these items carried the same weight. They required the same amount of effort. They provided the same numeric impact on the burn-down. But which of the two can the business actually use to track the progress of the product?
Not the progress of the project, the progress of the product.
The deliverable for the data access layer is important and it needs to be done, but on its own it doesn't give the business anything they can use to track the product. It's a horizontal slice of the overall system, and the business really needs things to be vertical. They need features to be completed, not technical implementations. The deliverable with the order form completes a feature and gives the business something they can see.

Therein lies the "completeness" of the deliverable. If all we built was the data access layer, what are we delivering to the business at the end of the iteration? Nothing. It doesn't work. It doesn't do anything. Deliverables like this keep us in a perpetual state of being "almost done" with the product. (Which can very easily lead to a perpetual state of being "almost done" with the project, and that's a very bad place to be.)

We can't just keep pushing forward on every feature across the entire breadth of the product. We need to complete individual features down through the vertical of the product and deliver those as discrete, complete, working products.
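To put a rough shape on that, here's a hypothetical Java sketch of a single vertical slice for a "submit an order" feature: just enough of each layer to make that one feature work end to end, rather than a fully-built horizontal layer nobody can click on. All of the names here are illustrative, not taken from any real system.

```java
// A hypothetical "submit an order" vertical slice: only the pieces this one
// feature needs, wired together so the business can actually exercise it.
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

record Order(String id, String productCode, int quantity) { }

// Just the persistence this feature needs -- not repositories for every domain object.
interface OrderRepository {
    void save(Order order);
}

class InMemoryOrderRepository implements OrderRepository {
    private final List<Order> orders = new ArrayList<>();

    @Override
    public void save(Order order) { orders.add(order); }

    public List<Order> all() { return List.copyOf(orders); }
}

// The piece behind the order form -- the part the business can see and try.
class SubmitOrderHandler {
    private final OrderRepository repository;

    SubmitOrderHandler(OrderRepository repository) {
        this.repository = repository;
    }

    public Order submit(String productCode, int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("Quantity must be positive");
        }
        Order order = new Order(UUID.randomUUID().toString(), productCode, quantity);
        repository.save(order);
        return order;
    }
}
```

The point of the sketch is the shape: one feature cut through all the layers it needs, rather than every layer built out across all features.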

What is "complete"? Complete is a product that somebody can use. Complete is not a system component that the developers claim is correct. The business knows that, at the end of the year, they want a "complete" working system for their new line of business. They know that in said system they should be able to register a user, log in, browse the catalog, submit orders for products, track the statuses of orders, submit requests for help, administratively manage the products, etc., etc. That's "complete" at the end of the year.

So what's "complete" at the end of the iteration? First they get a system where they can log in. (Of course it's just with a single test user account, since there's no registration feature yet.) Does it work? Does it look right? Do the mechanisms for logging in perform as expected? Sweet! Next iteration...

Now they get a system where they can register and log in. Still on track? Cool, let's keep going...

Next iteration they get a system where they can register, log in, and browse the catalog. (Of course it's just with a handful of test products, since there's no product management feature yet.)

Next iteration they get a system where they can register, log in, browse the catalog, and submit an order.

And so on, as opposed to this...

First they get some architectural framing in place. We're creating the business objects which contain the logic of the system and adding our interfaces for the various peripheral components of the system. There's nothing to test yet.

Next iteration they get the architectural framing and the login functionality, so they can at least log in to the system, but there's really nothing to see yet. Some of the pages have been started, but there's no data persistence so don't expect them to work yet.

Next iteration they get the architectural framing, the login functionality, and more work on the various forms. We plan to write the data access layer in the next iteration, then stuff should start working.

Next iteration they get the architectural framing, the login functionality, the data access layer, and we're on average about 15% done with any given form. So there's still nothing that can really be expected to work.

And so on. In this case we're not really delivering anything. We have a burn-down chart and lots of numbers to assure everybody that we're still on track, sure. We're on track with the project. But how is the product doing? Does anybody know?

This strictly horizontal approach leads to a host of other problems as well. Stop me if you've never heard any of these before:

  • "We just discovered a problem with implementing that feature. It doesn't really fit into the architecture. We need to adjust the feature because we've already put too much effort into the architecture and it can't change now."
  • "There's a cross-cutting dependency on an external system. Nearly every part of the application needs it. We've been developing all of the partial components across the whole application while we wait for that system. But we just found out that the external system is going to change. They went with a different vendor. So we need to re-write a lot of this."
  • "Development was on schedule, QA went very well, but UAT isn't going so well. The application meets the requirements, but the users just aren't happy with it. Some of them are complaining that it breaks their workflow or that it doesn't meet their operational needs. Shouldn't all of that have been outlined in the requirements?"
Why did we build a rigid architecture across all of the features before we examined those features for more details? Why did we deliver partially-complete features without a required system on which they depend? Why did we wait until the very end of the project before letting anybody see or interact with the product?

Instead, we should be delivering a complete and working product (not year-complete, iteration-complete) each and every time. The architecture should be the simplest possible to support the implemented features; the features should be complete, with all dependencies accounted for (push up the priority of that vendor decision, or wait to develop the features that require it); and the users (or business decision-makers of some sort) should be in the system every iteration to ensure the progress of the product.

The technical team must be willing to break apart the product into discrete deliverable features, and the business team must be willing to accept those discrete completed features on a regular basis. The more regular, the better. If the technical team wants to hide away in a cave while developing the whole system, or if the business team doesn't want to be bothered by the product until the whole thing is year-complete, then who is responsible when expensive changes are made (through shifting requirements, the discovery of new details, and so on) late in the project?

An iteration isn't complete without a delivery. Assess, prioritize, build, deliver. Every time.