Colleagues and clients alike have been debating the nature of "agile development" lately. Everybody seems to "do agile," but for some reason it's just not working. Why is that? We're creating user stories; we're assigning story points; we're arranging work into iterations; and we're holding meetings every day which clearly have the word "Scrum" in the Outlook meeting invite. So what are we missing?
It's a fine effort, but it's missing the point entirely. Calling something "agile" doesn't make it "agile." Let's enumerate a few facts about this very common series of mistakes:
Agile, at its core, is the way in which we deliver the product(s) that we build. And for most enterprises it appears to be a frighteningly fundamental shift in how they expect the product(s) to be delivered. Because, ultimately, the biggest change is the delivery itself.
I'm talking, of course, about continuous delivery. The notion that each iteration delivers a complete and working product each and every time. Now, naturally, there will be bumps along the way. Nobody is perfect. So you can expect some iterations to fall short. And that's where the learning, adjustments, and... well... agility come into play.
But the words "complete" and "working" are key here. They represent the fundamental culture shift in the delivery of software, and that shift proves to be a significant stumbling block when a corporate culture adopts this process. "Working" isn't difficult to understand; it just means that the product should be essentially defect-free and should functionally perform as expected. But "complete"? How is that even possible?
Therein lies the fundamental change... Defining (or, rather, re-defining) the word "complete." In this sense, "complete" changes with each passing iteration. I understand that the business has in their minds an image of what the "complete" solution should look like. What it should do, how it should behave, etc. They may have even spent a lot of time and money documenting every nuance of what the "complete" solution should do. And that's awesome. We can use that.
However, we can't use only that. Let's assume that, for all intents and purposes, it's going to take a team of people a whole year to develop this solution. The business agrees, the developers agree, everybody is optimistic that this is a good estimate. (Let's assume for a moment that accurately estimating an entire year of work is even possible, which I contend it is not, but that's another issue entirely.) Then what? Should the developers just hide away in a cave for a year and eventually emerge with a complete solution that matches the specifications?
Clearly not. This would carry with it the enormous risk that the developers misunderstood the specifications, or that the specifications misrepresented the core business need, etc. In such a drastically waterfall process, that would mean wasting an entire year of expensive effort on something that didn't meet the needs. No, we want to identify issues early (and often). We want to know when something is wrong as soon as possible so we can correct it with as little impact (and cost) as possible.
After all, for all the time and money that went into that specification, I guarantee that it will have flaws. There will be vague notions open to interpretation, or logical inconsistencies and mutually exclusive requirements. It may be difficult for the business (or certain members therein) to accept, but their gold standard document is not what's going to be built. It's just a document. And it's a document that was created at the earliest possible point in the Cone of Uncertainty.
(I'm kind of reminded of a quote from Steve Jobs: "Everything around you that you call life was made up by people that were no smarter than you." If you've ever done government contracting or, God help you, military contracting, then you've encountered this before. Tons of time and effort and money were sunk into creating that gold standard specification. It was designed and approved by Top Men. So it must be flawless, right? Hardly.)
So we turn to agile. In agile we want to keep the business involved. After all, this is their project. They're spending a lot of money on this and they should be able to control it. (Not micro-manage it, hopefully. We've all seen that before as well.) If something is going in the wrong direction, they should have the reins in hand to steer the project in the right direction quickly and easily. That, naturally, is agility.
So how do we achieve that level of steering control from the business? (I know! Let's have regular meetings and call it a "Steering Committee"! It has the word "steering" in it, so it's what we need to do, right?) Well, in order to do that we have to give the business the useful information they need. We have to deliver something to them, every step of the way. Based on those deliverables they can track progress and make sure we're still going in the right direction. They can steer our course into that Cone of Uncertainty.
So... What are we delivering?
This is where that fundamental culture shift takes place. Traditionally, we're delivering project plans and status reports. We took that year of work, broke it down into chunks, broke those down into pieces, broke those down into bite-sized morsels, and so on. Then we arranged them into iterations or sprints or whatever we feel like calling them. Each one was assigned a weight, and we distributed the weights across the iterations. This gives us our burn-down, which tracks our progress. We now have numbers to show for it.
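The mechanics are simple enough. Here's a minimal sketch of that burn-down arithmetic in Python (the point totals are hypothetical, not from any real project), just to show how little it actually says:

```python
# A minimal sketch of burn-down arithmetic; the point totals are made up.
total_points = 120                             # sum of all weighted work items
completed_per_iteration = [14, 12, 15, 9, 13]  # points finished in each iteration

remaining = total_points
for i, done in enumerate(completed_per_iteration, start=1):
    remaining -= done
    print(f"Iteration {i}: {done} points done, {remaining} remaining")
```

Notice what that arithmetic measures: effort expended against a plan, and nothing about what the product can actually do.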
So... What are we delivering?
Can the business really steer us based on just those numbers? They're great numbers, don't get me wrong. (Until they lie, and charts and reports can always be made to lie.) But what information do they give the business? They tell the business if the project is delayed vs. on-schedule, and that's important. But they don't really address the uncertainty, do they? What happens when new information is discovered and adjustments have to be made? What happens when things become blocked and get shifted around and now the bite-sized morsels are out of order and dependencies aren't in place when they need to be? More to the point, what happens when specifications need to change as our collective understanding of the product grows and evolves with the business? Can we reflect that in numbers?
Perhaps, but not accurately. And not in a way that really helps with the uncertainty. We're just delivering documents in response to that original gold standard document. What we're not delivering is an actual product that the business can sink its teeth into and really, truly know what's happening.
We missed a step.
Arranging work items into iterations was a step in the right direction, but it didn't get us where we really needed to be. We also needed to define deliverables for those iterations. Business-visible deliverables. Consider these two iteration outcomes:
- In Iteration 4 we completed the data access layer for the domain object repositories.
- In Iteration 4 we completed the form for submitting an order to the system.
Perhaps both of these components had the same weighted values. They required the same amount of effort. They provided the same numeric impact on the burn-down. But which of the two can the business actually use to track the progress of the product? Not the progress of the project, the progress of the product.
The deliverable for the data access layer is important and it needs to be done, but in its entirety it doesn't give the business anything they can use to track the product. It's a horizontal slice of the overall system, and the business really needs vertical slices. They need features to be completed, not technical implementations. The deliverable with the order form completes a feature and gives the business something they can see.
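To make that contrast concrete, here's a hypothetical sketch (the names are invented for illustration, not from any particular codebase) of the two kinds of deliverable:

```python
import itertools

# Horizontal deliverable: a data access layer on its own. Necessary work,
# but at the end of the iteration there's nothing for the business to try.
class OrderRepository:
    def __init__(self):
        self._orders = {}
        self._ids = itertools.count(1)

    def save(self, order):
        order_id = next(self._ids)
        self._orders[order_id] = order
        return order_id

    def get(self, order_id):
        return self._orders.get(order_id)

# Vertical deliverable: the "submit an order" feature, end to end. It uses
# only as much of each layer as the feature needs, and it can be demonstrated.
def submit_order(repo, customer, items):
    if not items:
        raise ValueError("an order needs at least one item")
    return repo.save({"customer": customer, "items": items})

repo = OrderRepository()
order_id = submit_order(repo, "test-user", ["widget"])
print(repo.get(order_id))  # something the business can actually see working
```

The repository here is deliberately minimal: it exists only because the order feature needs it, not because a grand data access layer was scheduled first.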
Therein lies the "completeness" of the deliverable. If all we built was the data access layer, what are we delivering to the business at the end of the iteration? Nothing. It doesn't work. It doesn't
do anything. Deliverables like this keep us in a perpetual state of being "almost done" with the product. (Which can very easily lead to a perpetual state of being "almost done" with the
project, and that's a very bad place to be.)
We can't just keep pushing forward on every feature across the entire horizontal board of the product. We need to complete individual features through the vertical of the product and deliver those as discrete, complete, working products.
What is "complete"? Complete is a product that somebody can use. Complete is not a system component that the developers claim is correct. The business knows that, at the end of the year, they want a "complete" working system for their new line of business. They know that in said system they should be able to register a user, log in, browse the catalog, submit orders for products, track the statuses of orders, submit requests for help, administratively manage the products, etc., etc. That's "complete" at the end of the
year.
So what's "complete" at the end of the
iteration? First they get a system where they can log in. (Of course it's just with a single test user account, since there's no registration feature yet.) Does it work? Does it look right? Do the mechanisms for logging in perform as expected? Sweet! Next iteration...
Now they get a system where they can register and log in. Still on track? Cool, let's keep going...
Next iteration they get a system where they can register, log in, and browse the catalog. (Of course it's just with a handful of test products, since there's no product management feature yet.)
Next iteration they get a system where they can register, log in, browse the catalog, and submit an order.
And so on, as opposed to this...
First they get some architectural framing in place. We're creating the business objects which contain the logic of the system and adding our interfaces for the various peripheral components of the system. There's nothing to test yet.
Next iteration they get the architectural framing and the login functionality, so they can at least log in to the system, but there's really nothing to see yet. Some of the pages have been started, but there's no data persistence so don't expect them to work yet.
Next iteration they get the architectural framing, the login functionality, and more work on the various forms. We plan to write the data access layer in the next iteration, then stuff should start working.
Next iteration they get the architectural framing, the login functionality, the data access layer, and we're on average about 15% done with any given form. So there's still nothing that can really be expected to work.
And so on. In this case we're not really delivering anything. We have a burn-down chart and lots of numbers to assure everybody that we're still on track, sure. We're on track with the project. But how is the product doing? Does anybody know?
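What kept the first sequence deliverable was how it handled missing dependencies: anything not yet built gets stubbed so the finished slice still works end to end. Here's a hypothetical sketch of that idea (hard-coded accounts and plaintext passwords, purely for illustration):

```python
# Iteration 1 deliverable: login works end to end, with the not-yet-built
# registration feature stubbed out as a single hard-coded test account.
# (A sketch of the idea, not a real authentication system.)
TEST_ACCOUNTS = {"demo@example.com": "letmein"}  # stub until registration ships

def login(email, password):
    """Return True if the credentials match a known account."""
    return TEST_ACCOUNTS.get(email) == password

# Iteration 1: the business can log in and kick the tires...
assert login("demo@example.com", "letmein")
assert not login("demo@example.com", "wrong-password")

# Iteration 2: registration replaces the stub, and login keeps working.
def register(email, password):
    if email in TEST_ACCOUNTS:
        raise ValueError("account already exists")
    TEST_ACCOUNTS[email] = password

register("new-user@example.com", "s3cret")
assert login("new-user@example.com", "s3cret")
```

The stub is throwaway code, and that's fine. It buys us a working, testable product at the end of every iteration.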
This strictly horizontal approach leads to a host of other problems as well. Stop me if you've never heard any of these before:
- "We just discovered a problem with implementing that feature. It doesn't really fit into the architecture. We need to adjust the feature because we've already put too much effort into the architecture and it can't change now."
- "There's a cross-cutting dependency on an external system. Nearly every part of the application needs it. We've been developing all of the partial components across the whole application while we wait for that system. But we just found out that the external system is going to change. They went with a different vendor. So we need to re-write a lot of this."
- "Development was on schedule, QA went very well, but UAT isn't going so well. The application meets the requirements, but the users just aren't happy with it. Some of them are complaining that it breaks their workflow or that it doesn't meet their operational needs. Shouldn't all of that have been outlined in the requirements?"
Why did we build a rigid architecture across all of the features before we examined those features for more details? Why did we deliver partially-complete features without a required system on which they depend? Why did we wait until the very end of the project before letting anybody see or interact with the product?
Instead, we should be delivering a complete and working product (not year-complete, iteration-complete) each and every time. The architecture should be the simplest possible to support the implemented features; the features should be complete, with all dependencies accounted for (push up the priority of that decision on a vendor, or wait to develop the features that require it); and the users (or business decision-makers of some sort) should be in the system every iteration to ensure the progress of the product.
The technical team must be willing to break apart the product into discrete deliverable features, and the business team must be willing to accept those discrete completed features on a regular basis. The more regular, the better. If the technical team wants to hide away in a cave while developing the whole system, or if the business team doesn't want to be bothered by the product until the whole thing is year-complete, then who is responsible when expensive changes are made (through shifting requirements, discovery of new details, etc.) late in the project?
An iteration isn't complete without a delivery. Assess, prioritize, build, deliver. Every time.