Sunday, July 22, 2012

Pick Any Two Constraints

Most of us are familiar with an industry adage:
Faster, cheaper, better... Pick any two.
The idea is simple. You want something soon (faster), you don't want to spend a lot (cheaper), and you want it to be a quality product (better). However, you're not going to get everything you want. Reality doesn't work that way. So you choose the two you want most and settle for those.

  • If you want your software delivered quickly and to be of high quality, it won't be cheap.
  • If you want your software cheap and of high quality, it won't be delivered soon.
  • If you want your software delivered quickly and cheaply, it won't be of high quality.
I've encountered plenty of people in the industry who are familiar with this, but for some reason it never sinks in with customers. Maybe project sponsors are just so accustomed to getting everything they want that they can't imagine having to pick something to sacrifice? That's a cop-out answer, so no. Or perhaps we're not communicating the concept in a way that they truly understand? This is a much more productive explanation, because it puts the onus on us to better facilitate that communication, which is our professional responsibility.

So instead of choosing which two you want, think of it like this:
Schedule, budget, quality... Choose the two by which the project is constrained.
Unless the project leadership has a complete and unerring understanding of the entire project (which I've never seen happen, so let's just assume for a moment that it's not realistic), the project will require some "wiggle room." In this triangle of constraints, it's going to move along an axis somewhere:


The good news is, you get to decide where:


So which two of the three axes will be fixed for the duration of the project, and which one will be allowed to fluctuate? That's for the business to decide. The most important part is that this be communicated explicitly and without confusion before work begins. It's easy for assumptions to be made when communication isn't explicit, and attempting to scope and document those assumptions is a fool's errand.

Ah, but there's a catch. (Isn't there always?) One of these three axes is a lot more difficult to quantify than the other two. How do we, as an industry, measure software quality? What metrics do we employ? It's a bit of a difficult question, isn't it? Now take it a step further and try to answer that question in a way that a non-developer can understand. We can talk about unit tests and code coverage and continuous integration all day long, but a project sponsor isn't going to hear us. "It's all Greek to him," and MEGO ("my eyes glaze over") syndrome sets in quickly.

This is a fair response. After all, how much attentiveness and interest can the developer maintain if the project sponsor steers the discussion toward budget forecasts and marketing predictions? I can assure you that the moment somebody breaks out spreadsheets to talk about finances, my attention dissolves entirely. (Some of you may enjoy such topics, of course. But I'm sure there are other topics that thoroughly glaze your eyes and put you to sleep. So the point remains.)

Ay, there's the rub. And I wish I had an answer right now, but I don't. Not yet anyway. So the challenge is ours to defend our axis of quality. However, the decision itself still belongs to the business. Just because we can't quantify it doesn't mean we should sweep it under the rug.
If you constrain the project by schedule and budget, quality will slip.
Plain and simple. The stakeholder(s) will ask us to quantify that, and right now we can't. (If you can, please speak up. The industry needs to know what you have to say.) We can quantify schedule, and we can quantify budget. Those are pretty clear and understandable metrics. And so (invariably, in my career so far) the business will choose those two metrics as the fixed axes. Those are what they understand, so those are what they choose to control.

The quality axis isn't in their scope of control, so choosing that one would force them to rely very heavily on you to control it. It would leave them with no guarantee that they're controlling more than one axis, which is not a comforting position at all. So, again, the onus is on you to communicate and "sell" this idea. At least for now.

The critical piece, however, is that the idea is at least communicated. The business has every reason and every right to sacrifice quality in order to meet schedule and budget. It's unfortunate, and it leaves a sour taste in our mouths as engineers, but it's their decision and not ours. Don't make the decision for them. Don't speak only in terms of two constraints.

As professionals, it's not only our responsibility to craft a quality product; even more importantly, it's our responsibility to communicate the risks to that quality when discussing the product with the business. And, conversely, it's the business's responsibility to understand the reality of what we're communicating (provided we can communicate it effectively). Or, to put it another way... They won't be mad if you set the expectation. They'll be mad if you make their decision for them.

Saturday, July 14, 2012

Observations of an Offshore Team

There's certainly no shortage of offshore teams in software development. The stereotype of an Indian sweatshop of programmers and the garbage code they produce is well known throughout the industry. And I'd be lying if I said I hadn't seen that stereotype in action again and again. But I can't let that sour my perception of all offshoring in general. Particularly because my company now has an office in the Philippines for providing low-cost development work.

First of all, the Philippines is a far cry from India. The country and the people are vastly different in every way. I'm not going to compare apples and oranges, and besides that's outside the scope of these observations anyway. I was in the Philippines, not in India. So I can't speak to the latter. But hopefully what I've learned in working with the former can be applied somewhat universally.

The difference between working remotely with an offshore team and actually going there is profound. The perceptions, from both sides of the proverbial fence, change entirely when sitting in the same room. And I can't recommend the experience enough for any company which partners with an offshore team. Naturally, it would be cost-prohibitive to always work face-to-face in this situation. But sending at least one or two representatives to work directly with the team makes all the difference in the world.

If any company or team thinks they can simply toss requirements over a fence and have production software tossed back at a specified date, they're deluding themselves. It's an absurd notion to begin with, and years of industry headaches have confirmed it. Just being able to have casual conversations over lunch or dinner about the business domain and the intent of the project can save untold costs in what would otherwise have been software that didn't capture the actual business intent. Understanding the why is vastly more important to software development than meeting a set of defined specifications.

There's also the psychological aspect to it. A team is more of a team when they know each other. Prior to this trip, everyone in the Manila office was a name and sometimes a voice, nothing more. An IM window, an email, a conference call... That was the extent of it. Now they're not just names on a form somewhere. Now I dare say I can call them friends. Or at the very least colleagues. A forced workplace joke over a conference call pales in comparison to a Friday night out to grab a drink or two. The social aspect of the team helps us to communicate and to understand each other more readily. The "fence" is torn down.

(As an aside, I regret to point out that I wasn't able to socialize nearly as much as I had hoped. So the fence is still partially there. Oddly enough, I ended up socializing more with members of adjacent teams than with my own, which is a bit unfortunate. But even going out to lunch with my team from time to time was still markedly better than IMs and emails.)

But even just the perception from "our side of the fence" entirely changes when you go there and meet the people on the other end of the emails. It's one thing to memorize someone's name, but it's another thing entirely to shake their hand and have a conversation with them. It's one thing to be numerically aware of the time zone difference, but it's another thing entirely to work the same shift as them and experience what it's like to have time-shifted hours. (After all, it's easy for us to label a rowdy background din on a conference call as being unprofessional. But honestly, what do you think an office of software developers who are all friends with each other is going to be like after 9:00 PM?)

Then there's the change in perspective on the little things around the office. What does the office look like? How is it laid out? What sort of equipment are they using? Which of our assumptions about our own "normal" office environments actually translate to that office, and which don't? You might be surprised.

For example, there's a labor/tools asymmetry between office environments in the US and many office environments abroad. In fact, this asymmetry is specifically why companies in the US delegate work abroad. To put it simply, and hopefully not disrespectfully... people are cheap. In the US, if something is slowing down a team and it can be solved by purchasing some technology, the choice is clear. Don't let anything get in the way of the people. Tools are disposable. But in many offices abroad, tools are expensive (more expensive than they are here, actually, due to varying market forces), while people are easy to find and easy to replace.

It's harsh, and it's distasteful to me, but it's a reality that pervades the industry. I like to think that my company is different, and we're aware that it's going to take some time to convince our new team members of that fact. As I've said many times before, I hate referring to people as "resources." People are individuals. They are contributors. They are team members. They are not "resources." And actually meeting your team and getting to know them helps a lot to make that distinction.

I really like our team. And I truly believe that there's real development talent there just waiting to be nurtured. I don't want to simply delegate tasks to them. I want to work with them. And that difference, I believe, will make all the difference in the resulting software. Maybe not on the first project, maybe not on the second, but in the long run.

Saturday, July 7, 2012

The 90/9 Rule

We've all heard of the 80/20 rule, right? In my experience, it's one of the most commonly cited principles in the software development industry, and there's always a subtly different flavor to it each time it's cited. More often than not, it seems to be invoked in reference to "getting bang for your buck" on a software project.

And getting "bang for your buck" is a rather important concept in business software. Purists like myself are forever making compromises for the almighty dollar. And the reason for that is simple... We don't own the software we write. The person who writes the checks makes the rules, plain and simple. And that person is very interested in quantifying the value of what they get for the money they spend.

To help illustrate "bang for your buck" in the world of software, I've come up with a rule of my own. I call it "The 90/9 rule." And it's essentially like Zeno's Paradox of Achilles and the Tortoise applied to software development. Think of it as such:

  • For a reasonable expense, you can achieve 90% of your intended goal.
  • To achieve 90% of the remainder (thus, 99% total), double that expense.
  • To achieve 90% of the new remainder (thus, 99.9% total), double that expense again.
  • ad infinitum...
A key thing to notice is that it never reaches 100%. No matter how much expense one incurs, one will always have a compromise somewhere. The only question one needs to ask is where one is willing to draw that line between expense and intended results.
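To put rough numbers on it, here's a minimal C# sketch. It's purely illustrative: the 100,000 baseline is a made-up figure, and the assumption that each step costs double the previous step is my reading of the rule, not a claim from the post.

    using System;

    class NinetyNineRule
    {
        static void Main()
        {
            // Hypothetical baseline: the "reasonable expense" that buys the first 90%.
            double incrementalExpense = 100_000;
            double cumulativeExpense = 0;
            double coverage = 0;

            for (int step = 1; step <= 5; step++)
            {
                // Each step captures 90% of whatever remains un-achieved...
                coverage += (1 - coverage) * 0.9;
                cumulativeExpense += incrementalExpense;

                Console.WriteLine($"Step {step}: {coverage:P3} of the goal for {cumulativeExpense:N0} total");

                // ...and (by assumption) the next step costs twice as much as this one did.
                incrementalExpense *= 2;
            }
            // Coverage creeps toward 100% but never reaches it, while expense grows exponentially.
        }
    }

Under those assumptions, by the fifth step you've spent 31 times the original budget to inch from 90% to 99.999% of the intent, and you still haven't reached 100%.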

This isn't to say that we're not going to deliver complete and valuable functionality. It just means that there will be some things which the business intended or imagined that won't quite fit into the project. It always happens. And I dare say that a primary root cause of delays and shoddy results on a project is when the business won't let go of that full 100% intent. They won't let the reality of what's happening alter their perception of what they want to happen.

Considering the 90/9 rule in a project brings that part of reality to light. By accepting the trade-off between expense and intent, the business can (and should) more effectively manage expectations and focus its spending on the 90% that's realistically feasible within that budget. And if more is needed, the business can weigh the additional intent against the additional expense.

As an illustration of this rule, consider the simple matter of system availability, or uptime:

  • 90% (one nine) is pretty easy to achieve and definitely within a reasonable expense.
  • 99% (two nines) is still reasonable, but requires a little more expense. A proper server vs. some resurrected old laptop for example.
  • 99.9% (three nines) is going to require considerably more. That's less than half a day of downtime per year. Consumer hardware probably won't cut it, nor will a consumer internet connection for the server to sit on.
  • 99.99% (four nines) is getting really expensive now. That additional small amount of availability (a difference of a few hours of downtime in a given year) is going to require redundancy across multiple sites.
  • 99.999% (five nines) allows barely five minutes of downtime per year and will require rock-solid redundancy. The expense to guarantee this level of availability is no small matter at all.
  • and so on...
The exponential growth in expense is easy to see. What's also easy to see is that 100% isn't on the scale. 100% doesn't exist. No amount of expense is going to account for every possibility. You can get very close to 100%, but you won't reach it. There will be a risk somewhere.
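The arithmetic behind each level is easy to check. Here's a small, illustrative C# sketch (nothing project-specific, just the uptime math) that converts each level of "nines" into its allowed downtime per year:

    using System;

    class DowntimeBudget
    {
        static void Main()
        {
            // Availability levels from the list above: one nine through five nines.
            double[] availabilities = { 0.9, 0.99, 0.999, 0.9999, 0.99999 };
            double hoursPerYear = 365 * 24;

            foreach (double availability in availabilities)
            {
                double allowedDowntimeHours = hoursPerYear * (1 - availability);
                Console.WriteLine($"{availability:P3} uptime allows {allowedDowntimeHours:F2} hours of downtime per year");
            }
            // Three nines allows roughly 8.8 hours a year; five nines leaves barely five minutes.
        }
    }

The jump from "a few hours" to "a few minutes" of allowable downtime per year makes it obvious why each additional nine costs so much more than the last.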

The illustration of high availability focuses on hardware, but the same holds for the availability of the software. How solid, validated, and bug-free does 90% reliable software need to be? How solid, validated, and bug-free does 99.999% reliable software need to be? You can imagine the amount of planning, testing, validating, refactoring, and overall hardening of the system that would be required to achieve that level. And you can imagine how expensive it's going to be, bringing in experts from every field used by the software (architecture, infrastructure, database, UI, etc.) in order to achieve that level.

Think about the 90/9 rule on your current project or your next project. How well does it map to your expectations vs. the project sponsor's expectations?

Monday, July 2, 2012

No Time To Test?

I recently had a conversation with a colleague and we briefly touched on the subject of TDD. It wasn't the focus of the conversation at all, so I didn't want to derail things when he said this, but I can't get past what was said...
Clients rarely want to pay for the extra time to have tests.
I'm not at all shocked by the statement. Quite the contrary, I'd be pleasantly surprised to find anything else. But the commonality of the statement alone doesn't justify it. If our clients are refusing to pay more for test suites in the software that we write, then it's a failure on our part to properly address the issue.

We all know how it goes. A client has a project they want done; we put together an estimate with a rough breakdown of the work; and inevitably we need to cut a few corners to get the numbers to line up with expectations. What's the first thing to get dropped? "Well, our users can test the software, so we don't need all this testing in the project budget."

When a statement like that is uttered, this is what a developer hears:
We don't need you to prove that your software works. You can just claim that it does.
Seriously, that's exactly what I hear. But of course it never goes down like that. What does end up happening? We all know the story. Bugs are filed, users grow impatient, proper QA practices aren't followed, and so on. Time adds up. We spend so much time chasing the "bugs" that we lose time constructing and polishing the product itself. So the end result is something that barely limps over the finish line, held up by a team of otherwise skilled and respected people who can maintain the thin veneer of polish just long enough, and already in dire need of clean-up work on day one.

So tell me again how we didn't have time to validate our work.

Recently I've been committing a lot of my free time to a side project. It's a small business (which, for the record, in no way competes with my employer) where I've been brought in to help with technology infrastructure and, for the most part, write some software to help streamline operations and reduce internal costs. Since I essentially have carte blanche authority to define everything about this software, I'm disciplining myself to stick with one central concept... TDD.

I'm not the best at it, and I'll readily admit to that. My unit tests aren't very well-defined, and my code coverage isn't as good as it could be yet. But I'm getting better. (And that's partly the idea, really... to improve my skills in this area so that I can more effectively use them throughout my professional career.) However, even with my comparatively amateur unit testing (and integration testing) skills, I've put together a pretty comprehensive suite on my first pass at the codebase.

And you know what? It honestly makes development go faster.

Sure, I have to put all this time and effort into the tests. But what's the outcome of that effort? The code itself becomes rock-solid. At any point I can validate the entire domain at the click of a button. Even with full integration tests which actually hit databases and external services, the whole thing takes only a few minutes to run. (Most of that time is spent on set-up and tear-down of the database for each atomic test.)

What do I need to work on in the code right now? Run the tests and see what fails. What was I doing when I got interrupted yesterday and couldn't get back to it until today? Run the tests and see what fails. Everything passes? Grab the next task/story/etc. The time savings in context switching alone are immense.

Then there's the time savings in adding new features. Once I have my tests, coding the new feature is a breeze. I've already defined how it's supposed to work. If I find that it's not going to work like that, I go back to the tests and adjust. But the point is that once it's done, it's done. So often in our field we joke about the definition of the word "done." Does that mean the developers are handing it off to QA? That QA is handing it off to UAT? That the business has signed off? With TDD it's simple. All green tests = done. I finish coding a feature, press a button to run the full end-to-end automated integration suite of tests, take a break for a couple of minutes, and it's done.

And what's more, the tests are slowly becoming a specification document in and of themselves. Test method names like AnEventShouldHaveAParentCalendar() and AnEventShouldHoldZeroOrMoreSessions() and AnEventShouldHaveNoOverlappingSessions() sound a lot like requirements to me. And I keep adding more of these requirements. Once in a while, when developing in the domain, I'll realize that I've made an assumption and that I need to write another test to capture that assumption. How often does that happen in "real projects"? (Sure, you "document the assumption." But where does that go? What effect does that have? I wrote a test for it. If the test continues to pass, the assumption continues to be true. We'll know the minute it becomes false. It's baked into the system.)
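To give a flavor of what that looks like, here's a hedged sketch of two of those spec-style tests. The Event and Session types below are my own stand-ins for the real domain, and I'm assuming NUnit since the test names follow that .NET convention; they exist only to make the example self-contained, not to describe the actual project code.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    // Hypothetical stand-ins for the project's domain types.
    public class Session
    {
        public DateTime Start { get; }
        public DateTime End { get; }
        public Session(DateTime start, DateTime end) { Start = start; End = end; }
        public bool Overlaps(Session other) => Start < other.End && other.Start < End;
    }

    public class Event
    {
        private readonly List<Session> _sessions = new List<Session>();
        public IReadOnlyList<Session> Sessions => _sessions;

        public void AddSession(Session session)
        {
            // The rule the test below encodes: sessions within an event may not overlap.
            if (_sessions.Any(existing => existing.Overlaps(session)))
                throw new InvalidOperationException("Sessions may not overlap.");
            _sessions.Add(session);
        }
    }

    [TestFixture]
    public class EventSpecifications
    {
        [Test]
        public void AnEventShouldHoldZeroOrMoreSessions()
        {
            // A brand-new event simply starts with no sessions.
            Assert.That(new Event().Sessions, Is.Empty);
        }

        [Test]
        public void AnEventShouldHaveNoOverlappingSessions()
        {
            var evt = new Event();
            evt.AddSession(new Session(new DateTime(2012, 7, 2, 9, 0, 0),
                                       new DateTime(2012, 7, 2, 10, 0, 0)));

            // Reads like the requirement it captures: an overlapping session is rejected.
            Assert.Throws<InvalidOperationException>(() =>
                evt.AddSession(new Session(new DateTime(2012, 7, 2, 9, 30, 0),
                                           new DateTime(2012, 7, 2, 11, 0, 0))));
        }
    }

Whether or not the real domain looks anything like this, the point stands: the test names read as requirements, and the green/red state of the suite tells you whether those requirements still hold.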

Think about it in terms of other professions... Does the aircraft manufacturer not have time to test the airplane in which you're flying? Does the auto mechanic not have time to test the brakes he just fixed? Or, even closer to home with business software, does your accounting department not have time to use double-entry bookkeeping? Are you really paying those accountants to do the same work twice? Yes, yes you are. And for a very good reason. That same reason applies here.

I've been spouting the rhetoric for years, because I've known in my heart of hearts that it must be true. Now on this side project I'm validating my faith. No time to test? Honey, I don't have time not to test. And neither do you if you care at all about the stability and success of your software.