Tuesday, October 29, 2013

Refactoring Screencasts V

There exists a host of excuses for why it took so long to get these finished. But they boil down to two:
  1. There is no quiet place to record at home.
  2. There is no quiet place to record at work.
The first one is being remedied as we speak, for I'm in the process of moving to a new house and there will be space to set aside for a make-shift "recording studio" in said house. (Which will basically be a table and chair in the basement with some heavy blankets draped around it for sound dampening. But it's something at least. Note, however, that this "being remedied" is a long and drawn-out process, to be followed by the holidays, so I may be quiet for a while. But I digress...)

The second one hadn't been a problem during the summer, when my work mainly involved travel, and hotel rooms are notoriously quiet when one is alone. However, for some time now I've been "between projects" and mainly sitting around in the company's office. (Which is not normally where a consultant spends his time.) Again, normally this isn't a problem. We have a conference room for this sort of thing. But another large project has pretty much taken over our office's conference room, for reasons I'm not privy to but which aren't uncommon enough to be worth going into.

Yesterday, however, the conference room was inexplicably empty. So I was able to knock out the remaining recordings for the Making Method Calls Simpler series in an afternoon. Hopefully they don't appear hurried as a result. In any event, here they are. Enjoy!

Rename Method

Add Parameter

Remove Parameter

Separate Query From Modifier

Parameterize Method

Replace Parameter With Explicit Methods

Preserve Whole Object

Replace Parameter With Method

Introduce Parameter Object

Remove Setting Method

Hide Method

Replace Constructor With Factory Method

Encapsulate Downcast

Replace Error Code With Exception

Replace Exception With Test
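
For anyone skimming rather than watching, here's roughly the shape of the last one on that list, Replace Error Code With Exception, as a before-and-after sketch. (Illustrative C# only, not code from the screencast; the Account/_balance names are invented for the example.)

using System;

public class Account
{
    private decimal _balance;

    // Before: the caller has to know that a negative return value means failure,
    // and nothing stops them from ignoring it.
    public int WithdrawWithErrorCode(decimal amount)
    {
        if (amount > _balance)
            return -1;
        _balance -= amount;
        return 0;
    }

    // After: failure is signaled with an exception, so it can't be silently ignored.
    public void Withdraw(decimal amount)
    {
        if (amount > _balance)
            throw new InvalidOperationException("Amount exceeds the current balance.");
        _balance -= amount;
    }
}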

Next I'll move on to the Dealing With Generalization series of patterns.

Friday, October 25, 2013

The Left Turn At Albuquerque

Bugs Bunny is lost. Really lost. And looking at a map of his current location doesn't seem to be helping him.
You see, he should have taken that left turn back at Albuquerque. Unfortunately, he missed that turn and just kept going in the wrong direction. And kept going, and kept going. Where is he now? Is this place even on his map? One thing's for certain... Getting from where he is now to where he wants to be isn't going to be easy. His original intent assumed a left turn at Albuquerque, so by definition everything he's done since then has been wrong.

This happens to programmers a lot. Invariably they find their way to Stack Overflow to ask a question about where they should go next. And very often they get a series of comments similar to this:
"Why are you doing it that way? There's no need for that. You're doing something else wrong."
It seems unhelpful on its face, but it identifies exactly the problem the programmer is facing. He's trying to solve a problem that he shouldn't have in the first place. And he's getting frustrated by the fact that there isn't a readily available solution to the problem that he just invented.

How should Bugs Bunny get to where he's going? Well, we don't know. We'll never know, because he didn't tell us where he's going. Nor did he tell us where he came from. We don't know the problem he's actually trying to solve.

The programmer needs to step back for a moment and examine the bigger picture. It's not that we don't want to help him solve his problem, it's that we don't know what the actual problem is. And we need to know that to be of any use. Sure, maybe we can help him with his current roadblock. Then he'll just come back in an hour trying to solve his next roadblock.

If Bugs Bunny's tunnel encounters a massive outcropping of rock, he's going to ask someone how to get around that rock. And maybe they'll show him. Great, now he's on the other side of the rock. But is he any closer to his destination? We don't know, because that's not what he asked us. He only asked us how to get around the rock.

The programmer took a wrong turn somewhere. Perhaps a very wrong turn. Perhaps somewhere a long time ago. We don't know. All we know is that the sequence of steps, with no weights assigned to them to provide any perspective, went something like this:
  1. Programmer performed Step 1.
  2. Programmer performed Step 2.
  3. Programmer messed up on Step 3.
  4. Programmer figured out how to perform Step A.
  5. Programmer figured out how to perform Step þ.
  6. Programmer got stuck on Step ± and asked for help.
  7. Programmer became frustrated that the help didn't get him any closer to Step 10.
  8. Rinse, repeat.
We want to help this programmer. We really do. But we can't unless we know the actual problem he's trying to solve. Not the immediate roadblock he's facing right now, but the actual problem being solved.

I guess if there's any piece of advice I can give from this little rant, it's this...
  • Never assume that everything you've done until now has been correct.
  • Never assume that just because you got something to work it means you're any closer to your goal.

Saturday, September 21, 2013

Refactoring Screencasts IV

Continuing the series, here are the screencasts for Simplifying Conditional Expressions.  Enjoy!

Decompose Conditional

Consolidate Conditional Expression

Consolidate Duplicate Conditional Fragments

Remove Control Flag

Replace Nested Conditional With Guard Clauses

Replace Conditional With Polymorphism

Introduce Null Object

Introduce Assertion
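
As a quick flavor of what this series covers, here's a minimal before-and-after for Replace Nested Conditional With Guard Clauses. (Illustrative C# only, not code from the screencasts; the Employee/payroll names are invented for the example.)

public class Employee
{
    public bool IsRetired { get; set; }
    public bool IsOnLeave { get; set; }
    public decimal BaseSalary { get; set; }
}

public class PayrollCalculator
{
    // Before: the interesting logic is buried at the bottom of nested conditionals.
    public decimal GetPayAmountNested(Employee employee)
    {
        decimal result;
        if (employee.IsRetired)
        {
            result = 0m;
        }
        else
        {
            if (employee.IsOnLeave)
                result = employee.BaseSalary / 2;
            else
                result = employee.BaseSalary;
        }
        return result;
    }

    // After: each special case exits immediately via a guard clause,
    // leaving the normal case unobscured.
    public decimal GetPayAmount(Employee employee)
    {
        if (employee.IsRetired) return 0m;
        if (employee.IsOnLeave) return employee.BaseSalary / 2;
        return employee.BaseSalary;
    }
}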

Tuesday, September 3, 2013

Refactoring Screencasts III

I know, this took a lot longer than expected. But it's been an interesting summer employment-wise. Lots of travel, lots of experiences, etc. I finally finished this series though, and have continued well into the next one.

It's worth noting that I skipped three of the patterns in this series.
  • I skipped Self-Encapsulate Field because the differences in how inheritance works in C# vs. Java were getting in the way, and ultimately the C# version was essentially identical to Encapsulate Field.
  • I also skipped Duplicate Observed Data because a lot has changed over the years in terms of data binding and the tooling that's available.
  • And I skipped Replace Record With Data Class because in modern tooling it seemed really similar to Replace Array With Object. (Though it's likely that I missed a key point/difference.)

Without further ado, here are the videos for the Organizing Data patterns:

Replace Data Value With Object

Change Value To Reference

Change Reference To Value

Replace Array With Object

Change Unidirectional Association To Bidirectional

Change Bidirectional Association To Unidirectional

Replace Magic Number With Symbolic Constant

Encapsulate Field

Encapsulate Collection

Replace Type Code With Class

Replace Type Code With Subclasses

Replace Type Code With State/Strategy

Replace Subclass With Fields
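
As a small taste of this series, here's what Replace Magic Number With Symbolic Constant amounts to, in a quick sketch of my own. (Illustrative C#, not code from the screencasts.)

public class Physics
{
    // Before: return mass * 9.81 * height;  -- what is 9.81, and where else is it hiding?

    // After: the magic number gets a name that says what it means.
    private const double GravitationalAcceleration = 9.81;

    public double PotentialEnergy(double mass, double height)
    {
        return mass * GravitationalAcceleration * height;
    }
}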

Wednesday, July 24, 2013

Where The Money Comes From

The more corporate entities I encounter in my career, the more patterns start to emerge. And one set of patterns I've been particularly keen to identify is the one that tends to cause agile adoption to fail. "Agile" itself has clearly moved from the periphery to the mainstream, at least as a buzzword. In today's world no manager wants to admit to a lack of agility, after all. But beyond the buzzword is an actual adoption of software development practices which, more often than not in my experience, fails to meet the goal and really ends up being the same old status quo with just new names for the meetings.

There are many reasons why agile adoption can fail. They can range from a manager/director/executive/etc. who simply "doesn't get it" to an entire corporate culture which is inherently opposed to it. However, in many cases these failures to adopt seem to stem from some common factors in how the projects themselves are treated at a financial level. Indeed, it seems that the very source of the money for the project is a make-or-break factor for a successful adoption of agile practices.

Now, let me preface this by clarifying that I'm not an accountant. While I'm confident that my own checkbook is balanced and my own finances are handled, I know almost nothing of corporate accounting and budgets and finance at that level. Honestly, it's all white noise to me. I simply have no interest in the subject. But there is one thing I do know. There are two very different kinds of budgets tracked in two very different ways:
  • One-time up-front costs
  • Periodic ongoing costs
They're tracked separately on whatever spreadsheets are used to track these things. And they're financed very differently throughout corporate accounting structures.

Now consider a software project in any corporate setting. There's an identified business need to make something or enhance something to address an operational concern. And so that project is going to need some money. It's going to need a budget of funds. Where does that budget come from?
  • One-time up-front costs
That budget is defined as a fixed figure for one single up-front "purchase" of the project. "For $1.5M we can add these new features to the software. It will take a team of X people Y months to complete." The problem is, that's not really how "agile" approaches things. We don't plan Y months in advance down to every detail. We can't. The very nature of what we're doing has identified that such a plan is inherently incorrect because at the very start of the project is when we know the least about it. But I'm not going to go into the details of what agile is and how it works in this article...

When you consider the structure of an agile software development environment, it's based entirely on iterations of work. Each iteration adjusts its structure based on the successes or failures of the previous iteration. Each iteration is an opportunity for the business to change direction or adjust priorities. Each iteration is an isolated and complete period of development in an ongoing effort. So, from where does it make more sense for the money to come?
  • Periodic ongoing costs
When asked "how long will it take to complete" or "what is the whole thing going to cost me" we often try to explain that "agile doesn't work that way." To someone who has a budget in one hand and a calendar in the other, that's not an acceptable answer. We have to go deeper than that. We have to engage at an earlier time, before the budget is defined, and guide the corporate entity not only on how to manage an agile team but even how to finance an agile team.

With a single up-front budget, the iterations are simply burning efforts against that budget. No matter how you slice it up, it's a countdown to "no more money" and a march toward "no more time." Whichever end-game is reached first becomes the deciding factor for the project, regardless of whether the product is ready or not. And, throughout that march, how often did we sell "agility" to the business in the form of changes to scope or priority? How much has the overhead of that cost us against the original budget? Can we still make it before the deadline?

But with an ongoing periodic budget, the control of when anything is finished is entirely in the hands of the business managing the project. It's no longer even a "project" per se, but rather simply a "capacity of work." Each iteration has a capacity, and a certain amount of work can be done in that iteration. As the periodic budget allows, capacity can be increased or decreased. There's really no "end-game" in this scenario, simply the ongoing capacity of work.

It's not that the managers of a failing agile adoption "don't get it." It's that an inaccurate direction was set before we got there. The questions they're asking ("when will it be done" and "how much will it cost") are directly derived from their budgeting concerns, which is what fundamentally drives the business. As an industry we've successfully brought agile into the development domain (even though most adoptions are still unsuccessful, the attempts at least indicate that the desire is there), but we have to continue to adapt to the rest of the corporate structure as well.

We have to not only develop our software with agility, we have to finance our software with agility.

Monday, July 15, 2013

Refactoring Screencasts II

Here are some videos I made for the second series of Martin Fowler's refactoring patterns, called Moving Features Between Objects. Enjoy!

Move Method:

Move Field:

Extract Class:

Inline Class:

Hide Delegate:

Remove Middle Man:

Introduce Foreign Method:

Introduce Local Extension:
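
To give a flavor of this series without the videos, here's a minimal sketch of Hide Delegate. (Illustrative C#, not code from the screencasts; the Person/Department names follow Fowler's classic example.)

public class Department
{
    public Person Manager { get; set; }
}

public class Person
{
    public Department Department { get; set; }

    // Before the refactoring, callers navigated the chain themselves:
    //     var manager = person.Department.Manager;
    // which coupled them to Department.

    // After Hide Delegate, Person answers the question directly:
    public Person Manager
    {
        get { return Department.Manager; }
    }
}

// Callers now just write:
//     var manager = person.Manager;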

Monday, July 8, 2013

Refactoring Screencasts

Probably the biggest reason why I've been quiet for a while is that I've been focusing my efforts on some more company-internal stuff recently. Most notably a Code Dojo for my colleagues. It's a ton of fun, and we're sort of feeling our way around how to make it work. (I'm finding a lot of good information in Emily Bache's book on the subject, too.)

Given the team's wide geographic distribution, we've tried a few dojo-ish off-shoot styles to fit a strictly online-only format (Microsoft Lync, mostly), including walking through various samples and tutorials and such. Essentially treating the whole thing as a collaborative learning space for whatever we want to learn or share.

In that format, one of the things we've walked through was Martin Fowler's refactoring patterns. And I promised my colleagues that I'd make some persistent screencasts of the patterns that can be retained going forward, mostly since it wasn't really proper dojo format and is less likely to be repeated. Well, I've finally had a chance to start recording them, so here's the first series walking through the Composing Methods patterns:

Extract Method:

Inline Method:

Inline Temp:

Replace Temp With Query:

Introduce Explaining Variable:

Split Temporary Variable:

Remove Assignments To Parameters:

Replace Method With Method Object:

Substitute Algorithm:
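
Since Extract Method is the workhorse of this whole series, here's the gist of it in a quick sketch. (Illustrative C# of my own, not code from the screencasts.)

using System;
using System.Collections.Generic;

public class Invoice
{
    public string CustomerName { get; set; }
    public List<decimal> LineItems { get; set; }

    // Before, the printing and the summing were tangled together in one method.
    // After Extract Method, the calculation has a name of its own and can be reused and tested.
    public void PrintOwing()
    {
        Console.WriteLine("Customer: " + CustomerName);
        Console.WriteLine("Amount owing: " + CalculateOutstanding());
    }

    private decimal CalculateOutstanding()
    {
        decimal outstanding = 0m;
        foreach (var item in LineItems)
            outstanding += item;
        return outstanding;
    }
}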

That's it for the first series. More to come!

Tuesday, May 28, 2013

StructureMap Convention Scanner

Several jobs ago, an architect introduced his team (of which I was a member) to dependency injection, specifically using StructureMap. Admittedly it took me a while to get it, but once I did I became hooked and have been ever since. To this day I still prefer StructureMap, if for no other reason than I'm very familiar with it.

And over the years, my use of this tool has rarely seen drastic change. The pattern from that old team, which used the Common Service Locator (of which I use a smaller home-grown version), worked very well and has fit the bill for almost all of my control inverting needs. From time to time, however, I would come up with some new need and have to create something new to handle it.

To date, my biggest change was the use of a custom convention scanner. Nothing fancy, it just scanned assemblies and used my own custom naming convention for interfaces (I dropped the "I" and never looked back) and matched implementations to interfaces. The scanner itself looked like this:

using System;
using StructureMap.Configuration.DSL; // Registry
using StructureMap.Graph;             // IRegistrationConvention

public class DomainInterfaceNamingConvention : IRegistrationConvention
{
    public void Process(Type type, Registry registry)
    {
        if (IsntRegisterable(type))
            return;

        // By convention, a class named FooImplementation implements the Foo interface.
        Type interfaceType = type.GetInterface(type.Name.Replace("Implementation", string.Empty));
        registry.AddType(interfaceType, type);
    }

    private bool IsntRegisterable(Type type)
    {
        return type.IsAbstract || !type.IsClass || !ImplementsACustomInterface(type);
    }

    private bool ImplementsACustomInterface(Type type)
    {
        foreach (var iface in type.GetInterfaces())
            if (iface.Namespace.Contains("Acme"))
                return true;
        return false;
    }
}

Simple enough. If the type is one of the types I want to register, it gets the interface that it's implementing based on the word "Implementation" at the end of the implementing class's name (my own convention, favored over configuration) and adds it to the registry. This has worked splendidly for years. Until I thought of something I wanted to support but couldn't with this.

This convention assumes something I'd always been assuming and that had never bothered me: namely, that all instances are default instances. And in all fairness, I've never had a need for named instances. I was able to swap out different implementations by changing some custom configuration settings. The bootstrapper which references this convention scanner would dynamically build the list of assemblies to scan based on those configuration settings.
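
For context, the wiring that pulls a convention like this into the container looks something along these lines. (A rough sketch against the StructureMap 2.x-era Scan API; the assembly names are hard-coded here purely for illustration, whereas the real bootstrapper builds them from config.)

using StructureMap;

public static class Bootstrapper
{
    public static void Initialize()
    {
        ObjectFactory.Initialize(x => x.Scan(scan =>
        {
            // In the real bootstrapper these names come from the configuration settings.
            scan.Assembly("Acme.Domain");
            scan.Assembly("Acme.Infrastructure.DAL.SQLExpress");
            scan.With(new DomainInterfaceNamingConvention());
        }));
    }
}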

So, let's say I have three implementations of a set of data repositories. Using a naming convention (favored over configuration again, and borrowed in large part from that same job long ago), the assemblies would be named something like this:

  • Acme.Infrastructure.DAL.SQLExpress
  • Acme.Infrastructure.DAL.XMLFiles
  • Acme.Infrastructure.DAL.Mock
These would be three valid implementations of my repositories, all transparent to the rest of the domain, and which one any given application instance uses would be a config setting.

But recently I found a situation where I might want the same application instance to use multiple implementations for the same dependency. Without going into too many details, let's say for the sake of argument that the application needs to move data from one place to another. If I can only use one implementation of a given dependency, then one of those two "places" would have to be a completely different dependency.

This led me down a distasteful path. Things which logically should just be repository implementations (because they're just persisting data) ended up being their own isolated dependencies, filled with DTOs that were littering my models.

The first example of this on a project was when I had to integrate some different calendar systems into our event data. We have a database for storing event data, and that's essentially the system of record for that data. It has repository implementations accordingly. However, the business wanted to manage events using a third party tool (in this case some crappy-but-functional desktop calendar application), and additionally wanted to publish events to a third party tool (in this case, Google Calendar).

Well, the DAL dependency was already taken up by the event data repositories. So I introduced a new dependency called CalendarManager and another called CalendarPublisher. I wrote implementations for them using these two third party systems, as well as mock implementations for testing. And essentially the process would be to read from the CalendarManager, persist to the Repositories, perform some domain logic, read from the Repositories, persist to the CalendarPublisher.

It worked, but it was distasteful. I already have models and repository structures for Events. These other dependencies should just be alternate implementations of the repositories for those models. But then one application instance wouldn't be able to use all three.

What I needed were named instances. But whenever I've seen named instances used in the past, they were on a class level instead of an assembly level. I don't want to have to specify every individual class in my bootstrapping code. For starters, that would favor configuration over convention which I don't want to do. But more importantly it would mean that any implementation or any interface that's added to the domain would have to be manually specified there. Unintuitive at best, error prone at worst. I'd much rather just have to specify the assemblies (of which there are several) instead of the classes (of which there are hundreds).

So how can I name my instances at the assembly level? How can I scan my assemblies and add them into the object graph in such a way that I could essentially pull named instances like this?

var eventSource = IoCFactory.GetInstance<EventRepository>("CalendarPlanner");
var eventDestination = IoCFactory.GetInstance<EventRepository>("GoogleCalendar");
var eventData = IoCFactory.GetInstance<EventRepository>();

Basically I'm looking to be able to specify an instance, or take a default. (In this case the default for the repositories would be the database implementation.)
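
(As an aside, the IoCFactory in that snippet is just the small home-grown service-locator wrapper I mentioned earlier; conceptually it's little more than the following sketch, assuming StructureMap 2.x's ObjectFactory. The real one isn't shown in this post.)

using StructureMap;

// Illustrative sketch only, not the actual home-grown wrapper.
public static class IoCFactory
{
    public static T GetInstance<T>()
    {
        return ObjectFactory.GetInstance<T>();
    }

    public static T GetInstance<T>(string instanceName)
    {
        return ObjectFactory.GetNamedInstance<T>(instanceName);
    }
}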

After some back and forth on Stack Overflow and a lot of tinkering, I've ended up with this:

using System;
using System.Linq;
using StructureMap.Configuration.DSL; // Registry
using StructureMap.Graph;             // IRegistrationConvention

public class DomainInterfaceNamingConvention : IRegistrationConvention
{
    public void Process(Type type, Registry registry)
    {
        if (IsntRegisterable(type))
            return;

        var interfaceType = GetInterfaceType(type);
        var dependencyName = GetDependencyName(type);
        var implementationName = GetImplementationName(type);

        AddInstanceUnlessOverridden(type, registry, interfaceType, dependencyName, implementationName);
        AddInstanceOverrides(type, registry, interfaceType, dependencyName, implementationName);
        AddDefaultInstance(type, registry, interfaceType, dependencyName, implementationName);
    }

    // The implementation designated as the default in config becomes the unnamed (default) instance.
    private static void AddDefaultInstance(Type type, Registry registry, Type interfaceType, string dependencyName, string implementationName)
    {
        if (ConfigurationFactory.GetDefault(dependencyName) == implementationName)
            registry.For(interfaceType).Use(type);
    }

    // Register the implementation under its own name, unless config overrides that name to point elsewhere.
    private static void AddInstanceUnlessOverridden(Type type, Registry registry, Type interfaceType, string dependencyName, string implementationName)
    {
        if (!ConfigurationFactory.GetOverrides(dependencyName).Keys.Contains(implementationName))
            registry.For(interfaceType).Add(type).Named(implementationName);
    }

    // If config maps one or more requested names to this implementation, register it under those names too.
    private static void AddInstanceOverrides(Type type, Registry registry, Type interfaceType, string dependencyName, string implementationName)
    {
        if (ConfigurationFactory.GetOverrides(dependencyName).Values.Contains(implementationName))
            foreach (var dependencyOverride in ConfigurationFactory.GetOverrides(dependencyName).Where(o => o.Value == implementationName))
                registry.For(interfaceType).Add(type).Named(dependencyOverride.Key);
    }

    // e.g. Acme.Infrastructure.DAL.SQLExpress -> "SQLExpress"
    private static string GetImplementationName(Type type)
    {
        return type.Assembly.GetName().Name.Split('.').Last();
    }

    // e.g. Acme.Infrastructure.DAL.SQLExpress -> "DAL"
    private static string GetDependencyName(Type type)
    {
        return type.Assembly.GetName().Name.Split('.').Reverse().Skip(1).First();
    }

    // etc.

}

Still favoring convention over configuration, I've continued to rely on my assembly naming assumptions. Given that, it takes a minimal amount of reflection to get the name of the assembly for the implementation and use that convention to name the instances.

It was then irresistible to take it a step further and define some more custom configuration for these overrides. After all, one of the biggest reasons I have this setup is for testing. I like to create custom mock implementations for dependencies and then my testing instance (which has its own config file) can simply specify to use the mock implementations instead of the default ones. This is especially useful for automated integration testing because I can keep multiple config files for the tests and just have the build scripts deploy and run multiple instances of the test code with different config files. Thus isolating individual dependency implementations for testing while using mocks for everything else. This greatly reduces the number of variables in automated testing for me.

So now in that custom configuration section I can override defaults as well as override named instances. Thus, even if my code calls for this:

var eventDestination = IoCFactory.GetInstance<EventRepository>("GoogleCalendar");

I might decide to override that for a specific application, essentially telling it that "even if I ask you for this specific instance, give me this other one (such as the Mock) instead." Thus far this has kept me from coupling the code asking for the implementations too tightly to the implementations themselves. The application isn't tightly bound to the actual implementation; it can override it. So as long as I keep my names fairly general, I'm happy with the level of coupling.

Not shown here is the implementation of ConfigurationFactory, which is local to my IoC implementation project. But basically all it does is check the config file (using a standard .NET custom config section implementation) for any specified defaults or overrides. So, for example, by default the non-named instance for the repositories might be the SQLExpress implementation. In the config file I might then say to use the XML one as the default instead, so non-named instances become the XML one for that application. I may then also take it a step further and configure it to use the XML one even when the SQLExpress one is explicitly requested. (Which doesn't happen in the codebase at this time, but it does for other implementations such as the aforementioned Event stuff.)
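
Just to make the shape of that concrete, a ConfigurationFactory along those lines could look roughly like the sketch below. (This is purely illustrative and not my actual implementation; the real one uses a custom config section, while the sketch cheats and reads appSettings with an invented key format.)

using System.Collections.Generic;
using System.Configuration;
using System.Linq;

// Hypothetical sketch only; not the actual implementation described above.
public static class ConfigurationFactory
{
    // e.g. <add key="Default:DAL" value="XMLFiles" /> means the XMLFiles assembly
    // provides the default (non-named) instances for the DAL dependency.
    public static string GetDefault(string dependencyName)
    {
        return ConfigurationManager.AppSettings["Default:" + dependencyName];
    }

    // e.g. <add key="Override:DAL:SQLExpress" value="Mock" /> means "when code asks
    // for the SQLExpress instance, hand back the Mock implementation instead."
    public static IDictionary<string, string> GetOverrides(string dependencyName)
    {
        var prefix = "Override:" + dependencyName + ":";
        return ConfigurationManager.AppSettings.AllKeys
            .Where(key => key.StartsWith(prefix))
            .ToDictionary(
                key => key.Substring(prefix.Length),
                key => ConfigurationManager.AppSettings[key]);
    }
}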

All in all, I'm pretty happy with this implementation. And I'll be a lot happier once I go back through the code and re-implement these custom dependencies as repositories which they should have been all along. This will reduce the number of service DTOs in the system to almost none, making use of the existing models instead. And it will get rid of several mock implementations since I can just re-use the one I already have for the DAL.

Tuesday, April 30, 2013

I Love My Surface RT, But...

Each holiday season my employer gets a "tech gift" for everybody in the company. And last year was quite possibly the biggest one yet, a Surface RT. Before I go any further, I want to tell you that I love my Surface RT. I loves it very much. I love the feel of it. I love its usefulness as a Netflix appliance (huzzah for the built-in kick stand!). I love writing code for it. I love that I work for an employer who buys us things like that. All in all, I am very happy with my Surface RT.

But...

Well, I've come to realize why I'm so happy with it. It's not because the Surface is a compelling product or provides a rich experience or anything like that. It's not because the ratio of price to value is so good. It's not because it's a good product. No. It's because I'm a tech geek. And, as a tech geek, I like tech toys. It didn't have to specifically be a Surface, it could have been anything. Any shiny new toy would have been great. Even look at how I worded it in the above paragraph... "I love my Surface RT." I don't love the product, I just love that I have one.

Additionally, I've come to realize that my colleagues and I, who all share this love for our Surface RTs, are in a distinct minority. Most people do not love this product. And, honestly, I don't blame them.

The most direct experience I've had with this is watching my family. I have a wife and three daughters. (Only two of whom are old enough to even use such a device. The infant would just drool on it.) For a while, I didn't let anybody use my Surface. We have an iPad, we have a Kindle Fire (flashed with a real version of Android of course, since that Amazon-only stuff was crap), we have a handful of iPhones between us, we have computers and devices and toys aplenty. So I kept the Surface for myself.

My daughters didn't like that, of course. "Daddy, can we play with the new big phone?" (They started calling the iPad a "big phone" when I first got it. They were younger and, well, it was just like Daddy's phone but bigger. The name has kind of stuck.) "Sorry honey, that one's mine. I need to do important stuff on it. You can use the other ones, though." And they were happy enough to do so, but that lingering desire for the new toy was always there.

One day, somewhat recently, I opened up relations with China. I allowed the family unfettered use of the Surface RT. I presented it to them, power cord in hand, and instructed them to go forth and enjoy the device.

That lasted only a few minutes. Literally.

First, I had to create a user account other than my own. After all, one of the cool features of Windows 8 devices is the multi-user setup. And since my Windows 8 account is used across a handful of other computers and devices, I didn't want the kids polluting my stuff. So I set about creating an account. This was a painful process. It was clunky, unintuitive, and just overall dismal. But I put up with it. (I don't have any specifics on it to share at this time, it was a short while ago but long enough that the details are forgotten. Just know that, while the OS walks you through the initial setup rather well, setting up additional users is jarring and unpleasant. At least it was for me.)

The process of setting up a user account for my older daughter actually lasted longer than her interest in the device. Again, literally.

My older daughter took hold of the Surface and began to play. First things first, she wanted to play Minecraft. (She loves playing it on the iOS devices and on the family computers.) Sorry, not available. The app store doesn't have one, and the full version doesn't run on RT. So she looked for other games. There... aren't many. In the commercials they highlight Angry Birds, which is available. But who the hell cares about Angry Birds anymore?

The only thing she could find to do with it was watch something on Netflix. (Honestly, that's about 95% of my use of it as well.) This is where the kick stand comes in handy. You can place it nearby (such as on a night stand) and watch your favorite movies and TV shows. At least until the battery runs out. Which will be soon.

This trend continued until everybody in the house completely lost interest in the device. It currently sits unused on a table, devoid of battery life because nobody cares enough to plug it in.

Again, I love this device. It's great, as a toy. As a family computing device, not so much. This is pretty evident in the fact that my family just doesn't give a crap about it. Remember how great the built-in kick stand was? Each time my older daughter gets something to eat and sits down at the table to watch Netflix, does she grab the device with the built-in kick stand? No. She grabs the iPad, leans it against something, and places a napkin between it and the table to create some static friction to keep it standing up.

I want so much for this to be a more compelling device. I want to write software for it. I want it to open up .NET development throughout the tablet and mobile spaces even more than Mono does. (Just listening to that last sentence cements the fact that I am a tech geek, not an average consumer.) I want it to be successful. But it's not. Microsoft apologists in general will happily tell you that "the next one will be better" but I've learned not to hold my breath on that. (Pick a Microsoft product, and I'll show you an apologist who has said this about the shortcomings for any given version of it.)

Today a colleague showed us an amusing web site called iPad Death Watch. Apparently the Microsoft apologists are thriving there. To give you an idea, this is my favorite quote about Apple's iPad on that page:
"What an utter disappointment and abysmal failure of an Apple product. How can Steve Jobs stand up on that stage and hype this product up and not see everything this thing is not and everything this thing is lacking?"
It's so delicious to read it must be fattening. Honestly, I can't tell if that page is serious or satire. Imagine listening to someone say all the same things as Stephen Colbert, but it's not Stephen Colbert. Are they a comedian or are they insane? It'd be difficult to discern.

Amid all of the comments on that page, however, is an infographic which, again, leaves me wondering if it's serious or satire. For posterity, here is the infographic in its entirety:


I'll continue to try to use the term "Microsoft apologist" instead of "Microsoft shill" but it's going to be difficult.

So let's dissect this infographic a piece at a time...

  • Multiple Users
    • Easier and safer to share single device
      • "Safer"? Maybe. "Easier"? Definitely not. The process was painful. Maybe I wasn't doing it right? Kind of like iPhone 4 users weren't holding it right? Ya, if that excuse doesn't work for Apple then it doesn't work for Microsoft either. The principle of least astonishment left much to be desired in the interface here.
    • Same Windows user account experience but in a fun tablet size kids will love
      • I couldn't help but emit an audible chuckle when I read that. For one thing, it sounds like they're marketing a pill. But more to the point, my kids don't love it. At all. This also implies that people inherently "love" the "Windows user account experience" in the first place. I contend that nobody cares. People appreciate the benefits it can provide, but they don't care about the "Windows user account" part of it.
  • Metro core of RT
    • Addresses market that will likely be much more popular than traditional PCs within the next few years.
      • That's a creative way to say, "Our competitors have already enjoyed billions in profit from this market in the past few years, so we think there may be something to it."
    • Isn't aimed to be a PC replacement (i.e. - Incompatible with many desktop applications, partial driver support)
      • You're... not the best salesman... are you? "Incompatible with many desktop applications" is a benefit? No, it's a pain in the ass. One thing I have watched my family do is go to "desktop mode," get excited that it's a full computer, try to install something, and see an error message saying they can't use that. (And to rub salt in the wound, the same error suggests visiting the Windows App Store. Which has, like, 12 apps in the entire store. 4 of which are Angry Birds. It's absurd.) This is not a feature. This is a failure.
  • Mouse
    • When you want to get real work done, nothing beats a keyboard and a mouse.
      • First of all, don't quote yourself in your marketing material. Quote somebody else. Quoting yourself in an attempt to get your own point across is... not awesome.
      • Second, what's with the focus on "getting real work done"? Just a moment ago we were being sold on the idea that this "isn't aimed to be a PC replacement." Now we're being told that it's better because it can replace the PC? This isn't a product, this is an identity crisis. It reminds me of every failed attempt Microsoft has ever made to put a Start Menu on a phone. (And there have been many.) Remember that "market that will likely be much more popular than traditional PCs"? (Also known as that market where Apple has, with a single division of their business, out-profited the entire Microsoft corporate empire.) Trying to shove traditional PCs into that market isn't the way to go.
      • Finally, that last bullet point where he shows a negative is a bit... out of place. "No Mouse and Keyboard Center-based customization software offered just yet." Um... ok. Thanks. I guess the next one will be better?
  • USB Port
    • Connect: External hard drives, printers, keyboards, mice
      • It sounds an awful lot like this is really trying to be a traditional PC. I don't think I've ever wanted to print something from my iPad.
    • Transfer camera files
      • No, just no. This is not my PC. For internet-connected devices, files transfer (or should transfer) fairly seamlessly. (See Photo Stream) For non-connected devices (such as traditional digital cameras), I connect them to my PC. The PC is my central hub. From there they get disseminated to my other devices. (See Photo Stream) I would never even think to plug my digital camera into my iPad. It's a ridiculous notion. Again, and this is all over this friggin' infographic... Are you comparing this device with a PC or with an iPad? Microsoft doesn't seem to understand that there's a difference.
    • Charge phones
      • HA! The battery life is horrendous enough as it is. And now you're going to encourage people to plug powered devices into it? I bet the Surface RT powers down before the phone is even charged. Go ahead and try to plug a USB-powered spinning disk drive (like my old WD Passport, which I love) into one of these. I give it 5 minutes tops.
  • Task Switching aka "Windows Flip"
    • Easily flip between programs with Alt+Tab
      • Average consumers don't use Alt+Tab. They don't know or care about it. I did just discover in testing this one that Alt+Tab does include "metro" apps in the task switching, so that's cool. Point to Microsoft for that small bit of convenience. (It is the little things that make the compelling interface, after all.)
    • Windows 8 and RT also offer Metro-style "Switcher interface" (Win Key+Tab)
      • Ya, but it's only for Metro apps. So they've created a second kind of task switching which behaves almost like the first one, but differently. And they both exist on the same device. That's kind of jarring, don't you think? Oh, and also note that Metro apps don't show up in the traditional task bar, where people expect to see their apps. I guess this "Switcher interface" is the second task bar for a new class of apps. On a technical level I can understand this and it doesn't bother me. As a developer, this makes sense. But consumers will think it's stupid.
    • Apple could close the productivity gap between its iPad and the Surface... by adding one critical missing feature to iOS: Simply allow users to task switch by using Alt+Tab
      • Did you quote yourself again? But I digress...
      • Have you ever even seen an iPad? There is no Alt, and there is no Tab. There is no keyboard. I can't stress this enough, the iPad is not a PC. This identity crisis for the Surface and what it actually is is starting to get old. Besides, the iPad has a task switcher. Swipe to the side with 3 fingers. That feature has been there for a while now.
  • Fully-Functional Microsoft Office
    • Office Home & Student 2013 RT
      • Ok, I actually really like this part. Well, I would if I used Office for anything. (I do for work, but I don't use my Surface RT for work because ([clears throat]) it is not a PC.) But for people who do want to use Office on a tablet, including it was a pretty cool thing to do.
    • Unlike Office for iPad - will require monthly Office 365 subscription
      • There's an Office for iPad? I guess I didn't notice what with the iOS productivity apps so readily available. Pages, Numbers, and Keynote. Sure, they're "limited-functionality mobile apps" but, you know, they're on a limited functionality mobile device. (Not a PC?) As someone who also has a Mac (sort of a PC?) they play nicely with my setup. Oh, and they don't require a subscription to whatever Office 365 is. So, yes, iPad doesn't run Office. It doesn't purport to. My car also doesn't run Office. Is the Surface better than my car? (Note: My car is also not a PC.)
I don't fault Paul, the infographic's author, for this. He's in every way a Microsoft guy, and Microsoft is sending very mixed messages with their attempts to break into the non-PC market. The only clear message they seem to be sending is that they truly believe (whether intentionally or through a lack of understanding of the world around them) that the way to move into the non-PC space is to bring PCs there.

So that was enough ranting, and I got a little too emotionally charged on some of those responses. Maybe I'm the one who "doesn't get it"? I don't know. But my family doesn't get it either. Nor does just about anybody else I've met. And by "get it" I mean "buy a Surface or a Windows Phone 8." It's just not something people do.

Again, I love my Surface RT. I just don't see why anybody else would.



NB: My older daughter's birthday is coming up and we've bought her a simple Dell laptop as her very first computer. It will be running Windows 8. I can almost guarantee that the moment she turns it on and sees the Metro start menu, she's going to feel a sense of disappointment and think it's as bad as "the new big phone." I'll try to salvage that. Putting Minecraft on it will be my primary weapon.

Friday, April 26, 2013

The Smartest Guy On The Team

Does your software team have a "wiz kid"? A "rock star"? Have you been lucky enough to find that one amazing developer who seems to be able to solve all of your problems in ways so creative and clever that nobody else on the team can even keep up with his brilliance?

If so, you'd better fix that.

We've all heard the saying before, "What if Rob gets hit by a bus tomorrow?" (Or whatever his name is. I would have used a ____ instead of a name, but then this post would end up with a lot of ____s in it. There's two of them already, and they don't look good. So we'll call him Rob. I don't think I've ever actually worked with a Rob, so nobody should misconstrue this as an actual historical tale.) Well, if Rob is your superstar go-to guy then that bus should scare the hell out of you.

The problem isn't that you'd be devoid of Rob. The real problem is that you currently have Rob. And you let him run amok in your code. (Sure, he writes the code and has a certain level of "ownership" in the sense that team members should take a sense of ownership over their work, but as the business owner you actually own it. It belongs to you. And, ultimately, you are responsible for it.)

I am not suggesting that you shouldn't hire smart people or that all of your software should be farmed out to the lowest bidder in some faraway land. Quite the opposite. What I'm suggesting is that (and this may be difficult to hear) Rob might not be as awesome as he tells you he is. Don't get me wrong, Rob might be a very intelligent guy. But that doesn't mean he writes good code. He might even write very clever, downright brilliant code. But that doesn't mean it's good code.

This reminds me of one of my favorite quotes:
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." - Brian Kernighan
Writing good code isn't a measure of brilliance or cleverness or creativity. Code is a tool. It's a utility to perform a task necessary to the business. In general, developers love to see clever and creative things. We're kind of geeky, and we really enjoy things like that. So it's often difficult for us to tone it down when writing actual production-quality code, but tone it down we must. Because the bottom line for production-quality code isn't cleverness, it's pragmatism.

But I digress, let's get back to the case with Rob. Consider a scenario...
Manager: Have you finished adding that feature?
Not Rob: It's turning out to be more difficult than expected. The code in that part of the system is really a mess, and the slightest changes are introducing all kinds of unexpected effects.
Manager: I don't think that code is a mess. Rob wrote that whole module himself.
Not Rob: Well, regardless of who wrote it, it's a mess. The effort to add that feature is going to also involve fixing a lot of what's already there.
Manager: Nevermind, I'll just have Rob do it. He's better at this sort of thing.
It sounds eerily non-contrived, doesn't it? The characters could take any number of shapes:
  • Manager and Rob are old buddies and they'll be damned if anybody is going to tell them that their buddy did a bad job.
  • Rob helped found the company as the go-to tech guy and everybody just instinctively calls on him because he has a lot of clout.
  • Rob was a highly-paid consultant some time ago and the company never learned how to live without him.
  • Not Rob is young and Manager doesn't believe that anybody young can ever be more right than anybody old.
  • Manager doesn't know Rob, but knows that Rob was a legend when he worked there and Manager would rather bring him in as a consultant than have to deal with the mess himself.
  • etc.
Regardless of how the drama between the characters has unfolded, the situation is the same. If Rob gets hit by a bus tomorrow, Manager is lost without him. Now, I've been putting down Rob quite a bit in this post, and that might not be fair. I'm just reacting (in a somewhat Pavlovian way, perhaps) to the Robs with whom I've worked in the past. Most of them were genuinely useless and drove any codebase they owned into the ground. Some, on the other hand, were genuinely good (even perhaps brilliant in subtle ways) and I hope I learned something from them. Of course, while I like to think I can tell the difference, it's clear that Manager can't.

So, when you have a Rob, what you have is one of two things (or perhaps both):
  • Rob writes terrible code, he just somehow convinces you that it's good. Without your knowledge or consent he is amassing on your behalf a ton of technical debt. And someday you're going to have to pay that debt.
  • Rob is employing coding techniques with which your other team members are unfamiliar or for whatever reason they do not understand. While Rob is, in a purist sense, "doing the right thing" by striving for better code, he is doing so at the cost of supportability.
Or, to put it another way:
  • You need to get rid of Rob as soon as possible.
  • You need Rob to educate and train your other developers.
Again, these two are not entirely mutually exclusive. One of my favorite examples of this sort of thing on a team is dependency injection. For whatever reason, most teams don't use any sort of dependency injection at all. Maybe the design doesn't call for it, but in many cases it really does. But what I've found is that most developers (many of whom have been developers for a very long time) simply don't understand it, beyond the buzz word itself.
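
If dependency injection is one of those buzzwords for you, the mechanical core of it is genuinely small. Here's a minimal constructor-injection sketch (the names are illustrative only, not taken from any real codebase):

public class Order { }

public interface OrderRepository
{
    void Save(Order order);
}

public class SqlOrderRepository : OrderRepository
{
    public void Save(Order order) { /* write to the database */ }
}

// Without injection, OrderService would "new up" SqlOrderRepository itself and be welded to it.
// With constructor injection, the dependency is handed in from outside, so tests can pass a
// fake and the container can pass the real thing.
public class OrderService
{
    private readonly OrderRepository _repository;

    public OrderService(OrderRepository repository)
    {
        _repository = repository;
    }

    public void Place(Order order)
    {
        _repository.Save(order);
    }
}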

So what happens when Rob takes the helm and refactors everything to invert the dependencies? He may be doing a huge favor to the codebase, but if the rest of the team doesn't understand it then at what cost is that favor? Rob's being a cowboy, and he needs to slow down. He's doing something good, but he needs to do it in the right way. He needs to educate and train the rest of the team.

Of course, we haven't actually seen Rob's code at this point. He might be trying to do the right thing, but he might not be doing it right at all. The best case scenario in this situation is when he educates the team on what he's trying to accomplish and how he's going about it, and another team member chimes in with, "Ohhh, now I see what you're doing... But wouldn't it be better if you did it this other way instead?" Ding ding ding, now we have teamwork and collaboration. (It's nice to dream, isn't it?)

But we are rarely on that team. More often, at least in my experience, we're on the team where Rob is by definition right for no other reason than because he's Rob. And this is a very bad place for a team to be because... What if Rob gets hit by a bus tomorrow?

So, I thought of an interesting way to test this for a team. (No, I'm not going to hit Rob with a bus. But you should be aware that I used to live across the street from an MBTA bus driver and I don't think I ever saw that man sober. So, you know, you take your chances.) How can we effectively simulate someone on the team getting hit by a bus?

Surprise vacations!

(Before I explain what I mean by that, let me first state that I recognize that this probably isn't a very good idea for a company policy. But it might just make for a very interesting and revealing company experiment.)

The idea is simple... Any team member who has vacation time available is permitted to take that vacation time any time and is not to notify the team until the day that vacation starts. Were you expecting Rob to come in to work today? Surprise! No Rob for you. So... What do you do now?

If your team dynamics are balanced, you're just somewhat shorthanded for the day. This is inconvenient, but you should be able to handle it. (It's not like you're the only business where this happens. Ever eat at a restaurant that was noticeably shorthanded that evening? Ever eat at one where you didn't notice? Which one had better management?)

If your team dynamics are not balanced, if Rob is truly indispensable for your company, then guess what? You failed, at least for now. But I have good news for you. Rob wasn't hit by a bus. He'll be back shortly. And in the meantime, you get to identify the specifics of where and why Rob is indispensable and come up with a plan to fix it.

Because you have a single point of failure in your system. And you'd better fix that.

Sunday, March 10, 2013

Details Are Important

There's a potential for a runtime error, so I should log it...
This is in a catch block, so I also have the additional context of an exception. So let's add another function argument to use the overload...
:::sigh:::