Tuesday, November 30, 2010

Why I Don't Like Microsoft's Widgets

I previously pointed out that .NET questions on Stack Overflow seem to be devolving into nothing more than beginners asking how to plug together Microsoft's widgets to produce a desired effect. That tide seems to be subsiding, though it is still clearly present. I guess it's just the nature of the beast these days. The market is flooded with entry-level developers (many of whom are no longer considered entry-level because of their experience, whether or not they've actually learned from that experience) who use frameworks and tools and, ultimately, pluggable widgets. I see Microsoft as the king of these widgets, hence my generalization.

But I was thinking today about the whole picture and wondering why, specifically, I don't like this. Basically, I was helping a few people on Stack Overflow with some ASP.NET controls and had to put in some serious effort not to just rant about how they're going about the whole thing all wrong and that they shouldn't be afraid to write some code themselves instead of using Microsoft's controls. (I particularly cringe whenever I see remnants of the .NET 2.0 web controls era, when Microsoft went control-crazy and hooked up provider frameworks to make web development as plug-and-play as possible. People still use that stuff. A lot. And it really bothers me.)

It occurred to me what the root of my concern really is.  It's not entry-level developers, it's not frameworks and tools, it's not even me being some stodgy old know-it-all because in my day we used to program in bare text editors and used terminals and get off my lawn!  It's because these widgets address the wrong side of the equation.

Writing software is easy.
Supporting software is hard.

It's that simple, really.  (And reminds me of one of my favorite quotes... "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian Kernighan)

Microsoft creates clever widgets which a novice can, with minimal training, plug together to produce a desired result.  This makes the step of writing software easier, but at the cost of supporting the software.  And that doesn't make the entire process any easier.  In fact, when you look at the whole picture of the software life cycle, it ends up being more costly in the long run.  Small changes to requirements or small additions to business logic that exist just outside the scope of the pluggable widgets require significant effort to implement.  Custom code, on the other hand, isn't difficult to change at all (if written well).

We're really reminded of this on an almost daily basis when developing in .NET. Think of a .NET stack trace. At the top is the frame where the exception was actually thrown (and caught somewhere useful, I hope), and as you descend the trace it works back through your code so you can see the path of calls that led there.

Many times, however, by the time you get to the bottom of the trace you are in Microsoft's code. This isn't really a bad thing; after all, it is all running on top of Microsoft's libraries. So this is bound to happen. And the more experience we have on the subject, the more we can identify what's wrong by looking at Microsoft's part of the trace as well. But the boundary is still clear. Once you cross over from custom code into System.*, you officially enter the black box. There are some things you can do, but it's just not the same as code you can actually step through.
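To make that boundary concrete, here's a minimal sketch of my own (the LoadCustomer method is just a hypothetical stand-in, not from any real project). Passing a null key into Dictionary<TKey,TValue>.ContainsKey throws from inside the framework, so the first frames of the resulting trace live in System.* and only the later frames are code you can actually step through:

    using System;
    using System.Collections.Generic;

    class StackTraceDemo
    {
        static void Main()
        {
            try
            {
                LoadCustomer(null);
            }
            catch (ArgumentNullException ex)
            {
                // The first frames point somewhere inside System.Collections.Generic
                // (the black box); LoadCustomer and Main only show up further down.
                Console.WriteLine(ex.StackTrace);
            }
        }

        // Hypothetical method standing in for "your code" in the trace.
        static void LoadCustomer(string key)
        {
            var cache = new Dictionary<string, string>();
            cache.ContainsKey(key); // throws ArgumentNullException for a null key
        }
    }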

How do you step through a DataGrid entirely defined in HTML controls and bound to a data source also defined in HTML controls?  You didn't actually write any code.  Do you even know what it's doing?  Maybe you do, maybe you don't.  Maybe the next guy who has to support it does, maybe he doesn't.  That's a problem.  But when something breaks, what do you do?  Your stack trace begins with code you didn't write.  For an experienced developer this may not present a huge problem, albeit an annoying one, but for that novice who was hired because he demonstrated an ability to plug widgets together it could be a serious stopping point.
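For contrast, here's a hedged sketch (the page, grid, and class names are all hypothetical) of doing the binding yourself in code-behind instead of wiring the grid to a declarative data source control. Now there's something of yours to set a breakpoint in when it breaks:

    using System;
    using System.Collections.Generic;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public class OrdersPage : Page
    {
        // In a real project this field corresponds to a grid declared in the
        // markup (e.g. <asp:DataGrid ID="ordersGrid" runat="server" />);
        // it's declared here only so the sketch stands alone.
        protected DataGrid ordersGrid;

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                ordersGrid.DataSource = LoadOrders(); // our code: steppable, debuggable
                ordersGrid.DataBind();
            }
        }

        private static List<Order> LoadOrders()
        {
            // Stand-in for a real query; the point is that the binding path is visible.
            return new List<Order>
            {
                new Order { Id = 1, Customer = "Contoso", Total = 42.50m }
            };
        }
    }

    public class Order
    {
        public int Id { get; set; }
        public string Customer { get; set; }
        public decimal Total { get; set; }
    }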

I'm not saying that we shouldn't use tools that are available to us.  But we need to know what those tools are doing.  You can't make a career out of "I press a button and this happens, so as long as people want this to happen they can pay me to press this button."  Don't let Microsoft do the thinking for you.  Writing the software is the easy part.

Thursday, November 11, 2010

This VIM is a bomb!

For our CodeDojo presentation meeting a month ago, I did a presentation about VIM. I didn't go into any details about the basics of getting around or how to install it. Those are too introductory. What I tried to talk about were the immediate annoyances of VIM and a couple of concepts to keep in mind during your adventures. I thought I'd mention a couple of things from the presentation here and talk about the profile I'm building.

I really can't go on without mentioning the power of Normal mode in VIM. VIM's modes are actually quite the advantage when you get used to them, but you should keep in mind that you want to be in Normal mode. It truly is where one of VIM's strengths comes out, and that is text surgery.

The language of text editing (while in Normal mode):

    vimsentence   := [count]vimexpression
    vimexpression := vimcommand | <operator><motion>
    vimcommand    := <action> | <motion>

WTF? Some examples:

    Delete the next 3 words:            3dw
    Paste above the current line:       P
    Move the cursor down 2 paragraphs:  2}

As you learn some of the different keys you can apply them in pretty interesting combinations to build up little sentences of what you want to do. It takes a little while to get efficient at this, but a sort of rhythm starts to emerge as you dispatch lightning ninjas from your Zeus fingers... hmm... anyway.

The other Zeus-like feature of VIM is its plug-in and profile abilities. It's absurd how much you can customize VIM itself, but then on top of that you get Python, Ruby, Perl, vimscript, shell, etc. scripting to enable even more. When a plug-in and a developer love each other very much, the developer lets the plug-in in to play with the others. This is where profiles are born.

Now, to a newbie, straight out of the box you get a confusing text editor where you'll spend the first 5 minutes wondering why you can't type in text and the next 5 figuring out how to exit without killing the process. After a couple of tutorials and some VIM'ing, you probably have a profile pieced together from snippets that sort of work and that you don't understand. Well, that's where my profile tries to come in.

Here's a snippet where I setup some stuff with folding blocks of text:

" When we fold or unfold a block of text determine the block
" delimiters via syntax. You can use 'za' to toggle a fold.
" There are several other commands as well.
" 'zM' to close all folds
" 'zR' to open all folds
    set foldmethod=syntax

" Lets map 'za' to spacebar while in NORMAL mode
    nnoremap <space> za<space>

" By default I like to see all the code, but the first time
" you try to fold something this will get toggled and all folding
" well be on.
    set nofoldenable

As you can see, I'm trying to document and organize things. The whole idea is that someone who knows a little about VIM can use my profile to learn a hell of a lot more. The profile also includes several plug-ins in an attempt to create a ready-to-go development environment for a few programming languages. Right now I have a fairly sophisticated setup for Python (refactoring, test running, error detection, auto-completion) and something very usable for Erlang (syntax, error detection, auto-completion, compiling a single file). I also have some built-in plug-ins and a couple of other things for other languages, but I just haven't spent a lot of time tracking down, configuring, and testing plug-ins for languages besides Python and Erlang.

I welcome any help or suggestions in tracking down more to flesh out a Mono/C#, Ruby, C, or just about anything else. Anyway, anyone interested in using VIM, please go ahead and check out my profile.

https://github.com/copenhas/dotfiles

My presentation is also available, but the slides are pretty skimpy. You'll also need showoff (a Ruby gem) to actually see the slides in full.

https://github.com/copenhas/presentations

Wednesday, November 10, 2010

Of Horses and Carts

As developers, we like to code.  We want to write code.  It's what we do.  So, naturally, when a project begins what we want to do is dive in and start writing code.  From a proper planning perspective, this is generally frowned upon.  And for good reason.  When you're just starting to plan and haven't fleshed out the details and don't have a firm grasp on the actual requirements (not just the documented requirements that some business user wrote down) is precisely when you shouldn't be etching into stone the logic to be used in the software.

But this reality can easily be (and often is) misconstrued as a mandate to not write any code just yet.  This is a fallacy.  Writing code isn't the problem.  Writing code that's etched in stone is the problem.  And sidestepping the actual problem by banning the medium it happens to show up in very easily leads to not solving the problem at all, but instead just moving it somewhere else.  Somewhere sinister.  The data model.

We've been writing software for years, and we generally know how it goes.  Almost every developer still does this just out of habit.  First you build your database and model out your tables, then you write your code to sit on top of that.  Right?  That's how everyone has always done it, so it must be the way.

Sadly, and at the cost of untold man-hours, it is not the way.  But it's just such common practice that people continue to behave in this manner out of nothing more than habit.  It's what they know, it's how they think, and it's a tried and true approach that management understands so it's the safe route.  (Safe for the developer, not for the ongoing maintenance of the software.)

What is essentially happening here is that the early attempt at solidifying the requirements is being etched in stone in the database instead of in the code.  And raise your hand if you think that re-factoring a database later in the life cycle of the software is significantly more difficult than re-factoring the code.  That's what I thought.

It all comes back to my favorite of favorites... separation of concerns.  You may be using proper IoC, you may be putting in hard assembly or even service boundaries between your layers.  But you haven't eliminated all of those dependencies.  The overall structure, in every direction, still depends on its core.  And when you first begin designing the software you are essentially designing its core.  The choice is yours... should the core be the data model, or should it be the domain model?

Let's go with the common approach, the data model.  You build your ER diagram, create your tables, map your keys, create your association tables for those pesky many-to-many relationships, etc.  You now have a core database upon which your software will sit.  Essentially, you now have this (pardon my crude diagrams):

[diagram: the data model sitting at the core, with the business logic and UI layers built around it]
Your layers are separated, and that's all well and good.  But notice a subtle dependency there.  The overall shape of your software is governed by its core.  There's no getting around this, not unless you do what will likely amount to more abstraction than you need in a highly de-coupled service architecture.  (Get ready for tons of DTOs and "class explosion" for that.)  Even if these are broken apart by assembly and dependency-injected and all that happy fun stuff, there's still the underlying fact that your software's core is its data model.  What happens if that data model ever needs to change, or if you need to move to a different data store entirely?  A lot of work happens, that's what.
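To illustrate, here's a hedged sketch (hypothetical table and column names, not from any real system) of the shape that data-model-at-the-core code tends to take: raw rows are the contract, every layer above is written against the column names, and a schema change ripples through all of them.

    using System.Data;
    using System.Data.SqlClient;

    public static class CustomerData
    {
        // Hypothetical schema: the Customer table's columns ARE the contract.
        // UI and business logic both end up coded against "CustomerId",
        // "Name", "Region", ... so changing the schema touches every layer.
        public static DataTable GetCustomers(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var adapter = new SqlDataAdapter(
                "SELECT CustomerId, Name, Region FROM Customer", connection))
            {
                var table = new DataTable("Customer");
                adapter.Fill(table);
                return table;
            }
        }
    }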

Consider instead shifting your core a little bit.  Imagine for a moment breaking that cardinal rule that "thou shalt not code first" and actually beginning the design by creating your domain models.  In code.  What about the database?  You can figure that out later.  Or, at least in my case, hopefully a trained data modeler can help you figure it out later.  (Developers like to think we're also data modelers, but most of us just aren't.  A lot of that comes from the fundamental differences in design and function between object-oriented thinking in code and relational thinking in an RDBMS.)  Now, you have this:

[diagram: the domain model at the core, with data persistence, UI, and everything else built around it]
The structural dependency is still there, but the core has shifted.  Your data model was built to accommodate your domain model, instead of the other way around.  By this approach, the data persistence is simply an interface which interacts with the domain, no different than the UI or anything else that hooks into the central domain.  The idea here is to be able to re-factor things more easily, especially in the data model (where significant growth can lead to unforeseen performance problems and scaling issues not evident in the original design), without impacting the entire system.
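As a rough sketch of that shift (again with hypothetical names), the domain type and its repository interface are written first, in code, and the database becomes just one implementation detail plugged in behind them:

    using System;
    using System.Collections.Generic;

    // The domain model comes first; the data model is built later to fit it.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Region { get; set; }

        public bool IsInRegion(string region)
        {
            // Business logic lives on the domain object, not in the schema.
            return string.Equals(Region, region, StringComparison.OrdinalIgnoreCase);
        }
    }

    // Persistence is just one interface hanging off the domain, no different
    // than the UI. A SQL-backed repository, an ORM mapping, or even a flat
    // file can satisfy it later without changing the system's shape.
    public interface ICustomerRepository
    {
        Customer GetById(int id);
        IEnumerable<Customer> GetByRegion(string region);
        void Save(Customer customer);
    }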

Many times this boils down to a cultural problem, really.  Businesses have spent decades with the understanding that "the data is paramount."  While there is generally truth to this statement, it should not be extended to mean that the database is the core of your system and all that matters.  After all, the engine which drives that data plays a fairly critical role in your business.  Unless you're dealing with simple forms-over-data applications and simple Rails-style interfaces, you would probably do well to consider the importance of all that business logic.

A common analogy in the English language is "putting the cart before the horse."  And you know how developers love analogies...  The cart is your data.  It's the payload that's being transported.  The horse is your engine.  It drives the data to and fro.  In the age-old struggle between cart-makers and horse-breeders there is a debate over which is the more important part of the system.  Without the horse, the cart doesn't move.  Without the cart, the horse has nothing to do.  Both are valid points to be sure, but when designing the system which natural construct ends up being the core?  No matter how well you abstract your horse-to-cart interface, there's still a natural architectural dependency in the system.  And it's a hell of a lot easier to build a cart that fits your horse than to breed a horse that fits your cart.

Thursday, November 4, 2010

Don't Forget To Track Your Hours

I was entering my hours worked into my employer's time tracking system today and it got me thinking about that whole process from a developer's perspective.  Now, it generally goes without saying that we as a breed don't like doing that.  We're here to work with code, not tell you how long we spent working with code.  But it occurred to me as I was entering my time that I didn't entirely mind doing it.  I didn't feel inconvenienced or annoyed at the prospect, and most of all didn't need to be reminded to do it.

We're intelligent people, we know why management wants and needs this information and how valuable it is to the company's bottom line.  But that knowledge alone isn't enough to capture this information and successfully report on it.  The process itself must be scrutinized and tailored to the actual daily needs of the employees.  Otherwise, you're going to spend extra effort trying to get your employees to enter their time, they're going to spend extra effort listening to you and finally entering it, and the numbers just aren't going to be good.

Let's take a look at some of the time tracking methods I've used over the years...

Many moons ago I worked for a small company in a small town that primarily set up computers and small networks for small businesses.  As the company grew, I was brought on (and eventually another developer as well) to expand into small websites and custom applications.  This company had a home-grown time tracking system (developed, I think, by the only other guy there who knew a little VB before I came on board).  Basically, it was a little application with a list of "open projects" and functionality to clock in and clock out.

I never used it.  Well, I used it a couple of times at first, but that quickly faded into not using it at all.  It was silly.  I (and the other developer, once he joined the team) could not be bothered with tracking project time.  The application reported to a spreadsheet and, when the manager asked for a specific project's time, I would just send it to him manually.  It was a rough guess.  How do I know how long I spent on that project?  I was doing 10 different things that day.

The system worked well for the network guys, because more often than not "clocking in" to a project was done before they actually went to the client site, and "clocking out" was done when they returned to the office.  Made sense.  But as for the developers, we saw it as pointless.  (As the company grew we also brought on board a PC tech guy and gave him a workroom to perform various machine maintenance.  I don't think he was even asked to use the system.  If he was, he promptly ignored the request.  I mean, when he has 5 open computers on his bench, 2 are formatting, 1 is installing something, one is booting up, and one he's actively using... what "project" is that under?  Is he expected to "clock out" and "clock in" each time he wheels his chair from one machine to another?  Didn't think so.)

Fast forward through various other endeavors in my career to a more recent example.  Two jobs ago we used a system called Rally.  And, although this sentiment wasn't universally shared by every last one of my co-workers, I actually really liked it.  Sprint planning and task breakdown were a pain, and I'm fairly convinced there's no way to ease that.  But actual time tracking was quick, efficient, not inconvenient in the slightest, and actually a joy to do.

Logging work hours was really streamlined in this system.  I look at a grid of my tasks for the current sprint, I mass-edit a few numbers for how many hours I spent per task that day, and I save.  Takes maybe 30 seconds of my time.  The UI was clean and intuitive.  And burndown charts are just pretty to look at, naturally providing incentive to take those 30 seconds.  Now, I'm sure the system could be horribly abused, and may have been for the co-workers who didn't enjoy it as much as I did.  (It didn't account much for distractions, so "putting in time" during a day when one's time was wasted by a dozen other people just isn't going to sit well.)  But, all in all, the numbers were good, up to date, and once we got used to the system we didn't need constant reminding and fighting from management to enter our time.

Step forward into another job, where we used a system called Remedy.  It was awful, to say the least.  The system itself is highly configurable, so perhaps it can be made better.  And I'm certain we were on an old version, so maybe it has improved since then.  But where the rubber hit the road, it was an absolute pain in the ass to use.  Entering information into the system or retrieving it from the system was akin to a scavenger hunt through Hell.  Needless to say, I patently refused to use it.  Early on in my time at that job there are a few entries I was persuaded to put into the system, but for most of my stay there the reports had me flat-lining across the board.

There was absolutely no incentive to enter my hours.  The system was bulky and awkward and served no purpose to my daily work other than to explicitly get in my way and prevent actual work.  It may have been perceived that I felt I was above such pettiness and wouldn't lower myself to tracking my hours.  Giving it some thought, I'd be lying if I denied such a sentiment.  It wasn't in any arrogant way, really.  It's just that, as a professional, it was a waste of my time and effort.

Most of the employees there had been there for quite some time.  Many had come up through other groups and other departments and perhaps this was all they'd known for a long time, for a good number of them perhaps even all they'd ever known.  They'd gotten used to it, I suppose.  It had beaten them.  "That's the way we do things here" was a common utterance.  (On a side note, I never understood how maintaining the status quo so staunchly made any sense in a company plunging into bankruptcy, but I digress.)  The bottom line was that I wasn't going to use it.  My time is valuable enough that I'm going to spend it doing the work I was hired to do.  If you don't think my work is valuable, then fire me.  (Heh, funny story about that, but I again digress.)

Step forward again to my current job.  Here we use a system called Jira.  I'm pretty happy about this, actually, because I'd wanted to use Jira for some time now and learn more about it.  Perhaps we may even begin using some of its companion products, which would be sweet.  Anyway, entering time is once again a clean, quick and simple process.  It's not quite as streamlined as Rally, so that one is still my favorite to date.  But it is quick and simple and the UI provides enough direct incentive to make it happen on a nearly daily basis.

Logging work still prompts for work descriptions and various other nonsense that I continue to leave blank.  Honestly, the description is in the task.  What did I do for those 3 hours?  I did what was already described.  Hopefully they won't grill us for more information and more tracking.  I honestly doubt they will, it's a pretty casual and extremely efficient place here.  If something holds up progress in any way, it's not going to last.  There is no "status quo" here other than getting the job done.

So, looking back, it would seem evident that it's not really in a developer's nature to avoid time tracking entirely.  It's not beneath us or a waste of our time, provided that it's done properly.  The time tracking system should be tailored to the work, not the other way around.  (And a fantastic example of that is the first one above, at least for the client-site network guys.)  If you're trying to fit the square peg of work into the round hole of the time tracking system you purchased, don't expect good numbers.  But if you, as a manager, take some time to actually understand how your employees think and work and act, then you can get that useful business information from them without an ongoing battle, simply by adjusting the system to accommodate the work.