Wednesday, May 26, 2010

Service Coordinator thing

So, working on some new code, we got the first chunk of functionality up and running yesterday, but there was something in particular that bugged me.

Currently the code is organized into a Domain project, which basically just has a couple of infrastructure/configuration classes and a service class (which is supposed to handle a domain action for the consumer, not just be a bunch of utility things). Then there is a Domain.Models project that actually has our object model. Finally, rounding out the core projects, there is a Domain.DAL project that has all the NHibernate mapping code and our repositories.

The service class is the one that actually gets used and it turns around and uses the models and repositories to get the job done. Now to get some stuff working I ended up having to write this code:

try
{
  // ... check for and get the external input ...
  using (var unit = new UnitOfWork())
  {
    var service = new ServiceClass();
    service.Process(inputs);
    unit.Commit();
  }
}
catch (ApplicationException e)
{
  var logger = LoggerFactory.GetLogger(this.GetType());
  logger.Error(e);
  // ... any extra context based error logging ...
}

This seems like too much to ask of everyone who ever consumes a service. I can only assume that the number of services will grow along with the number of places consuming them. I have only written this code once and I already didn't like where I put it.

The logger is there because I don't want my service classes to be cluttered with exception logging or unit of work management. I would like them to only worry about their own info/debug logging, be able to use each other, and not worry about units of work already in progress and the like. So that pushes more responsibilities to the top.

After some thought I came up with this:

ServiceCoordinator.Run<ServiceClass>(
  s => s.ProcessMessage(deviceMessage)
);

Basically, the ServiceCoordinator is a static class (I'm not normally a fan of static, so this might change) that internally will take advantage of an IoC container and actually manage the process of resolving the service and wrapping it up in our generic unit of work and logging stuff. It currently has two functions on it:

void Run<TService>(Action<TService> expression)
TResult Execute<TService, TResult>(Func<TService, TResult> expression)
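
Just to illustrate the shape of the second one, a hypothetical call might look like this (IDeviceService, DeviceStatus, GetStatus, and deviceId are made-up names for the example, not real types from the codebase):

// Hypothetical usage of Execute; the service interface and method are illustrative only.
var status = ServiceCoordinator.Execute<IDeviceService, DeviceStatus>(
  s => s.GetStatus(deviceId)
);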

Internally, those functions are implemented basically the same way, except that one explicitly returns a value.

public static TResult Execute<TService, TResult>(Func<TService, TResult> expression)
{
  try
  {
    var service = kernel.Get<TService>();
    using (var unit = new UnitOfWork())
    {
      var result = expression.Invoke(service);
      unit.Commit();
      return result;
    }
  }
  catch (Exception e)
  {
    var logger = LoggerFactory.GetLogger(typeof(TService));
    logger.Error(e);
    throw;
  }
}
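
For completeness, here's a minimal sketch of how Run could just delegate to Execute so both share the same resolve/commit/log pipeline (my sketch, not necessarily how it ends up being implemented):

public static void Run<TService>(Action<TService> expression)
{
  // Adapt the Action into a Func so Run and Execute share one code path.
  Execute<TService, object>(s =>
  {
    expression.Invoke(s);
    return null;
  });
}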

After an initial test and a look at it, I'm liking this better. I wasn't originally using dependency injection, although I'm familiar with it. The lack of DI was, of course, resulting in some hard-to-test scenarios, so I think I may turn around and at least completely separate the logical layers using interfaces and DI, and get that hard dependency on UnitOfWork out of the ServiceCoordinator as well.
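
As a rough sketch of where that refactoring could go (the interface names here are assumptions, not existing code), the coordinator could depend on a small abstraction and a factory resolved from the container instead of newing up UnitOfWork directly:

// Hypothetical abstractions to break the hard dependency on the concrete UnitOfWork.
public interface IUnitOfWork : IDisposable
{
  void Commit();
}

public interface IUnitOfWorkFactory
{
  IUnitOfWork Create();
}

// Inside Execute, the using block would then become something like:
//   using (var unit = unitOfWorkFactory.Create())
//   {
//     var result = expression.Invoke(service);
//     unit.Commit();
//     return result;
//   }
// which keeps the NHibernate-backed implementation swappable in tests.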

Tuesday, May 25, 2010

Separation Of Concerns - It's Not Just For Code

As developers, we're generally familiar with the idea of SoC. It's basic modular design, no matter how you describe it. Complex things are made of many simple things; having many small pieces is better than having one large piece; each segment of code should do one thing, and only one thing, and do it well; etc. Good developers keep this in mind when writing their code. (We all know that sometimes business requirements get in the way of good design practices, but you can't really blame that on the developer.)

But it seems all too common that organizations fail to maintain proper SoC when building their infrastructures. Maybe too many managers drink the Microsoft Kool-Aid, maybe one influential blow-hard forces his ignorant will on the group; the reasons go on and on. All too often, it seems that businesses look so favorably upon tools that "integrate" that they overlook the fact that these same tools often "interdepend." That is, when your infrastructure is all set up, you're suddenly locked in to a design that limits what you can actually do.

(Let me take a small break here to mention that I may not actually have much of a point here. It's entirely possible that I'm just teetering on the edge of a bitching-session and what I have to say isn't of particular value. But I'm going to continue anyway.)

I'll start with a purely hypothetical example. This example may not actually be literally plausible, as I'm not so intimately familiar with the tools it references and their integration options. But the concept being addressed is clear.

Suppose you are an employee of a given organization and you need to build a document to send to a client. For the sake of urgency, it's a big client. And this client is considering your organization as an option among several competitors. No pressure, though. So you fire up Microsoft Word and start composing. It sure is a great tool, isn't it? In fact, your organization loves the Microsoft Office family of products so much that they forked over all the cash they could for all the fully-integrated solutions Microsoft could offer them. It's so easy and convenient for you now and you can do it all from anywhere you need. Sounds good, right?

In this document you want to visually represent some ideas. So you fire up the integration with Microsoft Visio and put together some wonderful charts and graphics to clarify what you're saying. You use various other plugins and whatnot to decorate your document even more, making it as clear and professional as possible. All the while you're basking in the wonderful set of fully-integrated tools that allow you to do this. You finish your document and proudly send it off to the client.

They reply saying that they can't open the attachment. They asked for a Word document, and you sent them something that looks like a Word document, but when they try to open it they get errors about the version of Word and something about Visio, which they don't even have, and even when they get it to open it seems to be missing a lot of information.

Did you have to convert something when embedding it into your document? Did you have to save it in a special way? Well, yes and no. In that particular scenario, yes, you should have performed some extra step to "flatten" your document and save it all down in a more transportable format. But, more to the point, no, you should not have had to do anything extra. Your organization's fully-integrated solution failed. The client wanted a document, but what you ended up sending them was a document with external dependencies.

Don't feel bad, it happens to lots of companies. And unfortunately it ends up being a serious hindrance for a lot of developers working for those companies. And it leads to such gems as this:

My Manager: We use [insert tool here] because it integrates with Visual Studio, makes things a lot easier.
Me: Then I guess we're going to need [insert tool here] installed on the build server if we want any kind of continuous integration.
My Manager: No can do, the license would be too expensive.
Me: Well, if we switch to [other tool], which is free even for commercial use, then we can completely script it out and automate it. We're not really deeply vested in [insert tool here] yet anyway, so switching now wouldn't be difficult.
My Manager: No way, we already spent a lot of money on [insert tool here]. The decision was already made.
Me: Yes, but it's getting in the way. It... doesn't appear to be making things easier at all.
My Manager: But it integrates with Visual Studio, which is what we wanted.
And so on, you can see where this is generally going. You can plug in your values of choice... testing libraries, source control, etc. The problem is always the same: integration to the point of interdependency.

And, honestly, why the hell would you want everything to happen inside your IDE? This makes your IDE a very integral (and very heavy) part of your build and deploy process. Do you want to install Visual Studio on the production servers in order to get certain pieces to work? Of course not. Then de-couple that shit right now.

Source control is a file system operation. It tracks changes to files. That's it. It doesn't need to know that different files mean different things in different contexts. It doesn't care. Your project file is just an XML file. It's not some magical container that the source control system should use to determine what's been changed.

Updating, building, deploying, testing, etc. Everything you do should be a separate and discrete component. Your QA department shouldn't have to open Visual Studio to run their tests. Your production deployment people shouldn't have to open Visual Studio to friggin' copy files to a server.

This isn't to say you can't have tools that integrate. You just need to know what you're getting into. Take, for example, the build server at my last job. It ran a product called TeamCity (you may recognize them as the makers of ReSharper), which integrated with Subversion for source control. So your build server can watch your code and automatically update and run builds accordingly. Makes things easier, right?

Yes, it does. Because TeamCity doesn't need Subversion, and Subversion doesn't need TeamCity. You can switch to a different continuous integration tool or a different source control system with minimal effort. Does this require developers to step outside of the IDE (aka, their comfort zone) once in a while to perform tasks? You bet your ass it does. And if they can't do that, then you need to seriously question the difference between skilled developers and cert-monkeys.

You pay a hell of a lot more for your developers than you do for the tools they use. Which of the two should be in charge of the other?

Wednesday, May 12, 2010

Agatha again

EDIT: I keep having to search for Agatha's project page: http://code.google.com/p/agatha-rrsl/

I was just trying to think through the importance of having both the RequestHandler (request objects get resolved to and handled by this) and the Processor (the class that has the method that performs the action for a request... like the Transaction Script pattern). The system we used to work with had these two ideas. The RequestHandler could act like an anti-corruption layer and sort of filter out processor actions you did not expose to the outside world. Now, assuming this is a more domain-oriented system that represents just your core app and not any integration or externally available services, does this turn into more of a man-in-the-middle code smell?
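
To make the two roles concrete, here's a rough sketch of the shape I mean (these are made-up types for illustration, not Agatha's actual base classes):

// Illustrative only. The Processor owns the transaction script for the domain action.
public class AccountProcessor
{
  public decimal GetBalance(int accountId)
  {
    // ... repository/NHibernate work would live here ...
    return 0m;
  }
}

// The RequestHandler is the thin translation/anti-corruption layer in front of it.
public class GetBalanceRequest { public int AccountId { get; set; } }
public class GetBalanceResponse { public decimal Balance { get; set; } }

public class GetBalanceHandler
{
  private readonly AccountProcessor processor;

  public GetBalanceHandler(AccountProcessor processor)
  {
    this.processor = processor;
  }

  public GetBalanceResponse Handle(GetBalanceRequest request)
  {
    return new GetBalanceResponse { Balance = processor.GetBalance(request.AccountId) };
  }
}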

Could a custom piece in Agatha give you a simple anti-corruption layer? Maybe have a response factory piece that just knows how to package up an exception into a valid response object of the specific type you want. If you are using the Unit of Work pattern to wrap up each RequestHandler then you already have a place for this to go.
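
A rough sketch of what that factory piece might look like (again, made-up names and shapes, not an existing Agatha extension point):

// Hypothetical base response carrying generic error fields.
public abstract class ResponseBase
{
  public bool Success { get; set; }
  public string ErrorMessage { get; set; }
}

// Hypothetical factory: package an exception into a valid response of the requested type.
public class ExceptionResponseFactory
{
  public TResponse CreateFrom<TResponse>(Exception exception)
    where TResponse : ResponseBase, new()
  {
    return new TResponse
    {
      Success = false,
      ErrorMessage = exception.Message
    };
  }
}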

Now, eliminating the Processors, does this mean that any time something internal wants to call an action you have to go through Agatha again? I suppose yes, if you want to reuse the transaction script for a domain action. Agatha has a way to run itself in an in-process configuration instead of a client/server configuration. So you could just call right back into it and get any pre/post-RequestHandler code you had.

I'm undecided on this, so I thought I would write about it. I have to say that the RequestHandler->Processor split seems pretty smart, but the Processors ended up being small collections of actions. Some of the time, this resulted in a Processor needing many dependencies when not all of them were needed for the specific action. That seemed like a bit of a waste and some clutter, but I would hate to have a single RequestHandler for each request (which could be tens or hundreds) and then duplicate that structure again with a Processor for each request just to separate out the dependencies needed to fulfill an action request.

Any thoughts?

EDIT AGAIN:
Had another thought. You can have a shared Request/Response library for all the externally available actions in the system, but then have a non-shared library with the internal Request/Response classes. If they all inherit from the same base type and are tied into your Agatha configuration (mostly just dependency injection configuration), then you could separate externally available and internal-only actions this way. I just thought of this, so it may need to bake a little longer.
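
One way to sketch that split (purely hypothetical, not something Agatha dictates): the external endpoint only accepts request types coming from the shared contracts assembly, while in-process callers can use either library. SharedMarkerRequest here just stands in for any type from the shared library.

// Hypothetical guard based on which assembly a request type lives in.
public static class RequestGatekeeper
{
  // Any type known to live in the shared (external) contracts library.
  private static readonly Assembly SharedContracts = typeof(SharedMarkerRequest).Assembly;

  public static bool IsExternallyVisible(object request)
  {
    return request.GetType().Assembly == SharedContracts;
  }
}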

ANOTHER EDIT:
Since I seem to keep mentioning it, I'll include a link to Domain Model. And why not, here is a link to DTO. I'll also mention that, in regard to these patterns, I don't consider myself a boy with a hammer (the everything's-a-nail mentality you get when you learn something new). Just trying to explore tying Agatha into a system.

Thursday, May 6, 2010

Where's The Warez?

It's taken me a while to notice, but I've become aware of what was apparently a slow, fundamental shift in my overall attitude towards the world of software. Over the years I have certainly shown clear and noticeable growth as a professional developer. Skill sets have improved, design techniques have been honed, and important lessons have been learned. But along the way, quite unintentionally and unnoticed, a side effect of all of this has been dragged along with it.

In my younger days I was not at all averse to "pirating" software. In the atmosphere of college it was just par for the course. Perhaps my mindset on the entire notion of digitally copying things is warped; after all, I was there when a revolution was taking shape. (You should have seen what Shawn did to that network, man. It was epic.) So it wouldn't be entirely incorrect to say that there was a time when I flat out refused to pay for software.

It wasn't just a matter of convenience, either. Sure, the school had all the software you could imagine and it was easy to copy. So I had local copies of everything from Photoshop to Matlab. Most of the time I wouldn't even use it, I just wanted to add it to my collection. But there was more to it than that. This bred in me the idea that, with a little searching and a little work, I could get any software for which I had an immediate need for free.

But what happened to that? With the advent of high speed internet access and p2p software (not the least of which is the vast collection of torrents on the internet), one can argue that it's easier than ever. So why don't I pirate software anymore? Why is it that not only does it simply not occur to me, but even now as I think about it I find no desire to do it?

There was a time when, for example, if I wanted to use ReSharper in my development then I would, without a second thought, spend upwards of days searching for a "crack" or a "keygen" to allow myself the use of it. And now, finding that I miss the use of it from a corporate license at my previous job, I simply consider the weight of that against the cost of a personal license and budget accordingly. If I want it, it costs $x.xx, and that's it.

I don't even know when this change took place. Looking back at the commercial software I've used, I've properly bought everything I currently have. My only Windows machine is properly licensed, I've bought the past two OS X upgrades (10.5 and 10.6) and iLife software. I even buy apps on my iPhone from time to time.

So what changed? Is it my switch to Apple products and the mentality that traditionally espouses that? Is it my increased skill level with open source and homegrown solutions over the years? Or can all this simply be explained as a consequence of growing up and transitioning from a code monkey to an experienced developer?

When I think about this, I can't help but remember something someone once mentioned to me in passing back in college. I was working with the Systems Group at the College of Computer Science at Northeastern University, under the guidance of their Director of Technology. We had a persistent collaborative environment (known as a chatroom to the unwashed masses) on a MOO and we were once discussing a task that needed to be done. (The actual task escapes me now.) A commercial piece of software offered a solution, but the question was whether we had the budget for it. Being young and uninitiated, I typed in something along the lines of: "that all depends on what you mean by 'own' :)" or something to that effect. After a few moments of silence, I received a whisper (or private message, if you will) from one of the team members (I think it was David, may have been Jay) saying "pirating software is generally frowned upon in the IT community."

Who would have thought that now, ten years later, those words would stir up in me a sense of remorse for my younger days? Looking back at what I could have learned from the professionals with whom I worked at the very beginning of my career, I can only wonder where I'd be today. Not that I'm complaining about my current lot in life by any means, I like where my career is going right now. But man... I was young and stupid.

Monday, May 3, 2010

Anybody Want To Work On An Open Source Project?

I had an idea this weekend for an open source project that might be interesting and fun to work on, and I'm wondering if anybody else is interested...

OpenTraffic - A smart controller system for traffic lights.

The idea is to create software to manage a scalable network of nodes (traffic intersections) and control the state of each node with the goal of vehicle energy efficiency. Taking sensor input from each node, it should adjust light patterns to accommodate traffic flows and reduce the stopping and starting of vehicles as much as possible, enabling a city-wide reduction in the energy consumed by vehicles.

Anybody interested? Have any thoughts on the project?

I figure it's an interesting challenge in a few ways. First, it has to operate under real-time constraints. That presents a whole set of challenges in software design that aren't often found in most programming. Scalability will also be an interesting design concern: creating nodes that can self-organize at any scale based on information programmed into each node (the location and direction of neighboring nodes), as in the rough sketch below.
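
Just to make that a little more concrete, here's a purely illustrative sketch of what a node might need to know about itself and its neighbors (none of this exists yet, and the names are made up):

// Hypothetical node model: an intersection knows where it is and how it relates
// to its neighbors, so it can coordinate phase offsets ("green waves") with them.
public class IntersectionNode
{
  public IntersectionNode()
  {
    Neighbors = new List<NeighborLink>();
  }

  public string Id { get; set; }
  public double Latitude { get; set; }
  public double Longitude { get; set; }

  // Neighboring intersections and how to reach them.
  public List<NeighborLink> Neighbors { get; private set; }
}

public class NeighborLink
{
  public string NeighborId { get; set; }
  public double DistanceMeters { get; set; }  // how far away the neighboring intersection is
  public double BearingDegrees { get; set; }  // which direction traffic flows toward it
}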

The hardest part is getting started, so if you're interested let me know so we can discuss how we want to approach this.