Wednesday, May 12, 2010

Agatha again

EDIT: I keep having to search for Agatha's project page: http://code.google.com/p/agatha-rrsl/

I was just trying to think through the importance of having both the RequestHandler (the class a request object gets resolved to and handled by) and the Processor (the class whose method actually performs the action for a request, much like the Transaction Script pattern). The system we used to work with had both of these ideas. The RequestHandler could act like an anti-corruption layer, filtering out Processor actions you did not expose to the outside world. Now, assuming this is a more domain-oriented system that represents just your core app, not an integration point or externally available service, does this turn into more of a man-in-the-middle code smell?

Could a custom piece in Agatha give you a simple anti-corruption layer? Maybe a response factory that just knows how to package an exception into a valid response object of the specific type you want. If you are using the Unit of Work pattern to wrap each RequestHandler, then you already have a place for this to go.
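To make that concrete, here is a minimal sketch of the response factory idea. The Response base class and its error properties are made up for illustration; they are not Agatha's actual types.

```csharp
using System;

// Made-up base response with simple error reporting.
public abstract class Response
{
    public bool IsSuccess { get; set; }
    public string ErrorMessage { get; set; }
}

public static class ResponseFactory
{
    // Packages any exception into a valid response of the expected type,
    // so callers always get the response object they asked for.
    public static TResponse CreateFrom<TResponse>(Exception exception)
        where TResponse : Response, new()
    {
        return new TResponse { IsSuccess = false, ErrorMessage = exception.Message };
    }
}
```

The Unit of Work wrapper would then catch at the boundary and return ResponseFactory.CreateFrom<GetUserResponse>(ex) (or whatever the response type is) instead of letting the exception escape.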

Now, eliminating the Processors, does this mean that any time something internal wants to invoke an action it has to go through Agatha again? I suppose yes, if you want to reuse the transaction script for a domain action. Agatha has a way to run in an in-process configuration instead of a client/server configuration, so you could just call right back into it and get any pre/post-RequestHandler code you had.
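An internal caller might look something like this; a sketch only, and the IRequestDispatcher interface is my own stand-in rather than Agatha's exact API.

```csharp
// Made-up message base types and a dispatcher stand-in.
public abstract class Request { }
public abstract class Response { }

public class DeactivateUserRequest : Request { public int UserId { get; set; } }
public class DeactivateUserResponse : Response { }

public interface IRequestDispatcher
{
    TResponse Get<TResponse>(Request request) where TResponse : Response;
}

public class UserDeactivationService
{
    private readonly IRequestDispatcher dispatcher;

    public UserDeactivationService(IRequestDispatcher dispatcher)
    {
        this.dispatcher = dispatcher;
    }

    public void Deactivate(int userId)
    {
        // Re-entering the pipeline means any pre/post-RequestHandler code
        // (logging, unit of work, etc.) still runs for internal calls.
        dispatcher.Get<DeactivateUserResponse>(new DeactivateUserRequest { UserId = userId });
    }
}
```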

I'm undecided on this, so I thought I would write about it. I have to say that the RequestHandler -> Processor arrangement seems pretty smart, but the Processors ended up being small collections of actions. This sometimes resulted in a Processor needing many dependencies, not all of which were used by any specific action. That seemed like a bit of a waste and some clutter, but I would hate to have a single RequestHandler for each request (of which there could be tens or hundreds) and then duplicate that structure again with a Processor per request just to separate out the dependencies needed to fulfill an action.

Any thoughts?

EDIT AGAIN:
Had another thought. You can have a shared Request/Response library for all the actions the system exposes externally, and then a non-shared library with the internal-only Request/Response classes. If they all inherit from the same base types and are tied into your Agatha configuration (mostly just dependency injection configuration), then you could separate externally available actions from internal-only ones this way. I just thought of this, so it may need to bake a little longer.
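In configuration terms it might look like the following; the container interface, helper, and assembly names here are all made up, the point is just that the two message assemblies get registered with different exposure.

```csharp
using System;
using System.Linq;
using System.Reflection;

public abstract class Request { }

// Hypothetical container surface for registering request types.
public interface IContainer
{
    void RegisterRequestType(Type requestType, bool exposedExternally);
}

public static class ServiceConfiguration
{
    public static void Configure(IContainer container)
    {
        // Shared with clients, ships as its own DLL.
        RegisterRequestsFrom(container, Assembly.Load("MyApp.Messages.Public"), exposedExternally: true);

        // Internal-only, never distributed outside the core system.
        RegisterRequestsFrom(container, Assembly.Load("MyApp.Messages.Internal"), exposedExternally: false);
    }

    private static void RegisterRequestsFrom(IContainer container, Assembly assembly, bool exposedExternally)
    {
        var requestTypes = assembly.GetTypes()
            .Where(t => typeof(Request).IsAssignableFrom(t) && !t.IsAbstract);

        foreach (var type in requestTypes)
            container.RegisterRequestType(type, exposedExternally);
    }
}
```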

ANOTHER EDIT:
Since I seem to keep mentioning it, I'll include a link to Domain Model. And why not, here is a link to DTO. I'll also mention that, in regard to these patterns, I don't consider myself a boy with a hammer (the everything's-a-nail mentality you can get when you learn something new). I'm just trying to explore tying Agatha into a system.

7 comments:

  1. The main thing I like about separating processors from request handlers is that the processors become their own discrete units of business logic. It basically allows you to break up your business functions any way you want without coupling them to your requests and responses. For example, say you have a request/response type to update information about a user. At some later point, the company integrates with a 3rd party service that also needs to be updated when a user changes. But you want to keep those two updates (local user, 3rd party user) separated as their own discrete functions (since they won't _always_ happen at the same time). The processor(s) would have the functions to make those updates, and the handler would be updated to call one or both based on the request it receives. That way you don't have to also make an UpdateOnlyLocalUser request and an UpdateOnlyRemoteUser request and an UpdateLocalAndRemoteUser request, etc.

    The structure I'm working with now is that there is a base request handler which gets all requests. It performs common functions (logging) and maintains response integrity (never return null, for example). Based on the request type, it routes to a handler for that request, similar to how it was done at our last job, except that I have fewer, heavier requests. Rather than a GetUserById request and an UpdatePasswordForUser request and dozens upon dozens of others, I have just a User request which can, internally, be asking to do lots of things. The base handler routes this to the User handler, which examines the request and calls the processor functions it needs to call. This means there is business logic in the handlers; they're not just blindly passing the request along to a processor.

    So far this design hasn't had to scale much in my setup, so I haven't found any problems with that yet. But I'd love to hear any thoughts on it.

    One thing that caught my attention in your post is the part at the end about cluttered dependencies. I can see what you're saying: the processors would have a lot of dependencies that aren't needed for any given request. There would be repositories and such that any given request to that processor _might_ need, but no single request will need all of them. Based on that, I wonder if there's a dependency injection model that supplies you with a dependency when it's used, not when the class that depends on it is instantiated? Something like .NET 4's Lazy<T> might be exactly that (see the sketch below).
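    A rough sketch of that idea, using Lazy<T> so a dependency is only resolved when an action actually touches it (the repository interfaces are made up):

    ```csharp
    using System;

    public interface IUserRepository { void Save(string username); }
    public interface IAuditRepository { void Record(string entry); }

    public class UserProcessor
    {
        private readonly Lazy<IUserRepository> users;
        private readonly Lazy<IAuditRepository> audit;

        public UserProcessor(Lazy<IUserRepository> users, Lazy<IAuditRepository> audit)
        {
            this.users = users;   // nothing has actually been constructed yet
            this.audit = audit;
        }

        public void UpdateLocalUser(string username)
        {
            // The user repository is built only on first use...
            users.Value.Save(username);
            // ...and the audit repository is never built for this action.
        }
    }
    ```

    Some containers can supply Lazy<T> or Func<T> wrappers for a registered service automatically; otherwise a small custom registration gives you the same deferred construction.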

  2. Well, if you wanted to include some context information in the resolver, you could have it include only the objects needed for the action at hand and perhaps produce nulls or NullObject implementations for the ones not needed. That seems a bit complex though, and I think separating things out would be simpler.

    Hmm... thinking about what you said, it sounds like you'll have two Processor methods (one to update local, one to update remote) that take the same UpdateUserRequest object. The RequestHandler is responsible for checking the Request object to see whether both or only one should be called. I suppose you could have multiple RequestHandlers that take the same Request object but start off with checks to see if they should process it or not. When resolving RequestHandlers for a Request object, you could just do a ResolveAll() type call and loop over them (sketched below). Each one could do its own checks and actions as needed.

    I suppose instantiating multiple RequestHandlers (and all their dependencies) could be expensive, but it shouldn't be any different than injecting multiple Processors (with all their dependencies) into one RequestHandler.
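    Roughly what I'm picturing (the handler interface and names are just for illustration):

    ```csharp
    using System.Collections.Generic;

    public class UpdateUserRequest { public string Username { get; set; } }

    // Made-up handler contract: each handler decides whether it applies.
    public interface IHandle<TRequest>
    {
        bool CanHandle(TRequest request);
        void Handle(TRequest request);
    }

    public class UpdateUserDispatcher
    {
        private readonly IEnumerable<IHandle<UpdateUserRequest>> handlers;

        // The container's ResolveAll (collection injection) supplies these.
        public UpdateUserDispatcher(IEnumerable<IHandle<UpdateUserRequest>> handlers)
        {
            this.handlers = handlers;
        }

        public void Dispatch(UpdateUserRequest request)
        {
            foreach (var handler in handlers)
            {
                if (handler.CanHandle(request))
                    handler.Handle(request);
            }
        }
    }
    ```

    The open question with this shape is which handler owns the response (more on that below).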

  3. In my current setup, the processor methods don't actually deal with the request and response objects. They take just the information they need and return just the information needed from them. So for updating user objects, they would take a user DTO (or maybe even just a username string, if the operation on the user itself is always the same and just needs to be performed on a specific user) and might not return anything, but would throw some kind of controlled exception on a failure.

    The UserRequestHandler receives a UserRequest which has flags set asking it to do something (IWantToUpdateLocalUser=true, IWantToUpdateRemoteUser=true) and the information needed to do that. The handler first ensures that all necessary data is present for all requested actions, then routes that data to each action in the processor and collects responses (or known errors/exceptions/etc.) to build the response object. (The sketch at the end of this comment shows the shape of it.)

    The goal here was to keep the action logic as simple, lightweight, and straightforward as possible, and to decouple it from the requests and responses known to the exposing service, since you might want another service or some other application sitting behind the service that can also access this logic without having to go through the service (even with in-process Agatha).

    So far the only thing I see that might cause issues in scaling this design is that requests and responses will change over time to include more actions and the data associated with those actions. I'm not sure how well that would be handled by many client applications in an enterprise environment. One thing is certain: this design is not for a service that's exposed to clients outside of our control (such as client apps developed and maintained by remote customers).

    In the environment here at the bank, it's likely that the logical division of requests and responses would be by application. We have many, MANY disparate applications (most pretty small) that would be accessing this core service. So each one would have a handful of request/response types (many would likely have just one) that are its own, and those shouldn't change unless someone is working on that application and updating them. The goal of this design is to break up processor logic into logical domain areas (like at our last job) and to have requests/responses broken up between the many disparate applications throughout the enterprise (like at my current job). The handlers essentially translate from one logic-partitioning scheme to the other.
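    Here's roughly the shape of that handler in code (all type names are made up for illustration):

    ```csharp
    using System;
    using System.Collections.Generic;

    public class UserRequest
    {
        public bool IWantToUpdateLocalUser { get; set; }
        public bool IWantToUpdateRemoteUser { get; set; }
        public string Username { get; set; }
    }

    public class UserResponse
    {
        public List<string> Errors = new List<string>();
    }

    // The controlled exception the processor throws on a known failure.
    public class UserUpdateException : Exception
    {
        public UserUpdateException(string message) : base(message) { }
    }

    public interface IUserProcessor
    {
        void UpdateLocalUser(string username);
        void UpdateRemoteUser(string username);
    }

    public class UserRequestHandler
    {
        private readonly IUserProcessor processor;

        public UserRequestHandler(IUserProcessor processor)
        {
            this.processor = processor;
        }

        public UserResponse Handle(UserRequest request)
        {
            var response = new UserResponse();

            // Ensure all necessary data is present for the requested actions.
            if (string.IsNullOrEmpty(request.Username))
            {
                response.Errors.Add("Username is required.");
                return response; // never return null
            }

            try
            {
                if (request.IWantToUpdateLocalUser) processor.UpdateLocalUser(request.Username);
                if (request.IWantToUpdateRemoteUser) processor.UpdateRemoteUser(request.Username);
            }
            catch (UserUpdateException ex) // controlled failure from the processor
            {
                response.Errors.Add(ex.Message);
            }

            return response;
        }
    }
    ```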

  4. Yeah, the advantage that I really take away from all this is that you don't have to build up a request DTO just to trigger another action. If you are dealing with a core that has a Domain Model and DTOs, that would mean extra steps.

    Now I'm thinking that the RequestHandlers should just handle the conversion from DTO to Domain Model, and the reverse when the action returns. Although I'm still on the fence about having only RequestHandlers and always going through Agatha for domain actions/events.

    I still don't particularly like having Processors, though. It seems like the grouping could be done just as well with namespaces as with Processors.

    Namespaces mean you'll have more classes, and more objects to create if you need to do several actions in one area, but you are splitting up the dependencies for each domain action/event (possibly duplicating some, depending on IoC configuration) and producing small interfaces.

    I feel like I'm almost getting to a functional style of composition when it comes to handling domain actions/events, but still taking advantage of object-orientation with the Domain Model and relationships.

  5. Also, I have been enjoying this discussion. Perhaps we can meet up again for lunch, draw some stuff out, and discuss it in person.

  6. Something that kills my earlier idea of multiple RequestHandlers per request DTO is: how do you know which response to return? So I think a single RequestHandler per Request is the way to go.

    Right before I left the last place, we were doing more of a Domain Model thing, and our service call implementations mostly turned into the following (sketched in code at the end of this comment):
    - Check DTO for Id
    - Fetch or create model
    - Update model from DTO
    - Save model
    - Return new DTO

    It was a very data-driven app, I suppose, with only certain areas containing real business logic, so that flow makes sense. The DTOs we took in and returned were usually composite DTOs, since we needed info on children or system defaults or some such along with the root/aggregate model's data. (We were starting to mix in some ideas from the building blocks of Domain-Driven Design.)
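    In code, that flow looked more or less like this (model, DTO, and repository types are simplified stand-ins for what we actually had):

    ```csharp
    public class UserDto
    {
        public int? Id;
        public string Name;
    }

    public class User
    {
        public int Id;
        public string Name;
    }

    public interface IUserRepository
    {
        User GetById(int id);
        void Save(User user);
    }

    public class SaveUserHandler
    {
        private readonly IUserRepository repository;

        public SaveUserHandler(IUserRepository repository)
        {
            this.repository = repository;
        }

        public UserDto Handle(UserDto dto)
        {
            // 1. Check DTO for Id; 2. Fetch or create model
            var model = dto.Id.HasValue ? repository.GetById(dto.Id.Value) : new User();

            // 3. Update model from DTO
            model.Name = dto.Name;

            // 4. Save model
            repository.Save(model);

            // 5. Return a new DTO built from the saved model
            return new UserDto { Id = model.Id, Name = model.Name };
        }
    }
    ```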

  7. Some of the guys at work here are hitting the Maduro Room tonight after work; we can discuss in person there if you're interested.

    Ya, it's really a matter of what your domain does when you come up with designs like this. As you said, our last job was highly data-driven; the actual amount of business logic was relatively small, save for a few cases. At my current job it's even further in that direction, almost exclusively a data reporting engine. The software itself is kind of secondary to the data here, so the goal of the design is purely ease of maintenance, so the business can focus resources where it needs to.
