I'm personally a fan of the pattern and think that the (somewhat annoying, but minor) overhead of maintaining interfaces is worth the benefit of having a consistent scheme for how things get plugged together. Also, I don't think it's too convoluted to follow now that I have experience with the pattern and workflow.
Anyway, what do you guys think about the practice? What do you find are the pros and cons? Have you had any run-ins with people new to it? How do they feel about it? Etc., etc.
Oh also, any interesting links discussing the practice would be cool.
I haven't done as much research on the topic as I would like, but I sometimes find myself using DI subconsciously in one form or another. I want to take the next step to the conscious level, though. I see a lot of people referencing Martin Fowler's piece, but you've probably already heard of it. Also, CodeBetter probably has a few good articles on the topic.
It arguably should have its place in every developer's arsenal, but I too have heard the common complaint that it makes code too complicated. I specifically remember a case at a company I worked for where they constantly talked about one of the former employees and how he made everything over-complicated, probably because he used strategies like DI, perhaps excessively...
Do you have any articles to recommend? I also want to look into some of the frameworks for managing DI, since I see them mentioned in passing a lot.
At my last job I was that guy who professed that the architect was over-complicating things, specifically wanting to know a solid reason why we needed to maintain interfaces when each one had only one implementation and it was _very_ unlikely to change.
At my current job, in a more diverse and complex application ecosystem, I got my answer. And it's kind of funny, because the architect at my last job used to work at my current job. Maybe he was preparing for some familiar complexity that was unlikely to come? Or maybe he has preached the good word enough and his proper architectural vision is taking shape? I like to think it's the latter.
In any event, what I'm looking at now is a handful of "domain core" libraries which, in many cases, share certain repositories. They all connect to the core database in some way, several of them connect to other common databases, they all connect to the file system (common network shares), email, reporting services, etc. Naturally, I want to implement these repositories only once.
So each domain core maintains its own interfaces, and the repositories just implement those interfaces and get injected into the domain cores. There's a solid benefit to this pattern that I didn't see in the much simpler environment of my last job. For example:
System A needs to interact with the core database, but it should only ever read from said database. System A should never be able to write to the core database. System B, however, does need to write to it. The repository for this database implements both, but the interface in A only has Get() methods whereas the interface in B has a lot more. Because the projects are set up in Visual Studio such that the repository references A and B, not the other way around, anybody coding in a small part of A just doesn't see or have access to the methods in the repository that they shouldn't be using.
Sure, if the developer really wanted to, they could mess things up, but that's true of any developer in any environment. The idea is that, when they're coding and they use IntelliSense to find a method on the repository, they can only see what's in that interface. They can't _accidentally_ access other methods in the repository; they'd have to make a conscious effort to find that method in a whole other project and add it to the interface they're using.
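To make that concrete, here's a minimal sketch of the shape of that setup. All the names here (`ICoreReader`, `ICoreWriter`, `CoreRepository`, `Customer`) are hypothetical, just to illustrate one repository implementing two differently scoped interfaces:

```csharp
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Defined in System A's domain core: a read-only view of the core database.
public interface ICoreReader
{
    Customer GetCustomer(int id);
}

// Defined in System B's domain core: reads and writes.
public interface ICoreWriter
{
    Customer GetCustomer(int id);
    void SaveCustomer(Customer customer);
}

// One concrete repository implements both. Code in System A is injected
// with ICoreReader only, so IntelliSense never shows it the write methods.
public class CoreRepository : ICoreReader, ICoreWriter
{
    public Customer GetCustomer(int id)
    {
        // ... query the core database ...
        return new Customer { Id = id, Name = "stub" };
    }

    public void SaveCustomer(Customer customer)
    {
        // ... write to the core database ...
    }
}
```

Because the repository project references A and B (not the other way around), the interfaces stay in the domain cores and the write surface simply never appears in A.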
It took me some time to get used to how the projects were organized in Visual Studio, actually. I was very accustomed to the classic UI -> BLL -> DAL approach. Had I designed that setup at our previous job back then, the domain core would be referencing the repositories. I'm glad I've learned better since then, in no small part due to the architects back there.
It's become second nature in a relatively short time. Even though my unit testing is still very light, I generally prefer to code against interfaces and keep my dependencies out of each other's way. In many cases it may be over-complicating things. But it over-complicates them in an understandable and supportable way, which is to say that it doesn't _really_ over-complicate them at all.
Dependency Injection sounds like a fancy name for the Strategy Pattern from the Design Patterns world (http://www.dofactory.com/Patterns/PatternStrategy.aspx). Or maybe I'm just not super-familiar with the term.
At any rate, I would maintain that it only seems really over-complicated to those who do not understand the power and usefulness of interfaces. I know that sounds super-elitist and conceited. I promise I'm not like that in real life.
To me, it seems like DI and the Strategy pattern are sort of your 'textbook' cases for Interfaces. You need common methods/properties to program against, but the implementation (and anything that particular implementation depends on) needs to vary given the context you are executing in.
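For what it's worth, a tiny Strategy sketch with made-up names (`IShippingCalculator` and friends are hypothetical): the consuming code programs against the interface, and the implementation varies with the context you're executing in:

```csharp
public interface IShippingCalculator
{
    decimal Calculate(decimal weightInPounds);
}

public class GroundShipping : IShippingCalculator
{
    public decimal Calculate(decimal weightInPounds) => weightInPounds * 1.5m;
}

public class AirShipping : IShippingCalculator
{
    public decimal Calculate(decimal weightInPounds) => weightInPounds * 4.0m;
}

public class Checkout
{
    private readonly IShippingCalculator _shipping;

    // The strategy is supplied from outside, which is where Strategy and DI overlap.
    public Checkout(IShippingCalculator shipping)
    {
        _shipping = shipping;
    }

    public decimal Total(decimal subtotal, decimal weight) =>
        subtotal + _shipping.Calculate(weight);
}
```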
That's my two cents worth...
Personally, I always write using Dependency Injection. Once you get used to the practice, it actually reduces the overall development time.
As most people seem to get confused about what exactly Dependency Injection is, I'd really recommend looking further into the principle. It definitely is not just the Strategy Pattern renamed, and it's much more than just design by contract. At its root, DI is the migration of the instantiation of an implementation's dependencies out of the executing class. (Check out the links at the bottom for a more in-depth explanation.)
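A before/after sketch of that definition, with hypothetical names: in the first class the dependency is instantiated internally; in the second, the instantiation has migrated out to the caller (or a container):

```csharp
public interface IOrderRepository
{
    void Save(string order);
}

public class SqlOrderRepository : IOrderRepository
{
    public void Save(string order) { /* ... write to the database ... */ }
}

// Without DI: the class instantiates its own dependency.
public class HardWiredOrderService
{
    private readonly SqlOrderRepository _repository = new SqlOrderRepository();

    public void Place(string order) => _repository.Save(order);
}

// With DI: the dependency is supplied from outside the executing class.
public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public void Place(string order) => _repository.Save(order);
}
```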
As far as complexity of the application goes, the complexity exists in one form or another. With DI, it will live more in the configuration of the resolving container (which nicely encapsulates that logic in one place in your application). Without DI, that contextual complexity is diffused out among the concrete implementations of your objects.
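To illustrate where that configuration complexity lives, here's a toy resolving container. This is a sketch, not a real library; production containers (Castle Windsor, StructureMap, Unity, etc.) do far more:

```csharp
using System;
using System.Collections.Generic;

// A minimal container: all the "which implementation goes where"
// decisions are registered in one place.
public class TinyContainer
{
    private readonly Dictionary<Type, Func<object>> _registrations =
        new Dictionary<Type, Func<object>>();

    public void Register<TService>(Func<object> factory) =>
        _registrations[typeof(TService)] = factory;

    public TService Resolve<TService>() =>
        (TService)_registrations[typeof(TService)]();
}

// Composition root: configure once, resolve everywhere.
// var container = new TinyContainer();
// container.Register<IOrderRepository>(() => new SqlOrderRepository());
// container.Register<OrderService>(
//     () => new OrderService(container.Resolve<IOrderRepository>()));
```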
That code-based complexity is something with which David is very familiar. He can verify the 3000-line methods with upwards of 50 nested IF statements and 5 SWITCH statements from his last job, each having upwards of 1000 different possible code paths. Simplifying those without DI would just spread the complexity across multiple objects that are still tightly coupled together, which really isn't a better situation.
With DI, that complex mess can be decoupled in such a way as to be reduced down to a set of single linear code processes. Each process can be isolated and tested individually. In one actual real world case, 1000 code paths became 14 isolated testable simple operations.
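As a hedged sketch of what that decoupling can look like (hypothetical names, not the actual code from that job): the nested branching becomes a set of small handlers, each a single linear, individually testable path, and the container decides which handlers exist in a given context:

```csharp
using System.Collections.Generic;

public class Order
{
    public bool IsRush { get; set; }
    public bool IsInternational { get; set; }
}

public interface IOrderHandler
{
    bool CanHandle(Order order);
    void Handle(Order order);
}

public class OrderProcessor
{
    private readonly IEnumerable<IOrderHandler> _handlers;

    public OrderProcessor(IEnumerable<IOrderHandler> handlers)
    {
        _handlers = handlers;
    }

    public void Process(Order order)
    {
        // One linear loop replaces the nested IF/SWITCH branching.
        foreach (var handler in _handlers)
        {
            if (handler.CanHandle(order))
            {
                handler.Handle(order);
                return;
            }
        }
    }
}
```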
The need for your development group to use Dependency Injection is directly proportional to the amount of time you spend refactoring your code base.
If you deliver code that stays 95% static and untouched until it is rewritten in new technology five years down the road, then it's not absolutely necessary to write using Inversion of Control. It would still be beneficial for that 5% you do have to change. Additionally, it would make the QA process a bit easier during the initial development, but that's still a cost/time analysis for overall benefit to the project.
Now if you find yourself in a more enterprise-oriented market, somewhere you only get to spend 3-4 months creating an application in which you will spend the next 4-6 years constantly modifying and adding functionality, you might want to write the application with a bit more focus on reducing the effort required to modify it.
This is one of those subjects I could go on and on about, but luckily, a lot of really brilliant developers already have. So here's a few links to get you started. I'd recommend digging through more of Ayende's posts on the subject. I find him very enlightening.
http://en.wikipedia.org/wiki/Dependency_injection
http://martinfowler.com/articles/injection.html
http://en.wikipedia.org/wiki/Solid_(object-oriented_design)
http://ayende.com/Blog/archive/2007/08/21/Dependency-Injection-Applicability-Benefits-and-Mocking.aspx
I definitely agree that maintainability is key here. Even if one doesn't expect that the code will ever change, it still makes sense (from my perspective) to design it from the beginning to allow that change. It may take a little initial effort to get used to writing code that way (as it did for me), but you'll make yourself happier in the long run when you go back to support it. (Remember Eagleson's Law: "Any code of your own that you haven't looked at for six or more months might as well have been written by someone else.") And it's also very much worth the effort if for no other reason than to keep one's self from becoming lazy or falling into a comfort zone.
Strategy Pattern, Dependency Injection, etc. For me (and Bink may correct me or tell me to go read those links, which I will), it all fits nicely under the category of separation of concerns. Rather than building terribly complex things, build many simple things. There can still be complexity in there, and as I mentioned in a previous post you can still have very long functions that address very long business processes, just so long as it's all "one thing" and properly encapsulates one piece of functionality. Do one thing and do it well. (There's the old UNIX guy in me talking again.)
I'm sure there's a lot more to dependency injection and inversion of control than I use it for on a daily basis. But, for me, it just boils down to that... separation of concerns. Each module should require its caller to give it what it needs, rather than try to sort it out itself.
Do you guys feel like you lose any time due to maintaining interfaces or not being able to use "Edit and Continue"?
I wish you guys could come work where I work and help educate people on thinking in terms of OOP, interfaces, patterns, etc. Here it seems that if you put something in a new assembly, you're object-oriented... (no lie). It's refreshing to hear people talk intelligently about DI, the Strategy pattern, separation of concerns...
If you guys have any open positions, I'd love to take a trip to Nashville for an interview :)
I'm just glad that the post generated some comments. I was curious about people's views and their experiences. I might have to do another one on ORMs and object models.
Bink: Thanks for that Ayende post. That was a pretty good one.
Sorry for the late response. Apparently you can't paste into a textarea on the iPad, and it took me a while to get to a computer where I could access this site. I am spoiled on the iPad, and work internet access sux. Anyway...
I completely agree. Once you get used to DI, it's tough to go back. At first, I was interested in the pattern in order to simplify my testing efforts. Honestly, that just didn't happen; testing was still a pain. Just because I could test easily didn't mean I could test well, which is still true, unfortunately. After a while of using DI, the benefits of separation of concerns come into play, which I am glad to see is becoming more comfortable to you guys here.
At this point, I see DI as the gateway drug into object-oriented design. Once you start to see what you gain from such a simple pattern, you can't help but see what else is out there. I would highly recommend the classic Gang of Four book, and looking around at Martin Fowler and Uncle Bob for inspiration. Really, DI is just the beginning.
One of the things you mention is the overhead of maintaining interfaces in your code. At its core, the interface has nothing to do with the pattern. You can inject concrete classes or factories just as easily as an interface. You see interfaces being used in DI scenarios/examples to comply with the Liskov substitution principle and the interface segregation principle. As far as losing time maintaining the interfaces goes, that's what ReSharper is for.
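To show that the pattern doesn't depend on interfaces, here's a sketch with hypothetical names where one injected dependency is a concrete class and the other is a Func&lt;T&gt; factory:

```csharp
using System;

public class SmtpMailer
{
    public void Send(string to, string body) { /* ... send the mail ... */ }
}

public class Notifier
{
    private readonly SmtpMailer _mailer;              // concrete class, injected
    private readonly Func<SmtpMailer> _mailerFactory; // factory delegate, injected

    public Notifier(SmtpMailer mailer, Func<SmtpMailer> mailerFactory)
    {
        _mailer = mailer;
        _mailerFactory = mailerFactory;
    }

    public void Notify(string to, string body) => _mailer.Send(to, body);

    public void NotifyAll(string[] recipients, string body)
    {
        foreach (var to in recipients)
            _mailerFactory().Send(to, body); // a fresh instance per send
    }
}
```

The instantiation still lives outside the executing class either way, which is the actual point of the pattern.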
"just because I could test easily didn't mean i could test well"
I'm curious about that, actually. I haven't really had the drive to develop proper unit/integration testing practices, and this puts me in the same position with my tests. Just because I can doesn't mean I should. I see "no tests" as being better than "bad tests" because the latter can give a false sense of stability.
Sounds like something that would be interesting to explore...
Heh, maybe another discussion post.
I'm finding that when I write unit tests, I wish I had a more sophisticated means of expressing the test in code. I'm just using NUnit right now and I haven't fully explored what it can do, but it feels too clumsy sometimes. This makes me interested in things like RSpec, since that seems to have become popular.
When I write tests before I code, I feel like I get a lot more out of the exercise. So I think I'm starting to get the appeal of TDD. You get sort of two passes at your interface design, and you document what you want it to do before you get into the details of the implementation. Unfortunately, my discipline for testing still isn't very high, and I haven't written very many now that the project is in full development swing. In the beginning I wrote a fair amount (I have about 130 tests right now).
I think I'm getting better at it, though, and starting to understand another good result of TDD: you generally try to make small testable steps. Each small piece of code is much easier to write and test, and debugging is almost nil when you get into a bit of a flow, since the feedback on each change is so fast. Again, my discipline to keep up with it isn't very good.
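For reference, a test-first step with NUnit can be as small as this (the class under test is hypothetical; the test gets written first, then the smallest implementation that makes it pass):

```csharp
using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Total_AppliesTenPercentDiscount_ForLargeOrders()
    {
        var calculator = new PriceCalculator();

        decimal total = calculator.Total(subtotal: 200m, isLargeOrder: true);

        Assert.AreEqual(180m, total);
    }
}

// Written after the test, with just enough logic to pass.
public class PriceCalculator
{
    public decimal Total(decimal subtotal, bool isLargeOrder) =>
        isLargeOrder ? subtotal * 0.9m : subtotal;
}
```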
Another interesting thing: I believe a beta feature for IronPython 2.7 will allow you to add attributes to your Python classes/objects letting them impersonate CLR types. This sounds awesome, since you could apply all this stuff on the fly and pass a DLR object in as a CLR interface or something. I would imagine this could make writing tests and doing mocking more flexible.
I should also add that I have read that TDD is easier and flows better in dynamic languages, since there is a bit less of the ritual you get with static languages. Also, since they can be interpreted fairly quickly, a save to the file could mean instantly running the tests.
This video by Hashrocket is what pops into my mind. He has three columns on the screen: code, tests, test runner output. I believe he gives an example where he just modifies something and saves, then sees the result in the last column. Pretty cool.
http://vimeo.com/2987044
Nah. TDD is also too complicated ;)
Seriously though, I 100% agree with your assessment of TDD; there are numerous benefits. Also, static/dynamic does make a difference in my experience; I'm dealing with both worlds as I write this.
In a more general sense, I think it's crucial when it comes to testing to minimize friction of any kind. For example, I have joined a project that does a significant amount of its testing as automated user acceptance testing rather than traditional unit tests. (Spare your hate, it wasn't my choice to make.) The friction primarily comes from the difficulty in composing tests (since they involve abstracting and interacting with the user interface) and in actually running them (they don't exactly run in a matter of milliseconds). In the Django world, I like the simplicity of embedding tests in docstrings for simple procedures (which also encourages small, testable code). Something similar could easily be done for JavaScript, if it hasn't been already. This approach may also facilitate testing first by eliminating the disconnect between code and test that adds just a bit more friction.
Maybe as a community we could do a better job of educating one another without being arrogant and condescending?