I can't help but notice that the frequency of C# (or .NET in general) questions on Stack Overflow is dwindling. And the questions that do get asked tend to be about how to use one of Microsoft's pluggable widgets for some specific task, usually exposing entry-level code with no understanding of what the code is doing beyond what the widget's instructions said to do.
It's a little unsettling, to tell the truth. And it makes me wonder if the C# enterprise developer's days are numbered. Java still seems to be going strong in the enterprise, so it's not much of a jump over to that. But honestly, I really do love .NET and where it's gone in the past few years; it just seems like Microsoft is steering it back to the old days, and enterprise developers aren't going to follow it there.
I've been thinking about this for a while, actually. My own personal view on the timeline of Microsoft development may be skewed by my own experiences, both positive and negative, but this is just how I see it. (Consider that a disclaimer, and if you have any insight contrary to my following assessment then please present it.)
Back in the day (pre-.NET), "Microsoft development" generally consisted of VB programmers, fluent in the ins and outs of COM and ActiveX and other such things, rapidly developing applications. Whether it was in Access, ASP, or the more proper flavors of VB (do you remember VB5? blech). This wasn't really "enterprise development" so much as rapid application development, performed sometimes in enterprise environments (or start-ups, or other small environments, etc.).
"Enterprise development" was going strong, but didn't get a lot of press. C++ developers are a reclusive breed, and they don't tend to hang certifications on their cubicle walls to validate their skills. Their software spoke for itself, and the enterprises in which they worked listened. (Sure, that's painting with a broad brush. Bear with me; it's just a generalization.)
Along came .NET, and things began to change. Microsoft was ambitious with this project, and rightly so. They wanted to "unify" things, in no small part because Java was already doing that. Java was at the time nowhere near the enterprise language that it is now (remember "applets"? do they still have those?), but it was a clear and present threat to Microsoft's hold on the developer market. After all, Microsoft sells the widgets, the tools that use those widgets, and the classes that teach those tools. They wanted to protect that.
I remember the early days of .NET when C# was the cool new kid on the block, and I remember hordes of VB6 developers up in arms demanding that VB remain strong in the emerging .NET ecosystem. That sort of made me wonder at the time... What kind of developer ties himself to a specific language? Back then my background was almost purely academic (university), and different languages meant little to me beyond the syntactic expression of the core concepts. VB, C#, it made no difference to me. But the framework, now that was cool.
Continue forward through the life of .NET. Microsoft's own people loved it, and treated it as best they could. 1.0, 1.1, 2.0, 3.0, 3.5... each new version presented wondrous new ways to apply design principles. Though, to be honest, I really didn't like the explosion of web controls and providers in 2.0, and I guess that can be seen as foreshadowing for what I'm discussing here. The web controls and providers were "pluggable widgets," with Microsoft continuing its previous ways of trying to remove the developer from the equation. Remember, that's been their business model all along. They're not interested in helping you make better software; they're interested in helping your manager make the software himself. Or have his secretary do it.
But .NET seemed to be changing this. Enterprises were using it as a serious development platform, and it accommodated them well. 3.0 and 3.5 really moved it forward, and 4.0 continues to push it. New forms of expressiveness, easier ways to apply known design patterns and build robust, maintainable code. Sure, there was still room for the rapid application developers, the widgets still existed. But the environment was much more than that.
And open source projects like Mono got into the game and made their own enhancements and improvements. Overall, community contributions have grown over the years, and some cool stuff has come out of them: IronPython and IronRuby, just to name a couple.
But what's going on now? Have you noticed that some of Microsoft's own people have been jumping ship? IronRuby has been pretty much abandoned by its overlords, though I'm still holding out hope that the open source community will pick up the slack. (This might be something for the Code Dojo to look into, actually.) Microsoft, and by extension Visual Studio (which is a damn good IDE, for all you IDE haters out there), have recently been moving back to the pluggable-widget model.
Why? I guess it's their business model, and that's their right. But the effect I'm beginning to see is that they're being taken less seriously in enterprise development. Sure, there are plenty of enterprises who have bought into Microsoft's widget mentality, and don't see a difference between rapid applications and enterprise development. But there is real development going on out there, and it's being done by real developers. Microsoft's own people know this, and some of them are sticking with it despite Microsoft's turn in the other direction.
To visualize this, I tend to think of enterprise development and Microsoft development as two parallel lines. Those lines have converged more and more throughout the life of .NET. But it appears that they're once again pulling away from one another. And there is no shortage of developers from the Microsoft line who have jumped over to the enterprise line.
It occurs to me that developers who are enjoying the enterprise line would do well to rise above the Microsoft stack and become more tool-agnostic in their development. Otherwise, we may easily find ourselves in the same boat as the VB programmers who were called to arms ten years ago to protect their invested skill sets.
Friday, September 17, 2010
Friday, September 3, 2010
Windows Wizardistrating
Today I'm getting my first real taste of administering a Windows server, and I have to say: WTF?
There must be a better way to go about changing and adding configuration settings in Windows. I feel like I'm naturally led to menus and GUIs off of the Admin Tools or Manager windows. Sometimes these lead me through little wizards (the inspiration for the title of this post) that don't seem to do anything except split a few real options across several mouse clicks. I may just be assuming from prior experience that this is the natural way to do things in Windows, because I do find that some of it can be managed from the command line. Of course, even those commands don't seem very useful sometimes.
I think a lot of this frustration comes from the fact that I have more experience administering a Linux box than a Windows one. In Linux you are naturally led to the command line. If you need to get a little dirtier (which I also find is easier sometimes), you find out what daemon (service, for you Windows people) controls what you want and find its configuration file, located in a fairly consistent way under the /etc directory. I should add that these configuration files are generally plain text, in a human-readable and editable format (imagine that: configuration files meant for us). The mature ones (most are) have excellent documentation on how to work with these files, whose formats are generally consistent as well.
There are some interesting side effects of the Linux way of server administration. You don't necessarily need a GUI at all, all tasks can be automated, finding configuration files is easy, and backing up or restoring a server's configuration is extremely easy. I have even seen people use a source control tool such as Bazaar to keep a server's configuration change history, for easy reverting or for tracking how the environment has changed. In Windows I have no idea where most of these configuration settings are stored, nor whether they can easily be backed up or rolled back without a full system backup/restore.
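As a sketch of that versioned-configuration idea (a scratch directory and a made-up config file stand in for /etc here, and git is shown in place of Bazaar; the workflow is the same):

```shell
# Put a config directory under version control. On a real box this would
# be /etc itself; /tmp/etc-demo and sshd_config are just for illustration.
mkdir -p /tmp/etc-demo && cd /tmp/etc-demo
git init -q
git config user.email "admin@example.com" && git config user.name "admin"

echo "PermitRootLogin no" > sshd_config
git add . && git commit -qm "baseline server configuration"

# A later change is now one command away from being reviewed or reverted:
echo "PermitRootLogin yes" >> sshd_config
git diff --stat                 # see exactly what changed since the commit
git checkout -- sshd_config     # roll the change back
```

Because the whole configuration lives in plain-text files under one tree, "back up the server's config" is just "copy (or push) the repository."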
Another thing that bugs me is that there seems to be no easy way of searching and scanning the Windows Event Logs. The searching/filtering abilities in that GUI are pretty lame, and it turns out the logs aren't stored as a series of simple plain-text files. It's a bit irritating that I can't just do a multi-file grep to find the lines I'm interested in, awk them into a different format, grep for the awk output in another log, and see how different events in the system are linked together.
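To make that complaint concrete, here is the kind of pipeline I mean, run against two tiny fabricated logs (the paths, log contents, and field positions are all hypothetical):

```shell
# Fabricate two small logs standing in for real system logs.
mkdir -p /tmp/logdemo
cat > /tmp/logdemo/auth.log <<'EOF'
Sep  3 10:01:12 host sshd[100]: Failed password for root from 10.0.0.5 port 22 ssh2
Sep  3 10:02:40 host sshd[101]: Accepted password for bob from 10.0.0.9 port 22 ssh2
EOF
cat > /tmp/logdemo/access.log <<'EOF'
10.0.0.5 - - [03/Sep/2010:10:01:30] "GET /admin HTTP/1.1" 403
10.0.0.9 - - [03/Sep/2010:10:05:02] "GET / HTTP/1.1" 200
EOF

# Multi-file grep for the lines of interest, awk them into another format
# (just the offending IPs), then grep a second log for that output to see
# how events in different parts of the system link together.
grep -h "Failed password" /tmp/logdemo/*.log \
  | awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1) }' \
  | sort -u > /tmp/logdemo/suspect-ips.txt
grep -F -f /tmp/logdemo/suspect-ips.txt /tmp/logdemo/access.log
```

PowerShell's pipeline can approximate some of this (e.g. `Get-EventLog` piped through `Where-Object`), but that's a cmdlet-by-cmdlet affair rather than plain files you can attack with any text tool.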
I suppose I just need to spend more time with Windows and PowerShell so I can get past the mismatch between how I expect things to work (Linux) and how they actually work.
Anyway I just wanted to vent a bit of frustration.
Wednesday, September 1, 2010
ORMs for all?
I believe we have done discussion posts on IoC and Repositories. For IoC it sounded like we were all in agreement that once you get into the pattern it's hard not to use it: the complexity fades away, and the extra upkeep saves you code in other ways. For Repositories I believe we hit on using the term properly and restricting their abilities. Now I'm thinking we should do a post to discuss the use of Object-Relational Mappers, or ORMs.
I personally am finding the use of an ORM almost always worth it. Your code takes a completely different shape, and a lot of the worries you had to deal with yourself almost vanish, replaced with much easier-to-maintain ORM configuration code. That is, once you learn the ORM.
The majority of my experience is with NHibernate, and I am still learning this beast, but it puts some extreme power at your fingertips. You sometimes almost feel like an evil conjurer summoning the magic of powerful dark wizards (this is natural, because you are). The problem, of course, is that NHibernate adds its own sort of spells and rituals to the mix that you have to learn. These spells and rituals can also blow up in your face in a different manner than what you might be used to from ADO spell casting. So there is a learning curve, in exchange for easy connection/transaction management, object change tracking, query batching, statically typed queries free of magic strings, sophisticated mapping maintained in a single location, lazy loading, and the ability to have a fuller, more meaningful object model.
Now, this is with NHibernate, which tries to support mapping a fuller object model that is completely independent of the database schema. If you are learning an ORM that follows more of an ActiveRecord pattern, it's probably much easier to use. Generally, though, the easier an ORM is to use, the less flexible it is, so you could run into problems down the road; then again, your application may never need that flexibility. Going with straight ADO will probably be easier to write and get traction with when you don't know an ORM, but then you are managing everything yourself. Even if the code is generated, it's still code you have to maintain. Of course, your application's DAL and database may never change, so managing all that manually may never hurt you.
There is, of course, the performance loss and added overhead of using an ORM. I really don't think this is a problem in our world. ORMs can actually help optimize database access; the trade-off is CPU cycles and memory, but those are the resources we have the most of anyway. And of course there are system/architecture designs that take care of most bottlenecks.
You also have less transparency with an ORM than with ADO and SQL. The ORM has to interpret your query and generate the SQL in order to take full advantage of its mappings. You can run into debugging headaches because you didn't realize what the ORM was going to do. This goes back to the learning curve a team can face, but I'd rather not second-guess myself when I just want to rename a property or change its mapping.
I think I have hit the majority of concerns with using an ORM and I'll restate my opinion here with a bit more explanation.
I personally find it hard to think of a project where I wouldn't be using an ORM of some kind. This does not mean you get to avoid learning about databases and SQL; that complexity is there and constant. But it does mean you get to avoid writing mapping code, casting the spells around ADO (or whatever framework library you have), writing your own change tracking for dynamic updates, implementing lazy loading, working with dummy objects and more procedural-style code, and so on.
That last one about procedural code could be a whole other post, but I think that when it's difficult to create and map a fuller object model, developers will create a dummy object model, and that leads to procedural code. I think most of us understand the side effects of medium+ amounts of procedural code (hard to maintain, easy to make mistakes).
So what do you guys think about ORMs and their use? Certainly we all want to be pragmatic, but, unless the project is very small, I find it hard not to think about using one. ORMs are becoming like an IoC container for me. This might make it hard for me to gel with some teams, though.