Since that find-as-you-type thing was kind of fun, I decided to implement another little "nice to have" for the client in jQuery. This time it's a simple piece of functionality that obscures form elements and requires an extra, explicit step from the user before the data can be edited. The requirement was that users need to confirm what they're doing to avoid accidental edits. Some people suggested putting checkboxes next to the elements that the user would have to click to confirm the edit, and so on. But I thought of something that seemed a little more elegant.
The idea was simple. For each input element on the form, we just want to hide the input and replace it with a text label that shows the data but isn't an input element. Then we add a button or link that the user needs to click in order to make the form editable.
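None of the actual plugin code appears in this post, but a minimal sketch of the idea might look something like the following. Everything here is a placeholder (#myForm, .guard-label, and #editLink are invented names for this sketch), not the real implementation.

// Replace each input in the form with a read-only label showing its value.
$('#myForm :input').each(function() {
    var input = $(this);
    input.hide();
    input.after($('<span class="guard-label"></span>').text(input.val()));
});

// The extra explicit step: clicking the edit link removes the labels
// and reveals the inputs so the form can be edited.
$('#editLink').click(function() {
    $('#myForm .guard-label').remove();
    $('#myForm :input').show();
    return false;
});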
It also occurred to me that this didn't require any server-side code changes at all. None. It's just JavaScript and HTML manipulation, after all. The ASP.NET code shouldn't even know that anything's happening. It just needs to track its own controls and that's it.
Given that this was as simple to implement as the find-as-you-type thing, I figured I'd take some extra time and abstract this out into a proper jQuery plugin. It's only needed on one page for now, but it could easily be needed on other pages (and likely will be) on the client's site. So I might as well make it portable.
Thus, FormGuard was born. I know I'm just starting small, but man I love JavaScript. And the fact that I'm just starting small is even more exciting because it means there's so much more to do.
Wednesday, September 21, 2011
Find-As-You-Type
There's an enhancement request at my client's office to add a find-as-you-type search feature to one of their forms. They have two applications, one is an ASP.NET website and one is a WPF thick client, and they need this feature added to the corresponding form on both. I'll be developing the former and a colleague will be developing the latter.
(Side note: The former makes use of a single line of code in the UI from a jQuery plugin, whereas the latter will apparently require writing a custom control and a few days of work. Man, I love being a web developer.)
Naturally, this functionality is easily implemented on the web page with the jQuery UI Autocomplete plugin:
$('#searchText').autocomplete({ source: 'AJAXHandler.ashx' });
Behind the AJAX handler will be a call to the same back-end code that the thick client uses. In this case the handler will simply translate the returned models into strings to hand back to the AJAX call. Very simple. Since it took all of a few minutes to write this, I found myself with a lot of extra time. So I wondered what would be involved in actually writing this UI functionality myself (pretending I lived in a world where jQuery UI didn't exist, though for the sake of this exercise jQuery itself still does).
Since this is a custom implementation for this specific page, and since I only had about an hour to work on it, it's not nearly as robust and universally pluggable/applicable as the jQuery UI version. But it was fun to write. I started with the div to hold the results:
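(The markup itself didn't survive the blog formatting, but given the CSS that follows it was presumably little more than an empty placeholder like this:)

<div id="searchResults"></div>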
And, of course, style it (including making it invisible by default):
#searchResults {
    background: white;
    border: 1px solid black;
    position: relative;
    top: 0px;
    left: 0px;
    display: none;
    text-align: left;
    float: right;
}
Next, we need to populate the results into the div when we type:
$('#searchText').keyup(function(key) {
    if ($('#searchText').val() != '') {
        // The search string isn't empty,
        // so remove any current results and search again
        $('#searchResults').empty();
        $.ajax({
            cache: false,
            url: 'AJAXHandler.ashx?term=' + encodeURIComponent($('#searchText').val()),
            success: function(data) {
                // Build the HTML results from the response
                // (the original markup here was lost to the blog's HTML escaping;
                // a span with the searchResult class is assumed)
                var options = '';
                for (var i = 0; i < data.length; i++) {
                    options += '<span class="searchResult">' + data[i] + '</span>';
                }
                // Add the HTML results to the element and show the element
                $('#searchResults').html(options);
                $('#searchResults').show();
            },
            error: function(msg) {
                // Currently do nothing. The user can continue
                // to search by editing the text again.
            },
            dataType: 'json'
        });
    }
    // If the text is empty, clear and hide the element.
    else {
        $('#searchResults').empty();
        $('#searchResults').hide();
    }
});
Works like a charm. Well, it doesn't have the delay functionality that the jQuery UI plugin has, and I'm not going to bother to add it in this exercise. But the concept would be simple: create a variable to hold a timer, reset it on each keystroke, and perform the server request only when the timer elapses. I'm not 100% sure of the most elegant way to do that in JavaScript, but if I had more time I'm sure it wouldn't be difficult.
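For what it's worth, a rough sketch of that delay, assuming a plain setTimeout/clearTimeout approach (doSearch is a hypothetical stand-in for the AJAX request shown above):

// Reset a timer on every keystroke and only hit the server
// once the user has paused typing for 300 milliseconds.
var searchTimer = null;

$('#searchText').keyup(function() {
    if (searchTimer) {
        clearTimeout(searchTimer);
    }
    searchTimer = setTimeout(function() {
        doSearch($('#searchText').val()); // hypothetical stand-in for the AJAX call
    }, 300);
});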
Anyway, now we need to make the list navigable. Let's start by defining a CSS class for the results and another one to indicate that something is currently selected:
.searchResult {
    display: block;
    cursor: pointer;
    width: 100%;
}

.searchResultHover {
    background: #FBEC88;
}
I used a class for "hovering" instead of the CSS :hover pseudo-class because we're going to use the existence of this class on an element to indicate that it's currently selected, so we'll need it in a jQuery selector later. Now we need to manually set the hover effect on the search results:
$('.searchResult').live({
    mouseenter: function() {
        // Remove the hover effect from all of the elements
        $('#searchResults').children().each(function() {
            $(this).removeClass('searchResultHover');
        });
        // Add the hover effect to this element
        $(this).addClass('searchResultHover');
    },
    mouseleave: function() {
        // Remove the hover effect from this element
        $(this).removeClass('searchResultHover');
    }
});
As always, there's likely a more elegant way to do this. But it gets the job done. I've sort of doubled-up the logic of removing the hover effect just in case there's some weirdness with the mouse. The idea is that at any given time no more than one element (but potentially none) should have this effect.
We also need to make the elements clickable and have them set the text into the input:
$('.searchResult').live('click', function() {
    $('#searchText').blur();
    $('#searchText').val($(this).text());
    $('#searchResults').empty();
    $('#searchResults').hide();
});
By this point I still had a little bit of time left, so I figured I'd add keyboard navigation functionality. So when the focus is on the search text input, we need to handle the arrow keys. We'll do this by adding some more conditionals to the keyup event binding from earlier:
// If the user presses the down arrow, navigate down the current list.
if (key.keyCode == 40) {
    var selection = $('.searchResultHover').first();
    if (selection.length) {
        // A selected element was found. Move down one.
        selection.removeClass('searchResultHover');
        selection.next().addClass('searchResultHover');
    }
    else {
        // No selected element was found. Select the first one.
        $('#searchResults').children().first().addClass('searchResultHover');
    }
}
// If the user presses the up arrow, navigate up the current list.
else if (key.keyCode == 38) {
    var selection = $('.searchResultHover').first();
    if (selection.length) {
        // A selected element was found. Move up one.
        selection.removeClass('searchResultHover');
        selection.prev().addClass('searchResultHover');
    }
    else {
        // No selected element was found. Select the last one.
        $('#searchResults').children().last().addClass('searchResultHover');
    }
}
Finally, our end users are going to want to be able to select an element by hitting return when they're navigating with the arrows like this. (I would recommend tab instead, but there were two barriers to that. First, the users want to use return. They don't know of tab completion standards or anything like that. Second, tab keyboard events are kind of weird in JavaScript. I tried using tab for a while and it didn't quite work right between browsers. IE in particular, which is the client's browser of choice, was a little wonky with me hijacking the tab key.)
$('#searchText').keypress(function(key) {
    // If the user presses the return key, "click" the currently selected list element.
    // This is happening in keypress instead of keyup for a number of reasons,
    // not the least of which is because I was trying to use tab instead,
    // which throws a keyup event when tabbing into the field.
    if (key.keyCode == 13) {
        // Click the highlighted result, or fall back to the first one.
        var selection = $('.searchResultHover').first();
        if (!selection.length) {
            selection = $('.searchResult').first();
        }
        selection.click();
        // Explicitly set the focus, just in case
        $('#searchText').focus();
        // Return false so we don't accidentally submit a form or anything.
        return false;
    }
});
Well that was fun. Again, it's not nearly as mature a solution as the jQuery UI Autocomplete plugin. But, again, it gets the job done. A lot more polishing can be done here and it can be generalized quite a bit, but all in all not bad for an hour's work.
Wednesday, September 14, 2011
Test Everything. Every Time.
My client recently asked me an interesting question. We're currently in the process of doing various bug fixes and taking care of a lot of "low-hanging fruit" while the business prepares for the larger projects coming our way. So we're about to push out a maintenance release which is about 95% composed of support request tickets. And the client asked me, "Which parts of the system should we test?"
They wrapped it up in a lot more jargon than that, tossing around terms like Risk-based Testing and the like. And they spent some time explaining to me why this is important. I get it, I see what they're saying. But that doesn't change my answer to the actual question.
Which parts of the system should we test?
All of them.

No, it's not a cop-out answer to the question. It's my advice, and you can take it or leave it. The decision is yours. Now, the client explained to me that testing everything is impossible. It's a dream that can never be realized. To even attempt it would be cost-prohibitive.
Why would it be cost-prohibitive?

Because testing the software is a slow manual process. We only have one tester and there are only so many hours in the day. And since that tester isn't familiar with every nook and cranny in the system, he isn't going to be able to test everything. It's unreasonable.
Now we've touched upon the source of the problem. There are two key sentence fragments here which jump out at me and which identify the root cause of what's really wrong with the system, regardless of what changes or bug fixes we make:
- "slow manual process"
- "isn't familiar with every nook and cranny in the system"
Nobody here made this decision, and those who were involved in the past probably didn't even know they were making this decision. But it was made nonetheless. Testing is a slow manual process because they, as a company (not any one particular individual, I hope), decided that it should be a slow and manual process. (There's an old saying, "Never attribute to malice what can be explained by incompetence." I'm honestly not sure if either of those root causes apply here. I wasn't around when the system was developed, so I don't know what the story really was. But in this particular case, the root cause is irrelevant. The net result is the same.)
This decision that the company made doesn't change my recommendation. It doesn't change what my career has taught me to be a best practice. It doesn't change the fact that one should fully test everything one does in one's software. The only thing it changes is the cost of that testing to the business. And cost isn't my department. I'm just telling them what they should do; the fact that they chose (actively or passively) to do it in a prohibitively expensive way is another matter entirely.
In the past, had they sought my advice (or that of any consultant from my employer), the answer would have been the same. But in the past we may have been able to steer the design of the software to allow for more cost-effective testing. We'd have been happy to provide it. But in the past a decision was made by the business not to seek the advice of industry professionals. I can't change the past. But I won't let this one company's past change my mind about recommendations and best practices. I'm there as a consultant to bring these practices to the business. Not to change my practices to fit decisions the business made about software in my absence. My advice still stands.
Then there was that second troubling statement, whereby the tester isn't familiar with the system. That one frightens me even more, honestly. I can get the fact that one doesn't have automated tests. I can get the fact that QA and QC aren't in the budget. It's not what I recommend, but it's something I can at least understand. But not even knowing what one's software does? How can one even begin to justify that?
Isn't it all documented somewhere? Aren't there training materials for the users? Requirements for the software? Business designs? Technical designs? Even just an intuitive interface that purports to do what the business actually does?
This goes back to something I've been recommending since the day I got there. You need to model your domain. We can argue all day about what that means in the code and how to design the applications to make use of this information. But for the business this concept is of paramount importance. If you want your software to do what it needs to do, you need to define what it needs to do. Anything which deviates from that definition is a defect in the software. That definition is the specification for the software. It's the training manual. It's a description of what the business does. You should know what your business does.
If the tester doesn't know what the software is supposed to be doing, who does? Is there even agreement across the enterprise of what the software is supposed to do? For any given piece of functionality, how does one know if it's doing what it should be doing if what it should be doing is undefined? One employee thinks it should work one way, another employee thinks it should work another way. Who's correct?
Don't ask the developer, because I'm just going to tell you what I was told to implement and how I implemented it. To me, it's all correct (save for the occasional actual bug). If it physically works, it works as designed. In a system where the behavior isn't defined, there are by definition no defects. After all, a defect is where the system isn't doing what it's supposed to be doing. But if nobody knows what it's supposed to be doing, then that condition can never be met.
This leads us to another decision that was made by the business at some point in the past. Someone who was in a decision-making position decided that the behavior of the system should be defined by the developer(s). The behavior of the software, and the validation thereof, was entirely defined by and known only to someone who isn't there anymore. Intentionally or not, this was by design. Again, this doesn't change my recommendations today. It just makes it more difficult for them to follow my recommendations.
This is all well and good and has made for a nice little rant, but where does this leave us? How can we make this constructive? Simple. Learn from the past. The business is growing, the operational costs are growing, everything is going to get more expensive as time goes on because the impact to the business will have higher and higher dollar values. None of us can change how this software came to be. None of us can change the decisions that were made in the past. But we can make decisions right now.
Model the domain, build tests against that model. Then writing and maintaining the actual software becomes almost trivial.
Thursday, September 1, 2011
Your Code Has No Value
I run into this a lot with clients. They have a lot of code and they want us to do something with it. Clean it up, add features to it, re-implement parts of it, etc. And while I've yet to come across a codebase which was designed to be updated (that is, a codebase which isn't held together by duct tape and baling wire), I've consistently come across management who clings to that code with all the fervent zeal of an old-timey miner clinging to a nugget of gold.
But the code itself isn't even an asset. It is, in fact, a liability. The business knowledge is an asset. The tests which validate the code are an asset. The people who understand the business are an asset. The intelligence behind the code is an asset.
The code itself is forever suspect. After all, when was the last time you encountered an enterprise software ecosystem which had no bugs? Or wasn't missing any features? It's guaranteed to happen. And the only way to really solve those problems is to have that business intelligence. Sure, you can address various little bugs and problems with various little patches to the code. That is, you can correct code by adding more code. But what are you gaining when you do that? You're just piling on more code. The problem wasn't solved with a business-intelligent design, it was just compounded.
For example, let's say you have a data element in your system which we'll call Widgets. For any given Widgetman, you can have multiple Widgets. So without any further understanding beyond that, the developer has a Widgetman table and a Widget table where the latter has a foreign key to the former. Then the application just has some simple forms over these tables. No business layer, just forms and data. Notice... no business intelligence.
Now there's a problem. Some users are complaining that there are duplicate Widgets. Apparently there's a rule that for any given Widgetman there shouldn't be identical Widgets. The users can't tell the two apart and, to the business, it doesn't make any sense. So a bug is filed. (Well, there's a pretty significant argument here as to whether or not it's a "bug." After all, it met the requirements. Nobody in the business provided any business intelligence. The "rule" that isn't being followed in the code wasn't known or understood, at least not by anybody involved in writing the code/tests/etc.) The code isn't doing what the business wants.
This is, of course, an overly-simple example. If this was all the system did then it would be a simple fix. But let's assume for a minute it's a very complex system. Let's assume that adding a business logic layer is major surgery, surgery to the point of nearly re-writing the entire system. So what do you do? Well, you're a professional software developer. You know how the system should be designed. The client asks how to fix the problem, and you present options.
One option is to add a business logic layer. Models, services, however it should be architected. The point is, you've identified that there's no business intelligence built into the software and it really needs to be. So you lay out a high-level plan for major surgery. Included in this option is some re-engineering of the data persistence so that this problem can't happen. After all, the business logic is only responsible for the data in motion. The database is responsible for the data at rest. Both of them need to have the business intelligence baked into them.
The other option is to add some code to either the form or the data access layer to check for a "duplicate" (which doesn't match the system's actual definition of "duplicate," since identity there is governed only by the table's auto-generated primary key; instead it's a one-off custom definition living somewhere less intuitive than the primary key). The form would then not allow the user to save a duplicate.
(A third option, and one I've seen more often than I'm comfortable admitting to the universe, is to write a script that a support tech runs against the database to detect duplicates and delete them. So someone would just occasionally run this script and manually delete "duplicate" records. In this case there is no business intelligence anywhere except in the head of the support tech who just knows to do this from time to time.)
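To make the first option a little more concrete, here's a hedged sketch of its core idea: a model that owns the rule, with a tiny test against it. It's written in JavaScript only to stay consistent with the other snippets on this blog (the real system is ASP.NET), and Widgetman/addWidget are purely illustrative names, not anything from the client's code.

// Illustrative only: the "no duplicate Widgets per Widgetman" rule lives
// in one place, in the model, where it can be tested and documented.
function Widgetman(name) {
    this.name = name;
    this.widgets = [];
}

Widgetman.prototype.addWidget = function(widget) {
    for (var i = 0; i < this.widgets.length; i++) {
        if (this.widgets[i] === widget) {
            throw new Error('A Widgetman cannot have duplicate Widgets.');
        }
    }
    this.widgets.push(widget);
};

// A tiny test that both validates and documents the rule.
var owner = new Widgetman('Example');
owner.addWidget('blue widget');
try {
    owner.addWidget('blue widget');
    console.log('FAIL: duplicate Widget was allowed');
} catch (e) {
    console.log('PASS: ' + e.message);
}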
That first option is a tough sell. In fact, I don't think anybody's ever bought it. But it's not a tough sell because of the work involved, or the time it would take, or anything immediately concrete like that. Anything can be budgeted if it makes sense, and it certainly makes sense to understand and document the business process and intelligence and codify it in an application.
It's a tough sell because that system that's already in place has already cost the client money. They paid somebody to write that. So somewhere in the business there's a spreadsheet which tracks how much they paid and how much it saves them on operational costs, which feeds into an equation whereby they've calculated how long that code has to continue to be used in order to break even on that past expense.
They've already trained their users not to enter duplicates. (This was an operational cost.) They have a support tech who addresses the issue when it happens. (This is an operational cost.) They hire you to do the second option above. (This is an operational cost.) Later developers will have to find and understand this one-off code and will have to re-learn this lesson in future enhancements, since it's not intuitively designed. (This is an operational cost.) After all, the business knowledge isn't captured anywhere. There's just some random conditional either on the form or in the data access layer. Someday the client may have someone write another app on top of those tables, and that app won't have that logic. Someday the client is going to tell another developer that "in our system a Widgetman can't have multiple similar Widgets" and that developer is going to look at the code and reply with a meek "um... ya they can." (This is an operational cost.) And so on.
Yes, the code they have right now did cost them money. But that doesn't mean the code itself is valuable. The code is lacking in business intelligence. It doesn't represent the business process. The code cost money, but what value does it have to the business? The first option above identifies the business knowledge and codifies it. In documentation, in code, in data, etc. It creates something of value. The second option adds no value to the system. It just continues with the status quo, costing less money in the short term but improving nothing. The third option removes value from the system by doing nothing more than adding an operational cost (to counter the operational cost saved by having the software there in the first place... maybe not entirely but it adds up when combined with other similar support issues).
Perhaps you'll be one of those lucky developers where the client actually agrees to the first option. Maybe that spreadsheet is close enough to break even at this point that they can justify it to themselves. Or maybe the operational cost is bad enough that they just need to do something drastic to show that they're trying. Or maybe you're just a damn good salesman. Either way, it's time to write some new code.
Is this new code going to have business value? Is it going to be an asset to the business? Come on, you've been writing software for a long time. You know as well as anybody that even your own code is crap compared to what you'll be doing a year from now. Someday someone is going to try to sell this same thing to the same client to replace your code. So even when you re-write it and "do it right this time," write it knowing that it will be replaced. Separate those concerns, S.O.L.I.D.-ify those designs, etc. (I should probably write a post about that at some point too, sort of a "Part II" of this post. But that's for another day.)
"Source code is the liability that corresponds to the asset of the program. Testability decreases the liability and increases the asset." - Robert C. Martin (Uncle Bob)