The Visible Long-Term Costs of NOT Doing it Right
Jeff Brown has a good post in response to my No Broken Windows post.
My beef is that the absolute cost differential Ayende cites in this case is too small to be meaningful. The difference between a 1 hour task and a 4 hour task is 3 hours of sleep later in the week.
Now do the same math over ten different features, and what do you get? (Actually, that is a fallacy: doing it ten times the right way gives me a chance to optimize my approach, while the hackish way offers no such opportunity.)
Quite simply, it is within the noise margin of software task estimation. What's less obvious is that doing the 4 hour task can have ripple-down effects that are orders of magnitude larger. Choosing the 4 hour task may commit you to an endless cycle of choosing 4 hour tasks.
And here is where we agree, except that my conclusion would be that choosing a 1 hour task will commit you to an endless cycle of maintaining code that isn't meant to be maintained. Jeff's example is doing reports, where you can either:
Since you only have one report to display, you decide to do it all in application logic. It's beautiful. So that you don't repeat yourself, you reuse existing code for validating input, getting the data and calculating summaries. Moreover, you render the report using the same UI infrastructure and conventions used everywhere else, so you can deeply integrate the report with the rest of the UI. The result looks great and feels very polished.
Or you can:
You can bolt on SQL Server Reporting Services or Crystal Reports to generate the report. Then you can easily offer report downloads in CSV, XLS, XML and PDF, which you believe the users will appreciate enough to be worth spending the extra 15 minutes on. The downside is that no matter how you do it, the reporting feature will probably be a wart in the code and look like a wart to the user. However, let's suppose it's faster to do it this way because everything is already at hand.
A wart to the user I can tolerate, sort of. I don't like it, but I can tolerate it. A wart in the code is something that I will live with, if I really have to, and if it is an isolated incident. There are consequences for each of those, as Jeff put it well. For the quality code, the result was:
On one project, I decided to keep everything clean and beautiful. It looked great and shiny and only took me 4 hours to implement a report that got sent via email. And then I got the request to provide a similar report in the UI with download support. Crud. Oh, and there's this wish list of 5 other reports too. The cost per report was tremendous. Each report was specially tailored for usability. And there was no support for downloads. It was more polished, but no one really cared!
And for using an existing reporting engine:
On another project, I bolted on SQL Server Reporting Services. Oh the pain! We had to munge configuration files to link in custom assemblies and it was a real nightmare getting the automated deployment to work. We wrote an abstraction layer to avoid coupling the application too tightly to Reporting Services, but of course it's leaky. The reports look awful next to the rest of the site and we can't enhance them with AJAX or anything. Every now and then the house of cards comes a-tumbling down. But you know what? We now have 40 or so different reports available for viewing or download.
Now that I have finished ripping off Jeff's post, let me get to my point. Making those choices has a profound impact on how maintainable the application is, and as Jeff mentioned, there are points in favor of each side. I really don't like stuff that makes it harder to do the right thing, and I really don't like that "Every now and then the house of cards comes a-tumbling down".
I have no idea what the initial cost of going with Reporting Services was, but I would assume it was decidedly non-trivial, judging from the description provided. On my current project, we are now at the stage of building reports, and after careful consideration of the pain that Reporting Services has brought us in the past, we went with the custom-built approach.
The reports consist of graphs, tabular data and Excel exports, so there is a lot to build there. What we have found is that, for the most part, we can utilize our existing architecture to build them rapidly. It is no different from building the rest of the application. I have about 15-20 reports to build, and we have mostly finished them. (Actually, the Excel export was the easiest part; we use the CarlosAg Excel Xml Writer Library, which is a really good solution.)
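To give a feel for why the Excel part was easy: the library just builds up a SpreadsheetML workbook in memory and saves it. A minimal sketch from memory of its API (the data is made up, and exact member names may differ between versions):

    using CarlosAg.ExcelXmlWriter;

    public class ExcelExportSketch
    {
        public static void Export()
        {
            // Build a tiny workbook; the rows here are illustrative only.
            Workbook book = new Workbook();
            Worksheet sheet = book.Worksheets.Add("Sales");

            WorksheetRow header = sheet.Table.Rows.Add();
            header.Cells.Add("Product");
            header.Cells.Add("Total");

            WorksheetRow row = sheet.Table.Rows.Add();
            row.Cells.Add("Widgets");
            row.Cells.Add(new WorksheetCell("1500", DataType.Number));

            // Writes SpreadsheetML that Excel opens directly.
            book.Save(@"c:\reports\sales.xls");
        }
    }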
The overall architecture is:
- The controller gathers and massages the data into a format that a view can immediately use.
- One of several views takes the data and transforms it into the report.
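To make that split concrete, here is a rough sketch of the shape of it (all the type names are illustrative, not our actual code):

    using System;
    using System.Collections.Generic;
    using System.IO;

    // Illustrative row type; the real reports carry richer data.
    public class SalesReportRow
    {
        public string Product;
        public decimal Total;
    }

    public class SalesReportController
    {
        // Gather and massage the data into exactly the shape the views need,
        // so the views stay dumb.
        public IList<SalesReportRow> GetReportData(DateTime from, DateTime to)
        {
            IList<SalesReportRow> rows = FetchSales(from, to);
            // Summaries, sorting and formatting decisions all happen here.
            return rows;
        }

        private IList<SalesReportRow> FetchSales(DateTime from, DateTime to)
        {
            return new List<SalesReportRow>(); // placeholder for the data access
        }
    }

    // Each output format is just another view over the same data:
    // an HTML table, a chart, or the Excel export.
    public interface IReportView
    {
        void Render(IList<SalesReportRow> rows, Stream output);
    }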
Yes, we write the view code to do that, but that is standard UI stuff that isn't really hard to write or comprehend. Most of the challenge in the reports is in getting the data efficiently (MultiQuery really helps there).
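For instance, a report that needs several result sets can batch them into a single round trip. A minimal sketch using NHibernate's MultiQuery (the entities and the customerId parameter are invented for the example):

    using System.Collections;
    using NHibernate;

    public class ReportQueries
    {
        // Batch two queries into a single database round trip.
        public static void LoadReportData(ISession session, int customerId,
                                          out IList orders, out IList payments)
        {
            IMultiQuery multiQuery = session.CreateMultiQuery()
                .Add(session.CreateQuery("from Order o where o.Customer.Id = :id")
                        .SetInt32("id", customerId))
                .Add(session.CreateQuery("from Payment p where p.Customer.Id = :id")
                        .SetInt32("id", customerId));

            IList results = multiQuery.List(); // one round trip for both queries
            orders = (IList)results[0];
            payments = (IList)results[1];
        }
    }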
Most of our reports require something a bit beyond plain tabular data. Undoubtedly it is possible to get that working with most reporting engines, but I have found them to be really unfriendly programming-wise, and obtuse when it comes to stuff that should be simple.
In summary, you really should consider those choices, and if you find an existing way that isn't wart-full, do consider using it. Jeff's suggestion is a good one, but do consider the other side of the coin as well.
Comments
In the project I am working on, we recently adopted NFop (http://sourceforge.net/projects/nfop/) for reporting. NFop uses XSL stylesheets to produce PDFs. There has definitely been a learning curve (I'm not sure I like XPath too much), but the bottom line is that we really needed to use business objects as the data source, and this seemed like the best way. (We actually use DTOs, generated from the business objects.) The other tools we looked at seemed optimized for direct database access, which would not meet our needs very well. So far, the only issue is that NFop seems to be a bit of a hog.
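The pipeline is roughly: serialize the DTO to XML, transform it with the stylesheet into XSL-FO, and hand that to NFop for the PDF rendering. A sketch of the first two steps using the standard .NET classes (the NFop rendering call itself is omitted, and FoPipeline is an invented name):

    using System.IO;
    using System.Xml;
    using System.Xml.Serialization;
    using System.Xml.Xsl;

    public class FoPipeline
    {
        // DTO -> XML -> XSL-FO; the FO output is what NFop turns into a PDF.
        public static string TransformToFo(object dto, string stylesheetPath)
        {
            // 1. Serialize the DTO to XML.
            StringWriter xml = new StringWriter();
            new XmlSerializer(dto.GetType()).Serialize(xml, dto);

            // 2. Apply the XSL stylesheet to produce XSL-FO.
            XslCompiledTransform xslt = new XslCompiledTransform();
            xslt.Load(stylesheetPath);

            StringWriter fo = new StringWriter();
            using (XmlReader source = XmlReader.Create(new StringReader(xml.ToString())))
            {
                xslt.Transform(source, null, fo);
            }

            // 3. Feed the FO document to NFop to render the PDF (not shown).
            return fo.ToString();
        }
    }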
Anyway, just wanted to share this option.
I don't think every feature should be polished and refactored to the max on every occasion (YAGNI). However, quick and dirty by default is a much more dangerous approach.
I consider the time spent developing a feature a tiny fraction of the time it will spend being maintained, so it is not a major issue in this discussion. In any case, if every report you write using your own infrastructure takes longer than SSRS, you are probably repeating yourself (so I hope this is not the case for you).
I've tried many solutions to this over the years. The most recent uses a similar architecture to yours, with the addition of a data adapter that can take the data from the controller and expose it as a multi-table ADO.NET DataSet (using reflection).
Since 3rd party reporting libraries love DataSets, the report view is simple (I use DevExpress reporting, because they support multiple detail sections without the need for subreports).
So I have the best of all worlds. Yay for me and my "ClassDataAdapter".
Downsides? None yet. But we've only done about 4 reports this way.
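To give a feel for the idea, here is a rough sketch of such an adapter, not our exact code (simplified: nullable and nested property types would need extra handling):

    using System;
    using System.Collections;
    using System.Data;
    using System.Reflection;

    public static class ClassDataAdapter
    {
        // Flatten a list of objects into a DataTable so a DataSet-hungry
        // reporting engine can bind to it.
        public static DataTable ToDataTable(string tableName, IEnumerable items)
        {
            DataTable table = new DataTable(tableName);
            PropertyInfo[] properties = null;

            foreach (object item in items)
            {
                if (properties == null)
                {
                    // One column per public property of the first item.
                    // (Nullable<T> columns would need unwrapping here.)
                    properties = item.GetType().GetProperties();
                    foreach (PropertyInfo property in properties)
                        table.Columns.Add(property.Name, property.PropertyType);
                }

                DataRow row = table.NewRow();
                foreach (PropertyInfo property in properties)
                    row[property.Name] = property.GetValue(item, null) ?? DBNull.Value;
                table.Rows.Add(row);
            }
            return table;
        }
    }

The multi-table part is then just one call per collection coming out of the controller (orders and orderLines are illustrative):

    DataSet reportData = new DataSet();
    reportData.Tables.Add(ClassDataAdapter.ToDataTable("Orders", orders));
    reportData.Tables.Add(ClassDataAdapter.ToDataTable("Lines", orderLines));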
Yup. Always consider both sides! ;-)