Handling dependencies in a single assembly
There were several comments on my recent post about the disadvantages of creating a single assembly for a project, instead of the more common multiple ones.
The main one seems to be related to dependency management: how can you be sure that the UI isn't accessing the database, and other fun violations of layering?
I have to say that I don't really have much of an issue with that. I want good layering, but I find that the code style in a project tends to stay consistent over a long period of time in any reasonable team. As such, once you have sufficient size, it will more or less manage itself.
That isn't going to be an acceptable answer for a lot of the people who want more control over this, I know. There are two answers to that: the first is from the human perspective, the second is from the enforcement perspective. Let us deal with the first one first.
Make it easy to do the right thing. I seem to be saying that a lot lately, but this is a core concept that all developers need to assimilate.
If loading a user from the database is as simple as:
usersRepository.Load(userId);
Then you can be certain that no one will try to write this:
using(SqlConnection connection = new SqlConnection("Data Source=LocalHost;Initial Catalog=MyAppDB;"))
{
    connection.Open();
    SqlCommand cmd = connection.CreateCommand();
    cmd.CommandText = "SELECT * FROM Users WHERE Id = " + userId;
    // ... execute the command and map the results by hand
}
Yes, it is that simple. And yes, this assumes non-malicious people on the team. If you have those, termination is recommended; it is your choice whether that means them or their employment.
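For contrast, the facade behind that one-liner needs nothing fancy. A minimal sketch (IUsersRepository, User, and the column names here are hypothetical; the point is that the ADO.NET details live in exactly one place, parameterized instead of concatenated):

// requires using System.Data.SqlClient;
public interface IUsersRepository
{
    User Load(int userId);
}

public class UsersRepository : IUsersRepository
{
    private readonly string connectionString;

    public UsersRepository(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public User Load(int userId)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            SqlCommand cmd = connection.CreateCommand();
            // parameterized, unlike the hand-rolled version above
            cmd.CommandText = "SELECT Id, Name FROM Users WHERE Id = @id";
            cmd.Parameters.AddWithValue("@id", userId);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                if (!reader.Read())
                    return null;
                return new User { Id = reader.GetInt32(0), Name = reader.GetString(1) };
            }
        }
    }
}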
Enforcement is easy as well. Get NDepend and set up the appropriate CQL queries. Something like: error if a class in "MyApp.UI" references "MyApp.Infrastructure".
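A rough sketch of that rule in CQL (the syntax here is from memory and the namespace names are placeholders, so check it against the NDepend documentation):

WARN IF Count > 0 IN SELECT TYPES FROM NAMESPACES "MyApp.UI" WHERE IsDirectlyUsing "MyApp.Infrastructure"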
Comments
I prefer to have (apart from the default single-assembly target) a NAnt target that emits a separate assembly for each logical layer, and to test the result.
As the logical layers are in separate folders, it's easy stuff for the <csc /> task, unless there is some WPF/... msbuild magic involved.
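Something along these lines, assuming hypothetical layer folders and output names (one <csc /> per layer, each referencing only the layers below it):

<target name="build-layers">
  <csc target="library" output="build/MyApp.Model.dll">
    <sources>
      <include name="src/Model/**/*.cs" />
    </sources>
  </csc>
  <csc target="library" output="build/MyApp.UI.dll">
    <sources>
      <include name="src/UI/**/*.cs" />
    </sources>
    <references>
      <include name="build/MyApp.Model.dll" />
    </references>
  </csc>
</target>

If the UI sources reach into the database layer directly, this build simply fails.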
And having a solution with 30 or more projects is a really bad idea; you can't ignore the fact that Visual Studio performance suffers big time.
It is still much easier to write "select * from table" in an ObjectDataSource and bind it than it is to write a view, a presenter, and a repository :)
And as Udi said in the other thread... the first code review I do is to check the References section and ask why XYZ is in there...
"And as Udi said in the other thread... the first code review I do is to check the Refernces section and ask why XYZ is in there ... "
Yeah thats what we do too.
Having said that the namespace approach is also valid:
http://www.theserverside.net/tt/articles/showarticle.tss?id=ControllingDependencies
I personally think that, with the way VS is currently structured, having multiple projects and, if necessary, multiple solutions is a good way of working. Sure, it increases overall build time and so on, but there are other issues, including complexity: if you have everything in one project, then working within that project is quite likely to become unmanageable.
I think it might be worth spelling out what some of the advantages of multiple projects are:
1) Explicit dependency management - Namespaces backed up by NDepend are probably an option, though not one I've tried. However, at the moment projects are the obvious way of handling dependencies.
2) Build Time - If done carefully you can lower the build time for particular pieces of the system (especially if you use multiple solutions).
3) Modularization - If I mainly work in a CRM domain model and never go near the Finance aspects, do I really want to see them in my project?
4) Management - If you have one massive project then it's quite possible that you'll be working on two bits at once, such as Model and Web or Model and Services. I find that this can get painful when you have a whole lot of stuff in one project.
5) Tests - If you have one big test project then it's going to have a complex folder structure for a big app; again, navigating around it will be hard, as will switching from it to the appropriate part of the main project.
6) Multiple applications - Obviously if you have multiple apps using the same model you will need to break it out.
That's just been my experience though, and of course multiple projects have their own issues. Having said that, I've worked with big projects and smaller projects, and so far I prefer the latter.
And we shouldn't forget about internal methods in our classes. Having one assembly for the application will make internal methods visible to the entire application.
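For example (hypothetical names):

// Model/UserQueries.cs -- intended to stay private to the data layer
internal static class UserQueries
{
    internal const string ById = "SELECT * FROM Users WHERE Id = @id";
}

// UI/UsersPage.cs -- compiles without complaint, because both files
// end up in the same assembly
string sql = UserQueries.ById;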
Don't get Oren started on Internal :)
Johanatan,
Abolish internal methods, problem solved.
Casey,
Not the way I do it. Just handling paging and ordering with this is a major issue.
Not the way you or I do it, but it's still 100x easier... you really do get to work with the elite of developers... most .NET developers I meet actually think this is the best way to do it, not only the easiest... :)
I used to work with a bunch of junior guys.
You can bet that they ran into the same issues as well, and you only need to rename a column once to show them why this is a really bad idea.
You need to introduce that to them, but that is all.
Talking of internals, I sometimes want to kill API makers for using GetExportedTypes (Castle.BatchRegistration) or similar approaches (MbUnit). I have internal classes and public interfaces in most of my layers -- this reduces API complexity and makes the right choice easier. I feel that not only should the right solution be easy -- the metasolution of choosing between solutions should be easy as well.
But the reason is less important -- the important thing is that I should not have to hack and patch external libraries to make them find internals when I want them to.
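Concretely, the difference is a single call (Register here is a stand-in for whatever the library does with each type):

// sees only public types; internal implementations behind
// public interfaces are invisible
foreach (Type type in assembly.GetExportedTypes())
    Register(type);

// sees every type in the assembly, internals included
foreach (Type type in assembly.GetTypes())
    Register(type);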
And I do not really understand why internals are bad. There are a lot of cases where using some lower-level code for functionality that is already provided by the higher-level one is more complex, more problematic, and can lead to code duplication.
Now, the perfect developer instantly knows which classes/methods to use for his specific task. The real developers I have worked with often chose whatever they found first, just because once they invent a way to use it, they stop searching.
And if code really wants some low-level functionality that is not currently available, it may mean that the API got out of date and should be refactored. Or maybe the code wants some common stuff that other code uses. Then this stuff should probably be moved to the common area, not used from where it originally was.
For example, I may have a folder in my project named QuerySupport that has some classes used only by queries. Let's say you want to reuse one of these classes. It probably means that this class should not be in the QuerySupport folder/namespace anymore. Of course, I could have put all these classes in a less specific place at the very beginning, but then I would have made a mess of my solution.
Andrey,
That assumes that you own all the code, and can change it if you need to.
Take a look at the BCL and see how many times what you want is so close, but locked away under the internal keyword.
SqlCommandSet, WCF's CreateChannel(Type), LinqExpressionVisitor....
Those are just a few that come to mind.
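Just to make that concrete, here is roughly the reflection dance required to reach SqlCommandSet from the outside (a sketch, not production code):

// requires using System, System.Data.SqlClient, System.Reflection
Type sqlCommandSetType = typeof(SqlCommand).Assembly
    .GetType("System.Data.SqlClient.SqlCommandSet");
// internal type, internal constructor -- hence 'true' for nonPublic
object commandSet = Activator.CreateInstance(sqlCommandSetType, true);
MethodInfo append = sqlCommandSetType.GetMethod("Append",
    BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
append.Invoke(commandSet, new object[] { new SqlCommand("SELECT 1") });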
I do not remember, but I think I already touched this point. When designing a library, you can just open all internal classes, but that is a lazy solution. The best solution is to prefer public over internal everywhere the functionality cannot be accessed otherwise, and design accordingly.
So if I had SqlCommandSet, I would not need SqlCommandSet.LocalCommand. I would never need ADP or Bid no matter what; opening these would just increase complexity. Opening everything without appropriate grouping will scare the hell out of new .NET developers.
I am not really concerned about scaring developers.
If they can't handle a few more classes, they are welcome to go back to programming in BASIC on DOS 1.0, with ~25 commands or so for them to deal with.
I recognize the need for private classes; by all means, encapsulate using those. But internal is evil in frameworks.
I always thought the most important point of building a library is making developers' lives easier. I really like the simplicity of NMock, where I have Expect, which is a facade to the whole API. Opening the internals should mean that you at least regroup the classes so that the lesser-used ones do not pollute the more-used namespaces.
And I almost never use private classes; they are just too messy to work with -- and they are also untestable without reflection (as compared to internals).
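(For what it's worth, internals can be opened to a specific test assembly, e.g. in AssemblyInfo.cs -- the test assembly name here is a placeholder:)

[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyApp.Tests")]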
Andrey,
Rhino Mocks doesn't use internal.
It has a very rich API, but almost always, you are going through the well defined published interface (as opposed to the public interface).
If you need more, you can just add it.
It works very well.
Ayende, thanks for quoting NDepend.
For those who are considering the option of using NDepend to handle internal dependencies and layering in big projects, you can read an article I wrote about this subject:
http://www.theserverside.net/tt/articles/showarticle.tss?id=ControllingDependencies
The article brings up several advantages of using a few big projects instead of multiple small ones. Basically, you get lower cost:
Cost at development time
Cost at compile time
Cost at deployment time
Cost at runtime
In our particular case, VisualNDepend.exe is an assembly made of 35K lines of logical code (i.e., more than 200,000 lines of physical code) split into more than 40 namespaces/components, and it is a real joy to eat our own dog food to maintain good layering.
Also, Visual Studio takes between 1 and 5 seconds to compile this whole codebase.