And it is distributed objects all over again
I just got a comment on my previous post that included this statement, and it is important enough for a post of its own. We are basically talking about abstracting the persistence layer completely. My thoughts about that are:
Take a look at the XyzDataSource troubles that exist now.
And Stiiiff responded:
This is really the wrong way to go, in my opinion. It seems like I have been hammering on the issue of leaky abstractions a lot recently, but this is important. What Stiiiff suggests is a leaky abstraction by definition. There is a world of difference in the tradeoffs I need to make for local versus remote transactions, or for working over hierarchical data stores vs. relational ones vs. flat ones. You really do care when a transaction is executing over a remote database and the response time suddenly grows by two orders of magnitude.
What you would end up with in this case is that you would have to really understand the provider you are using, and exploit provider-specific features, to be able to meet your requirements. This basically defeats the whole point of having a provider model in the first place. If I need to know everything about both the underlying implementation and how the provider behaves with respect to it, just to be able to do my work, I am in a losing situation.
I would much rather code directly against the underlying implementation; at least then I wouldn't have to understand the provider as well.
Just look at the whole fiasco around transparent distributed objects: "Yes, you can just change the configuration and suddenly the user.Name property is executed on another machine, no code changes required." Except that there are code and design changes required, and I want the API to reflect that, rather than give me an opaque surface that I would have to dig into to do my job.
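To make the distributed-objects point concrete, here is a minimal sketch in Java (the names are hypothetical, and the dynamic proxy stands in for a remoting framework's generated stub). The call site reads like a plain local property access, yet configuration alone decides whether it is a field read or a network round-trip:

```java
// Hypothetical illustration: a proxy that turns user.getName() into a
// "remote" call without the call site showing any hint of it.
import java.lang.reflect.Proxy;

interface User {
    String getName();
}

public class TransparentProxyDemo {
    // Stand-in for a network round-trip; a real remoting stack would
    // serialize the call, ship it to another machine, and wait for a reply.
    static String remoteCall(String methodName) throws InterruptedException {
        Thread.sleep(100); // simulated latency, orders of magnitude above a field read
        return "Alice";
    }

    public static void main(String[] args) {
        // The call site cannot tell this apart from a plain local object.
        User user = (User) Proxy.newProxyInstance(
                User.class.getClassLoader(),
                new Class<?>[] { User.class },
                (proxy, method, methodArgs) -> remoteCall(method.getName()));

        System.out.println(user.getName()); // looks local, is a simulated network hop
    }
}
```

The abstraction leaks exactly as described above: nothing in the API warns the caller that `getName()` may now cost a network round-trip, so the performance characteristics change without any visible code change.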
Comments
Thank you! That's exactly what I said walking out of the presentation on Linq for Entities that we saw yesterday. There's a tremendous amount of complexity in that thing that's caused by trying to abstract so many disparate types of data sources, when all I want is just a very good O/R mapping experience.
I think they should be abstracting the data source at a coarse grained level like "TradeRepository", not trying to recreate the OLEDB driver wackiness on a grand scale.
Even if the abstraction manages not to leak, the over-abstraction ends up making the thing harder to use and understand.
Linq for entities looks hard to use.
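The coarse-grained alternative the comment describes might look something like the following sketch (Java; `TradeRepository` is the commenter's example name, and the methods and in-memory implementation are illustrative assumptions, not any real library's API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// The domain operation is the abstraction boundary, not the data-access
// plumbing underneath it.
record Trade(long id, String traderId, double amount) {}

interface TradeRepository {
    Optional<Trade> findById(long id);
    List<Trade> findByTrader(String traderId);
    void save(Trade trade);
}

// Minimal in-memory implementation; a production one would sit on top of
// an O/R mapper, but callers never see those details leak through.
class InMemoryTradeRepository implements TradeRepository {
    private final Map<Long, Trade> store = new HashMap<>();

    public Optional<Trade> findById(long id) {
        return Optional.ofNullable(store.get(id));
    }

    public List<Trade> findByTrader(String traderId) {
        List<Trade> result = new ArrayList<>();
        for (Trade t : store.values()) {
            if (t.traderId().equals(traderId)) {
                result.add(t);
            }
        }
        return result;
    }

    public void save(Trade trade) {
        store.put(trade.id(), trade);
    }
}

public class RepositoryDemo {
    public static void main(String[] args) {
        TradeRepository repo = new InMemoryTradeRepository();
        repo.save(new Trade(1, "alice", 100.0));
        repo.save(new Trade(2, "alice", 250.0));
        System.out.println(repo.findByTrader("alice").size()); // prints 2
    }
}
```

The design point is that the interface names domain operations, so swapping the backing store changes the implementation class, not every call site.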
Configuring it by hand will be complex and unfriendly. We can be glad that Microsoft is working on VS.NET wizards and tools for their OR/M.
Cheers.
No, don't be glad. Things will go wrong, and you will need to fix it.
As it stands today, I am seeing a very fragile picture, where only the tools can handle it, and the minute you leave the tools, you can't do anything more.
Actually I agree with you.
I was being ironic. This is exactly what I am afraid of: having the tools, but as soon as I encounter a bug or need to find a workaround, I will meet a mess of configuration files and code.
And wizards are great for beginners or for coming up with something quickly, but you don't learn how to maintain code you didn't write.
What do you think of this post, http://web.archive.org/web/20060503213513re_/www.matshelander.com/Weblog/DisplayLogEntry.aspx?LogEntryID=80, which argues in favour of choosing an ORM abstraction that will, inevitably, leak sometimes?
@John,
He is not saying anything new. Here is a post about me running into a leaky abstraction with NHibernate:
http://www.ayende.com/Blog/archive/2005/11/30/8074.aspx
I agree with him.
The difference is that NHibernate isn't trying to hide the underlying details from you. You can easily map each operation NHibernate performs to its SQL counterpart.
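That one-operation-to-one-statement transparency can be sketched with a toy mapper (Java; this is not NHibernate's API, just an illustration of an abstraction that shows its work rather than hiding it):

```java
// Hypothetical toy mapper: each high-level operation corresponds to exactly
// one visible SQL statement, so there is nothing for the abstraction to hide.
public class TinyMapperDemo {
    static String selectById(String table, long id) {
        return "SELECT * FROM " + table + " WHERE id = " + id;
    }

    static String deleteById(String table, long id) {
        return "DELETE FROM " + table + " WHERE id = " + id;
    }

    public static void main(String[] args) {
        System.out.println(selectById("users", 42)); // SELECT * FROM users WHERE id = 42
        System.out.println(deleteById("users", 42)); // DELETE FROM users WHERE id = 42
    }
}
```

Because the generated statement is predictable from the operation, you can reason about performance and debug at the SQL level when the abstraction inevitably needs inspecting.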