Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

Get in touch with me:

oren@ravendb.net +972 52-548-6969

time to read 2 min | 271 words

It seems like no one is ready to take my Linq Challenge, so I decided to expand it a bit and meet my own standards. I decided to implement the challenge in Active Record, since it is much faster to work with than NHibernate. Note that this is just the object model and the mapping, nothing more.

Overall, this is 800 lines of code and 20 lines of configuration, and I think that it took about three to four hours to build this demo.
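To give a sense of what those mapping lines look like, here is a minimal sketch of one entity mapped with Castle ActiveRecord attributes. The class and property names are illustrative only, not taken from the actual sample project; the attributes ([ActiveRecord], [PrimaryKey], [Property], [HasMany]) are the standard Castle ActiveRecord ones.

```csharp
// A hypothetical sketch of an Active Record mapping; Employee and its
// members are made up for illustration, not copied from the sample.
[ActiveRecord("Employees")]
public class Employee : ActiveRecordBase<Employee>
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property]
    public string Name { get; set; }

    // One employee has many salary snapshots over time.
    [HasMany(typeof(SalarySnapShot))]
    public IList Salaries { get; set; }
}
```

The point is that the mapping lives with the class, which is why the whole model fits in a few hundred lines of code plus a handful of configuration lines.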

Here is the object diagram:

[Image: OhMy.png]

And the database model:

[Image: database.PNG]

You can download the sample project here; I included a SQL script in the file to create the database.

Just to note, this is not a toy sample; this is a complex model that is capable of expressing very annoying business requirements. I added a few recursive rules, just to make it a bit more realistic. The real project has a lot more stuff to make it easier to work with the model, but in essence, this is the way that I built the project.

Just to clarify, I am truly interested in seeing other solutions to this problem, preferably not ones in NHibernate / Active Record. Mainly because I think that this is complex enough to be a real world problem and not something that can be solved with demo code.

Code complexity

time to read 4 min | 733 words

If we are talking about code comments, here is a piece of code that I don't think deserves a comment:

public IEnumerable<RuleResult> Validate(Rule rule, DateTime date)
{
       // Update: moved to an explicit method, because it is obviously not working
       if (false == this.IsInRange(date))
              yield break;
       yield return rule.Validate(this, date);
       foreach (EmployeeSnapshot snapshot in EmployeeSnapshots)
       {
              if (snapshot.IsInRange(date) == false)
                     continue;
              IEnumerable<RuleResult> enumerable = snapshot.Validate(rule, date);
              foreach (RuleResult result in enumerable)
              {
                     yield return result;
              }
       }
}

What do you think?

time to read 6 min | 1137 words

Karl Seguin has posted MS Consulting : One Consulting Company To Rule Them All, which I read with a sense of sinking horror.

The problem with software consulting firms is that their incentives likely don't line up with their clients'...

[some paragraphs talking about how a consulting company can screw their clients...]

Enter Microsoft.

Assuming we are strictly talking about Microsoft technologies, Microsoft is best positioned to solve the problem.

I work in a consulting company (which is also a Microsoft Gold Partner) so I am probably biased.

Karl then goes on to the really big issue with this suggestion:

Of course, there are flaws with my approach. First, it assumes that Microsoft Consulting is able to deliver quality products, hire quality developers and properly manage them. ... Consultants would likely be pressured to push Microsoft technologies that really aren't necessary (i.e, build something for InfoPath and require the company to buy 3000 copies of the program).

I can speak from second-hand experience with having to deal with MS consultant "advice". It consists of "use [only] Microsoft products". I recently had to battle against using SharePoint and BizTalk in a project where they are completely the wrong tools for the job. And I had to explain to management (about 9 months ago) that no, using DLinq is not going to be a viable solution for a long time yet, because a MS Consultant told them that this is the One Microsoft Way to do data access from now on.

I have a big problem with the tendency to go with an all-Microsoft (and only Microsoft) solution when there are often better alternatives around. I have yet to find the Microsoft consultant that will prefer using NUnit to MS Test, despite some serious flaws in MS Test, for instance. Or suggest using log4net instead of bringing the whole EntLib into a project (thereby increasing the complexity of the configuration alone by an order of magnitude).

There are some crappy consulting firms out there; one of the things that We! does is provide code review services for companies that want an independent review of the product that they are getting. Some of the code that I have had to go through is so nasty it is in the monthly WTF zone. Here is an actual quote from one of the emails I wrote after I did a performance review on a system:

The author of this piece of code has managed to achieve the unique state of being able to go very deep into the framework, while combining absolute cluelessness about the reasons why [a problem] occurred. I have to say that I am impressed with the ability to dig so deeply to find the core issue, and amazed that at the same time, he managed to so completely miss the target. This code manages to be both ugly to use and the most inefficient way to do [a particular thing] that I have yet to see. All points for inventiveness, zero points for thinking.

I recognize that getting a really bad product from a consulting company is not that rare. But there are better ways to handle this than to trust that Microsoft would do a better job. The more likely scenario is that you would start spending a lot more money on licenses (and on training/consulting about how to configure/administer/manage your new software) than before.

I currently have a system in production that is using SQL Express, precisely because the data the application is managing is small, and can be purged on a regular basis. This meant a drop of $6,000(!) in the project price. What do you think a MS Consultant would have chosen? Would it have served the client's interest better?

Karl suggests that it is easier for the customer to detect a sales pitch in the style of "you should use BizTalk" than to spot getting a really crappy product. I would say that the reverse is true. If a manager is unable to get independent code reviews, or unwilling to heed their advice, there is no guarantee that they will be able to understand whether BizTalk (or SharePoint, or MS CMS, or the like) is a good solution for the problem at hand.

I personally know of at least one BizTalk installation that I could replace with about three days of work, and get more maintainable code, better scalability, far better robustness, etc. In that case, the client certainly hasn't benefited from what BizTalk has to offer.

I can tell you that I (and We!) take a great deal of pride in what I create. I tend not to ship crappy code, on an aesthetic basis if nothing else. I have had to go back to old projects for code harvesting, additional development, bug fixes, etc. Producing crappy code means that I go home depressed, and I intend to do "this computers stuff" for a long time.

The core problem still exists, of course. One of the solutions is to have someone from the client side (either in-house or a third party) review the code and make sure that it matches the required standards. In most of my projects there is such a person, and it is my responsibility to walk them through the code and explain stuff (and have heated arguments ;-) ). The other is to have some sort of a support contract, which may include clauses about fixing bugs in the application, acceptance tests, etc.

To sum it up, I think that it is a naive approach at best.

On Code Comments

time to read 7 min | 1343 words

Jeff Atwood had a post about code comments, which I completely agree with. One of the comments to the post caught my eye, talking about the assumptions made when commenting:

It is not always feasible to have a programming guru on hand to fix every issue. Not everyone has the same skill set. Sometimes companies are stuck having to maintain code in languages their current staff aren't well-versed in. Gurus aren't available either at all in some areas, or for the money some companies have allotted for their IT staff.

I had posted before about this topic, but I think that this is an interesting take on the topic. When I comment, there are some assumptions that I make about the person reading the code:

  • That s/he knows the language, or is capable of learning it. For instance, I tend to use this quite a bit:

    public string CacheId
    {
      get
      {
         return ( ViewState["CacheId"] ?? (ViewState["CacheId"] = Guid.NewGuid()) ).ToString();
      }
    }

    If the dev reading my code doesn't know what the null coalescing operator is for, or doesn't understand what the expression chaining is doing here, then there is no chance they can follow the rest of the code. I don't write my code only for gurus, but I will not limit myself to using the basic features of a language just because a newbie will not understand them. Anonymous delegates are another thing that I like to use in many places, With.Transaction(delegate), for instance.
  • That s/he has at least a rudimentary understanding of the domain and the model. This is much harder than merely understanding the language/framework, by the way. If we return to the HR model for a second, here is a piece of code that gives the employee a 10% raise:

    SalarySnapShot raisedSalary = employee.At(startDate).Salary.Copy(startDate, endDate);
    raisedSalary.Amount *= 1.1m;
    employee.Salaries.AddOccurance(raisedSalary);

    There is a lot going on here. For instance, Copy will return a snapshot with modified validity dates, and adding an occurrence to the salaries will adjust the other salaries' dates to fit the new salary (in itself a very complex problem). To me, the code is perfectly clear, and it is not hard to follow technically; the problem arises when someone who doesn't understand the temporal model in use tries to follow it. I had a lot of problems coming to grips with it myself. I could put a comment there explaining why I need a temporal copy and what AddOccurance is doing, but this is a comment that would need to be repeated each and every time I touch a temporal object. I consider repeated comments a nasty code smell.
  • That they understand the technology:

    With.Transaction(delegate
    {
       Employee employee = Repository<Employee>.Find(empId);
       SalarySnapShot raisedSalary = employee.At(startDate).Salary.Copy(startDate, endDate);
       raisedSalary.Amount *= 1.1m;
       employee.Salaries.AddOccurance(raisedSalary);
    });

    This is a piece of code that will save the copy even though we never call Save() explicitly. I will sometimes throw in a Save() anyway, just because it is clearer that way, but not always.
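For readers wondering how a With.Transaction helper like the one above can save without an explicit Save(), here is a minimal sketch of the pattern. This is an assumption about the shape of the helper, not the actual implementation from the project; NHibernate's ISession/ITransaction types are real, but the With class and the session accessor are hypothetical.

```csharp
// A hypothetical sketch of the With.Transaction helper used above.
// The real implementation is not shown in the post; this only illustrates
// the pattern: open a transaction, run the delegate, commit on success,
// roll back on failure.
public static class With
{
    public delegate void Proc();

    public static void Transaction(Proc action)
    {
        // Hypothetical accessor for the current NHibernate session.
        ISession session = SessionManager.GetCurrentSession();
        using (ITransaction tx = session.BeginTransaction())
        {
            try
            {
                action();
                // Committing flushes the session, so dirty tracked objects
                // (like the raised salary) are persisted without Save().
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
    }
}
```

The implicit save works because the entities inside the delegate are tracked by the session; flushing on commit writes them out.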

So, what do I comment?

I comment why I am not doing things when the situation is unique:

if (start > end)
  end = end.AddDays(1); // Need to make sure that the end date is after the start date, even though we are only using the time portion.

When there is a hidden catch:

group.AddUser(currentUser); // Will propagate the addition to all linked groups automatically.

Bug fixes:

employee.AddNote(newNote);
// We have a leaking "this" pointer here, because AddNote sets newNote.Employee = this.
// This can cause problems when we group things by employee, so we set it explicitly, to the proxy.
newNote.Employee = employee;

time to read 1 min | 147 words

I am following Andrew's blog, and it looks like MbUnit is not only being actively maintained again, but it is starting to get some really interesting features. I stopped using MbUnit when I needed some tools that were NUnit specific and I got tired of porting them to MbUnit. Since then I have had many occasions where I was frustrated by features missing from NUnit that I loved in MbUnit.

I am supposed to start a new project soon, where I am going to emphasize unit testing to a much stronger degree than we currently do (we test the business logic, but that is about it, and 95% of the tests were written by me). The way it is looking now, I think that we will use MbUnit for unit testing. With TestDriven.Net, it doesn't matter anymore...
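One example of the kind of MbUnit feature I mean is data-driven tests via [RowTest], which NUnit had no built-in equivalent for at the time. The test below is a made-up illustration, not code from the project:

```csharp
using MbUnit.Framework;

[TestFixture]
public class SalaryCalculatorTests
{
    // RowTest runs the same test once per Row attribute,
    // something NUnit could not do out of the box back then.
    [RowTest]
    [Row(1000, 1100)]
    [Row(2000, 2200)]
    public void TenPercentRaise(decimal salary, decimal expected)
    {
        Assert.AreEqual(expected, salary * 1.1m);
    }
}
```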

time to read 2 min | 267 words

Some random thoughts:

  • I got to tell clients, "I can't talk right now, I'm on the shooting range..." with real shots in the background.
  • I got to meet some friends I haven't seen since I left the army.
  • I only had about 2 hours of computer usage in the whole five days (compared to ~10hrs daily that I usually have).
  • It is OK not to put the phone on mute and not return calls for a couple of days.
  • Shooting is always a pleasure, of course.
  • Proving that I am a replaceable resource at work.
  • Five days and not a single thought about code... The last time that much time passed without one, I was busy literally 24/7 in Operation Aqua-White, and that was two years ago.
  • Learning to appreciate the really simple things, like sitting.
time to read 2 min | 359 words

Here are the mandatory pictures: 

[Image: temp343.PNG]

I learned several things last week:

  • If you go to the desert in December, it is going to be cold.
  • It is amazing to see the mileage that the army can get out of a poor old jeep (נ"נ).
  • Riding cross country on the back of a jeep older than me hurts.
  • I actually have duties and responsibilities in the army - that came as a shock.
  • I forgot how much fun shooting is.
  • When I put on the uniform I also put on a set of values under which it is reasonable for me to go to sleep in the middle of the freaking desert (in December), okay to sleep ~3 hours a night, and okay to work 20 hours a day.
  • It takes around three days to get back to a normal frame of mind.
  • It is amazing just how many friends I forgot that I had in the army, and how many ended up serving in the same regiment as I do.
  • I took the laptop, the mp3 player and 6 books, I didn't even open the bag they were in all week.

Damn, I am glad to be back.

time to read 1 min | 113 words

Take a look at this; it is pretty long, but it contains a lot of stuff that frankly scares me.

In order to work, Vista's content protection must be able to violate the laws of physics, something that's unlikely to happen no matter how much the content industry wishes it were possible.

Even scarier is the number of times I see "This cost is passed on to all consumers" in the document...
