Leaving the relational mindset – RavenDB’s trees
Originally posted at 3/24/2011
One of the common problems for people coming over to RavenDB is that they still think in relational terms, and implicitly accept relational limitations. The following came up recently at a client meeting: the customer was getting an error when rendering a page.
The error is one of RavenDB’s Safe-By-Default measures, and is triggered when you are making too many calls to the server. This is usually something that you want to catch early and fail fast on, rather than add additional load to the system. But the problem the customer was dealing with is that they needed to display a different icon for each item in the tree, depending on whether the item was a container or a leaf.
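As an aside, the specific limit here is the per-session request budget. Here is a minimal sketch of how it surfaces, assuming the client API property name as I remember it (documentStore is a placeholder):

using (var session = documentStore.OpenSession())
{
    // Each Query()/Load() is a separate remote call. The session enforces a
    // budget on those calls (MaxNumberOfRequestsPerSession, 30 by default as
    // I recall) and throws once it is exceeded, which is exactly the error the
    // customer hit while rendering the tree one query per node.
    session.Advanced.MaxNumberOfRequestsPerSession = 30;

    // Raising the number makes the error go away, but it just hides the
    // Select N+1; the right fix is to make fewer calls, as shown below.
}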
Inside Raven, categories were modeled as:
{ // categories/1 "ParentId": null, "Name": "Welcome ..." } { // categories/2 "ParentId": "categories/1", "Name": "Chapter 2..." }
They had a few more properties, but none that really interests us for this post. The original code was pretty naïve, and did something like:
public IEnumerable<TreeNode> GetNodesForLevel(string level)
{
    var categories = from cat in session.Query<Category>()
                     where cat.ParentId == level
                     select cat;

    foreach (var category in categories)
    {
        var childrenQuery = from cat in session.Query<Category>()
                            where cat.ParentId == category.Id
                            select cat;

        yield return new TreeNode
        {
            Name = category.Name,
            HasChildren = childrenQuery.Count() > 0
        };
    }
}
As you can imagine, this caused some issues, because we have a classic Select N+1 here: one query for the categories at the requested level, plus one more query per category just to find out whether it has children.
Now, if we were using SQL, we could have done something like:
select *,
       (select count(*) from Categories child where child.ParentId = parent.Id)
from Categories parent
where parent.ParentId = @val
The problem there is that this is a correlated subquery, and that can get expensive quite easily. Other options include denormalizing the count into the Category directly, but we will ignore that.
What we did in Raven is define a map/reduce index to do all of the work for us. It is elegant, but it requires somewhat of a shift in thinking, so let me introduce that one part at a time:
from cat in docs.Categories
let ids = new []
{
    new { cat.Id, Count = 0, cat.ParentId },
    new { Id = cat.ParentId, Count = 1, ParentId = (string)null }
}
from id in ids
select id
We are doing something quite strange here: we project two items for every category. The syntax for that is awkward, I’ll admit, but it is pretty clear what is going on.
Using the categories shown above, we get the following output:
{ "Id": "categories/1", "Count" = 0, ParentId: null } { "Id": null, "Count" = 1, ParentId: null } { "Id": "categories/2", "Count" = 0, ParentId: "categories/1" } { "Id": "categories/1", "Count" = 1, ParentId: null }
The reason that we are doing this is that we need to be able to aggregate across all categories, whether they are in a parent-child relationship or not. In order to do that, we project one record for ourselves, with the count set to zero (because we don’t know whether we are anyone’s parent), and one record for our parent. Note that in the parent’s case, we don’t know what its parent is, so we set it to null.
The next step is to write the reduce part, which runs over the results of the map query:
from result in results
group result by result.Id into g
let parent = g.FirstOrDefault(x => x.ParentId != null)
select new
{
    Id = g.Key,
    Count = g.Sum(x => x.Count),
    ParentId = parent == null ? null : parent.ParentId
}
Here you can see something quite interesting: we are grouping only on the Id of the results. So given our current map results, we will have three groups:
- Id is null
- Id is “categories/1”
- Id is “categories/2”
Note that in the projection part, we are trying to find the parent of the current grouping. We do that by looking for the first record with a non-null ParentId, which is the record where we included the actual ParentId from the document. We then use the Count to check how many children a category has. Again, because we emit a record with Count equal to zero for each category, categories are included even if they don’t have any children.
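If you want to convince yourself that the reduce does the right thing, here is a quick LINQ-to-Objects sanity check over the four map records shown above. This is an illustration only; RavenDB runs the real reduce on the server.

using System;
using System.Linq;

class ReduceSimulation
{
    static void Main()
    {
        // The four records emitted by the map for categories/1 and categories/2.
        var mapResults = new[]
        {
            new { Id = "categories/1", Count = 0, ParentId = (string)null },
            new { Id = (string)null,   Count = 1, ParentId = (string)null },
            new { Id = "categories/2", Count = 0, ParentId = "categories/1" },
            new { Id = "categories/1", Count = 1, ParentId = (string)null },
        };

        // The same reduce expression as above, run in memory.
        var reduced = from result in mapResults
                      group result by result.Id into g
                      let parent = g.FirstOrDefault(x => x.ParentId != null)
                      select new
                      {
                          Id = g.Key,
                          Count = g.Sum(x => x.Count),
                          ParentId = parent == null ? null : parent.ParentId
                      };

        foreach (var item in reduced)
            Console.WriteLine("Id: {0}, Count: {1}, ParentId: {2}",
                item.Id, item.Count, item.ParentId);
    }
}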
The result of all of that is that we will have the following items indexed:
{ "Id": null, "Count": 1, "ParentId": null } { "Id": "categories/1", "Count": 1, "ParentId": null } { "Id": "categories/2", "Count": 0, "ParentId": "categories/1" }
We can now query this index very efficiently to find the children of a specific category, and how many children each of them has.
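Here is a hedged sketch of what GetNodesForLevel looks like on top of that index. CategoryChildCount and the index name "Categories/ChildCount" come from the index sketch above; session, Category, and TreeNode are from the original code.

using System.Collections.Generic;
using System.Linq;

public IEnumerable<TreeNode> GetNodesForLevel(string level)
{
    // One call against the pre-computed index: every child of 'level',
    // each already carrying its own child count.
    var childCounts = session.Query<CategoryChildCount>("Categories/ChildCount")
        .Where(x => x.ParentId == level)
        .ToList();

    var counts = childCounts.ToDictionary(x => x.Id, x => x.Count);

    // One more call to load the category documents for their display names.
    var categories = session.Load<Category>(counts.Keys.ToArray());

    return categories.Select(category => new TreeNode
    {
        Name = category.Name,
        HasChildren = counts[category.Id] > 0
    });
}

That is two remote calls in total, one index query and one multi-load, regardless of how many categories sit at that level.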
How does this solution compare to writing the correlated subquery in SQL? Well, there are two major advantages:
- You are querying on top of a pre-computed index, which means that you don’t need to worry about things like row locks, the number of queries, etc. Your queries are going to be blazing fast, because there is no computation involved in generating a reply.
- If you are using the HTTP mode, you get caching by default. Yep, that is right: you don’t need to do anything, and you don’t need to worry about managing the cache or deciding when to expire things; you can just take advantage of the native RavenDB caching system, which will handle all of that for you.
Admittedly, this is a fairly simple example, but using similar means, we can create very powerful solutions. It all depends on how we think about our data.
Comments
Hi,
but couldn't the same thing be done with another table in the SQL DB, storing the same thing that the computed index stores in Raven? Or am I missing something at the core?
A.
I'm probably too lazy, but when I run across this type of thing that is "read often, write rarely", I push this to the write layer - that is, I do my best to make a document (or a few documents) that has the tree already built as I would want it.
This is a bit of extra work when editing a category, but makes the display of the tree a no-brainer, and very fast.
Obviously, this can get complicated with stuff like per-user display restrictions or similar... but wherever I can push all the work to the infrequent operation, I do it. Disk space is cheap.
Ales, I was going to say the same thing. A job that runs the SQL statement he mentioned and populates an "index" table.
Isn't that exactly what happens with the Raven solution? The map/reduce only updates on writes, and does take a bit of time to update during which you get stale results.
Philip, agreed. Seems to me that you want this in a single document. In SQL, I'd give each tree a family id so I could easily grab the whole thing into memory and do any necessary calculations on that.
I'd do that in Raven, too, but I'm not sure how best to do it. You could simply add the family id and an index on it, or you could add an aggregate listing the tree elements. I'm not sure how Raven would do with serializing or deserializing a tree of items of the same type. I'd worry that I would end up saving a copy of a subtree as a new item if it stored them nested, and if it didn't, it doesn't solve my problem.
Did you just tell me to go fuck myself?
Sorry, had to :)
Now that you have shown the answer I can get it, but I never thought this way. The relational DB can be slow, but the answer is much simpler.
Fujiy,
I strongly disagree. What you have is years of experience with relational databases.
Ayende, you may be right.
I tried to do some testing with NoSQL DBs, but it always seems odd to me. The world seems so relational.
The first thing I thought when reading this post was:
"Why aren't they storing the whole tree in a single document?"
In fact, that's what I expected your solution to be. Now, the index solution is pretty nice, but did storing the entire tree in a document come up as a possibility? If so, what was the rationale for not handling it that way?
Nick,
You can't always do that. In this case, each category is actually a person in an organizational unit.
That makes sense. I figured it probably had to do with the particular case.
CELKO's balanced trees in relational databases. It's simple, fast, and can select any fragment of the tree (including the whole tree) in a single SQL query, independent of nesting depth.
Additionally, you can indeed use an index table, which is simpler, but has the side effect that you have to keep it up to date. This can be done with a simple trigger, and the whole system allows you to select any fragment of the tree (or the whole tree) in one select statement, independent of nesting depth.
As someone already said, you can also do this client side with an O(n) algorithm, using a dictionary and the knowledge that a child node has a higher PK than its parent node (if that holds, of course).
All in all, only naive developers who think a tree should be stored in a self-referencing table and can't be retrieved with a single select will have a problem with trees and RDBMSs.