Sharding vs. Having multiple databases
I was recently at a customer site, and the customer mentioned sharding their database in a specific way: putting all of the user generated content on a separate server. I jumped in and told him that this isn’t sharding. And then I had to explain, both to the customer and to myself, why splitting things up in that way bothered me.
The basic idea is the same: we want to split the information over several databases to reduce the load on each individual server and get higher capacity. We’ll use the Flickr model for this post. What the client suggested was splitting by type: all of the users on one server, all of the images on another, and all of the comments on a third.
And what I suggested was splitting by user: each server holds a range of users (say, A-H), together with all of their images and comments.
On the face of it, there isn’t much difference, right? But I really don’t like calling the first example sharding. From my perspective, this is merely splitting the data into multiple databases, it isn’t sharding. I’ll accept arguments about this still being sharding, but it doesn’t feel right from my perspective.
The main problem with the first option is that in order to actually do something interesting, you have to go to three different servers. On the other hand, by selecting a good sharding function, you can usually serve a single request completely from a single database.
That is important, because databases, whether they are relational or RavenDB, all include multiple ways for you to gather several pieces of information in an efficient manner. The common part of all of those efficient ways to gather multiple pieces of information? They all break down when you are trying to gather different pieces of information from different servers. That is leaving aside things like connection pooling and persistent connections, which are also quite important.
For example, loading an image with its user’s information and its comments would be the following three queries in the first example:
var image = imagesSession.Load<Image>("images/1");
var user = usersSession.Load<User>(image.UserId);
var comments = commentsSession.Load<ImageComments>(image.CommentsId);
Whereas with the second example, we will have:
var image = session
    .Include<Image>(x => x.UserId)
    .Include(x => x.CommentsId)
    .Load("images/1");
Other things, like transactions across the data set, are also drastically simpler when you are storing all of the related data on the same server.
In short, sharding means splitting the dataset, but a good sharding function makes sure that related information is located on the same server (strong locality), since that drastically simplifies things. A bad sharding function (splitting by type) creates very weak locality, which will drastically impact the system’s performance down the road.
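To make the idea of a sharding function concrete, here is a minimal sketch (in Python, with invented shard names and ranges) of a function with strong locality: the shard is picked from the user name, and all of that user's data, including their images and comments, lives on that one shard, so a single page view touches a single server.

```python
# Hypothetical shard layout; names and ranges are made up for illustration.
SHARDS = {
    "shard-1": ("a", "h"),   # users A-H
    "shard-2": ("i", "s"),   # users I-S
    "shard-3": ("t", "z"),   # users T-Z
}

def shard_for_user(username: str) -> str:
    """Strong locality: everything belonging to a user maps to one shard."""
    first = username[0].lower()
    for shard, (lo, hi) in SHARDS.items():
        if lo <= first <= hi:
            return shard
    raise ValueError(f"no shard covers user {username!r}")

# Loading a user's page only ever needs the one server:
# shard_for_user("ayende") -> "shard-1"
```

Splitting by type instead (users here, images there) would force every page view to hit all the servers, which is exactly the weak-locality problem described above.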
Comments
Proper sharding is also more scalable.
Wikipedia refers to sharding simply as horizontal partitioning, i.e. splitting the data rows. What you are doing is taking the business case into account, i.e. considering in what way the data is used, to subsequently improve the efficiency of the solution in terms of network utilization. Maybe you could call it a "business shard" or a "functional shard".
Wouldn't another problem with the first model be to assume that each database will get equal usage? What if the user database is the hardest hit? Once you choose that model, what happens when the user database is overloaded?
Eventually you would have to drop to the second model, because you would realize how big of a mistake the first decision was.
Interesting. What about a situation where your images db is growing really big? In the first option you can set up a second images db when you feel that the first one is "full". Sounds useful to me.
At least in the first model you avoid too many joins.
Is that not called siloing or something?
There's an old presentation by YouTube that talks about how they scaled in the face of ridiculous traffic. They ended up splitting by feature and required QoS so they could de-prioritise low importance functionality (e.g. comments, user accounts) while keeping video playback working.
Of course, they probably ended up sharding those silos too, but it's interesting that even the correctly sharded system isn't always enough - sometimes you may need to decouple features completely.
I think you're right, and the second architecture is more "sharding" than the first. But if you're looking for ways to shard, or if, for instance, customers A-H don't fit in one database - look at <a href="http://www.scalebase.com">ScaleBase</a>. ScaleBase delivers a transparent database sharding solution, so you don't need to change your code to scale your database.
Amazon does this to build each page view and the performance seems to be fine.
Bob, this is a silly argument. You don't know the details of Amazon's application and infrastructure, but I'm sure they're not excessively wasting resources. In the above example, chances are you will reduce the network load to a third. I think this is a good argument.
@Oren: i can see the point in sharding, my doubt is how do you organize your shards in the context of shared data? I mean, if each user has his own images, that would make sense, you group them together in a shard. But what about data related to multiple users, for example? Do you have any particular advice on scenarios like that?
Phillip, Actually, no. You either do denormalization, or you do the joins in your code.
Njy, The most common example of shared data is one user commenting on another's user image. Flickr does that by duplicating that information, storing the comment on both shards. I agree that this is an approach that makes a lot of sense.
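The Flickr-style duplication described above can be sketched like this (illustrative Python, all names invented): a comment that crosses shard boundaries is written to both shards, so each user's pages can still be served from a single server, at the cost of keeping every copy in sync on updates.

```python
def shard_for_user(username: str) -> int:
    # trivial two-shard split on the first letter, for illustration only
    return 0 if username[0].lower() <= "m" else 1

def post_comment(shards, author, image_owner, text):
    """Write the comment to the owner's shard, and duplicate it on the
    author's shard when the two users live on different shards."""
    comment = {"author": author, "owner": image_owner, "text": text}
    owner_shard = shard_for_user(image_owner)
    author_shard = shard_for_user(author)
    shards[owner_shard].append(comment)              # shows under the image
    if author_shard != owner_shard:
        shards[author_shard].append(dict(comment))   # shows under "my comments"

shards = [[], []]
post_comment(shards, author="zoe", image_owner="anna", text="nice shot")
# the comment now exists on both shards; an edit must be applied to
# every copy, which is the sync burden discussed in the comments below
```

Reading either user's page is now a purely local operation on one shard; the write path pays for that with the extra copy.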
@Oren: oh, i see. Btw, as a general concept, i'm wondering if we're not going too much berserk mode on this speed thing... i mean, duplicating a comment, that maybe is even editable in the future (think about Stack Overflow for example), and having to keep it in sync manually on updates... i mean, c'mon: the DB access is already blazingly fast like that (if you compare it to the old school rdbms), and in a high-volume site you would probably use memcached/velocity/something-like-that too anyway. So having A) a couple more ms read time to access 2 different stores instead of B) having some data duplicated and the burden to keep each copy in sync... i think the A) approach would be preferable. Oh and, depending on the situation, it may even be possible to parallelize the access to the 2 or more different stores (while i'm getting the comments from one store, i'm getting the user data from another store) ... and that starts to become pretty interesting...
Njy, What happens if one of the servers is down? What happens from a network traffic perspective? How do you maintain transaction integrity? You have to consider a lot more factors than just a single request.
@Oren: if one of the servers is down, it will happen exactly the same as if you are splitting data A-H / I-S / etc... some data will be unavailable. I mean, i'm not against sharding, on the contrary! I'm just trying to understand the right reasoning behind it, and i don't think that "what if a server goes down" is a good motivation.
The other reasons instead make more sense to me. In reality considering those details we should start discussing the server-to-server network architecture too, because for example in one of our customers we have 3 different network cards for each server, 1 for the outside communications and 2 for internal server-to-server communications, to make it faster to communicate locally in parallel (from the app server to 2 different concurrent db servers).
But then again... yeah, i think it has become all too complicated to talk about this here in a blog post's comments :-)
Anyway thanks for the discussion, helpful as always.
@Njy, In your case, any server that is down would impact ALL users. In my case, any server that is down would impact only the users on that server; all the others can continue operating. That is a pretty important aspect.
@Oren: yeah, without any doubt. Probably it's just that from my point of view that would be unacceptable too (even considering that every page would probably contain data from more than one shard, and that would make each page a mess anyway, if it can still be rendered at all) and usually that aspect is managed in other ways (dormant backup servers/vms or stuff like that). All in all i tend not to think of that as a requirement on which to base the data tier architecture, but it may very well be that my sense of security in that area comes from the fact that i'm not the sysadmin here, and i may take for granted some safety concepts that typically apply to rdbms scenarios, but do not apply in the world of nosql solutions.
Njy, Why on earth would most pages contain data from multiple shards in the first place? That pretty much defeats the idea of splitting the data so you only touch a single server.
@David
The second option is better for that. What do you do in the first option if the User database is full?
On the second option you just change the A-H to A-D, E-H (or some other breaking point that makes sense data wise).
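That kind of range split can be sketched roughly like this (hypothetical Python, invented database names): when one range fills up, it is divided at a midpoint, and only the half that moves to the new server needs migrating.

```python
def split_range(ranges, full_db, midpoint):
    """Split the range owned by `full_db` at `midpoint`.
    ranges: list of (lo, hi, db) tuples covering the key space."""
    out = []
    for lo, hi, db in ranges:
        if db == full_db:
            out.append((lo, midpoint, db))                        # keeps A-D
            out.append((chr(ord(midpoint) + 1), hi, db + "-new")) # E-H moves
        else:
            out.append((lo, hi, db))                              # untouched
    return out

ranges = [("a", "h", "db1"), ("i", "z", "db2")]
split_range(ranges, "db1", "d")
# -> [('a', 'd', 'db1'), ('e', 'h', 'db1-new'), ('i', 'z', 'db2')]
```

In the split-by-type model there is no equivalent move: when the users database fills up, there is no second users database to split into without redesigning the whole scheme.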
Hm, why do you say "Sharding vs Having multiple databases" here? My question is not about the naming convention, but about the "vs". I have seen many examples where the architecture and design are done in a way that you have BOTH sharding AND multiple specific databases! Sometimes it makes sense, sometimes not. It really depends on your requirements! So my vote is -1 for the "vs" :(
Just one example: you have so many users whose usernames start with A :D And each user may have so many images uploaded to the site that you can't actually use one single server anymore for both users / comments and images, even if you start using user id ranges instead of usernames :D for sharding ranges etc.
So it completely makes sense to have SEPARATE databases sometimes. It gives you the ability to scale in addition to "classical" sharding. It can also be required for security purposes, or for high storage requirements (like FB has many databases in use, like HBase, MySQL etc., for different functionality, and each one uses its own "sharding", "partitioning" or call-it-what-you-want schema :D).
P.S. +1 to some people's comments regarding possible network efficiency etc. It is all just too complicated to say "vs" here :)
Ruslan, That was the choice that was presented, and I used it. I agree that sometimes it does make sense to do it that way, but that is far rarer. It would generally be easier to keep sharding down until you get to manageable pieces.
@Daniel Lang
Bob's an idiot? It's obvious you're a Microsoft stooge who thinks Amazon runs the entire site on a single 2008 R2 server with a single SQL Server back end. Man this site attracts a lot of fanbois.
@Daniel Lang
Ummmm don't see you telling Ayende he doesn't know what Flickr's configuration is. What a suck up.
I never used sharding in my life. And I never had to deal with users, comments and images to the extent that I would have nightmares about not being able to fit them all in a single database. And never wrote a map-reduce function. Does it mean that I'm old and useless now? Will I feel more cutting-edge if I start talking about sharding, scalability, nosql and map-reducing? Hope this is the right place to ask.
@Rafal
No, but it is useful to at least know these exist and what they roughly do. If you ever have a problem that they can solve, you will at least be able to consider them.
And don't start talking about them without reason. That way leads to the same destination as "let's always use XML".
Whilst I understand that sharding is great at increasing performance in some specific situations, and that there are many different ways of implementing sharding, I still don't understand how you'd get around the problem of having to query many different databases to be able to display one page of say, a social networking site.
Just as an example, take something like Twitter where users have followers as well as users that they follow. Say you wanted to build a page that for a particular user, would display all of the users they follow plus the users that follow them and some information about each of those users. If you were to shard the user table out as described in the 'proper sharding' way, you'd need to query many different databases, something which clearly isn't advisable and certainly won't increase system performance.
From my own personal experience, many sites face this problem. Any site that has relationships between the entities that are split into different shards may face this problem. With each iteration of a site, more and more information from different shards may end up needing to be displayed on one page and it really sucks to have to say 'sorry, can't do that' to non-techie co-workers who end up looking at you like you don't know what you're doing because they're thinking 'it can't be that hard' - and really, it shouldn't be. So aside from pushing back on change requests or solving this conundrum by duplicating data (which apparently Flickr does with user comments and I really feel is a step backwards in terms of maintainability), how can sharding work well for this sort of scenario?
Annie, You don't query other shards. Each user has their own "timeline" table, with all the entries that they need to show their own page. That is part of the reason that Twitter goes down a lot; a single tweet can cause a massive amount of writes.
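The per-user timeline pattern described here is usually called fan-out on write; a minimal sketch (illustrative Python, made-up data) shows both halves of the trade-off: reads are a single local lookup, but one post by a user with N followers costs N writes, which is the write amplification mentioned above.

```python
from collections import defaultdict

followers = {"alice": ["bob", "carol"]}   # invented follower graph
timelines = defaultdict(list)             # one per user, kept on that user's shard

def publish(author: str, text: str) -> None:
    """Fan out on write: copy the post into every follower's timeline."""
    timelines[author].append(text)            # author's own page
    for follower in followers.get(author, []):
        timelines[follower].append(text)      # N extra writes for N followers

publish("alice", "hello")
# Rendering bob's page never crosses shards: it just reads timelines["bob"].
```

The read path never needs to query the shards of the people a user follows, which is the whole point; the cost is paid up front, at write time.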
@Oren
Sure, that works if you know all of the data requirements for a site or even a page from the start. But again, from my experience, requirements change at the very last minute and before you know it you're being asked to show some data on a user's page that wasn't planned for, and therefore isn't in that user's 'timeline' table.
If you've never been in this situation, you're a lucky man!!
Annie, When you have new requirements, you get the data that you need from the other locations and put it in the user's database, in a "foo" location for the "foo" feature. You literally can't afford to call multiple databases in most scenarios.
@Oren
Yes, I realize that you can't afford to call multiple databases - ask me how I know this. It ends up being dreadfully slow and requires all sorts of band aids to avoid (such as introducing a Redis layer on top)... not ideal!
But what you're saying is with this type of sharding, even for what seems like a small change (i.e. can you please show this user's follower's profile pic thumbnails and their associated taglines where you didn't before), you need database changes and data duplication / migration? This is precisely what turns me off sharding.
Annie, And your alternative to that is...?
And what you describe is by no means a small change.
@Oren
That's precisely my point - I have no good alternative... which is why I'm not blogging about this as a connoisseur on my own blog and instead I'm asking on a forum where there are experts on the matter.
And regarding whether that's a big or small change - I disagree. I think it depends on the company you work for, how well organized projects are, what sort of software you as a developer are used to working on and in this particular case, whether your database is sharded or not, because if it weren't sharded, something like I described would be a very minor change. Sharding is what would make it a bigger / more complicated change.
And furthermore, this highlights the fact that 'sharding' is a complex beast whose advantages and disadvantages should be very carefully considered before implementing it. Unfortunately many software companies looking for a performance improvement are turning to it as a 'silver bullet', not fully understanding what implications it might have for future development on the site. You may end up trading site speed for development speed and maintainability.
Annie, The only reason to go to sharding is if you have no other choice, usually for perf / scale reasons. Sharding makes a LOT of things a LOT harder, and it is only worth going there if you actually need it.
@Oren
Well put - I totally agree with your last statement. Thanks for the discussion :)
to Annie's point; i often struggle with this very topic. i can easily reason about the first approach (where content is in a separate database) primarily because i can have configuration settings that tell me where to get "images", "comments" and "users." granted... performance degrades but, it feels like there are patterns and implementations out there to help with scaling issues and performance (e.g. caching).
when it comes to true "true sharding", however, i miss how my code and implementation reasons about which database or data store to query. something has to tell my code that if the user has a last name between "A-D" connect to this database, etc...
does it really come down to having some "manager" (for the lack of a better word) that knows/learns which database or data store to connect to? are we really talking about potentially connecting to a database and not finding a user or customer because the sharding rules have changed mid stream (e.g. a database has changed from A-H to A-C)?
so while i love the approach and think that "true sharding" has many benefits, it is being able to reason about how to implement a solution that causes many to fall back to a separate database for each "concern"
Meisinger, What you refer to as the "manager" is the sharding function. It can either be a true function (computed from the input) or a function retrieving data from a common source. Depending on your sharding strategy, the sharding may change, and the strategy needs to handle that. In most cases, you only change the sharding function when you deploy a new version to production, because it is such a big change.
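The two flavors of sharding function mentioned here can be sketched side by side (hypothetical Python, invented names): a pure computed function, and one backed by a range table that can be redeployed when shard boundaries change, as in the A-H to A-D / E-H example earlier in the thread.

```python
def computed_shard(user_id: int, shard_count: int = 3) -> int:
    """A true function of the input alone. Changing shard_count moves
    existing data around, which is why it normally only changes with a
    new production deployment."""
    return user_id % shard_count

# A lookup-backed function: the table would live in a common source
# (config service, small shared db) and can be updated on redeploy.
RANGE_TABLE = [("a", "h", "db1"), ("i", "s", "db2"), ("t", "z", "db3")]

def lookup_shard(username: str) -> str:
    first = username[0].lower()
    for lo, hi, db in RANGE_TABLE:
        if lo <= first <= hi:
            return db
    raise KeyError(username)
```

Code never picks a connection string directly; it always asks the sharding function, so the "manager" Meisinger describes is just this one routing step in front of every data access.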