World’s Smallest No SQL Database: Concurrency
It may surprise you, but the World’s Smallest No SQL Database has a well defined concurrency model. Basically, it uses Last Write Wins, and with that we are safe from any concurrency issues. That is pretty much it, right?
Well, not really. In a real world system, you actually need to do a lot more with concurrency. Some obvious examples:
- Create this value only if it doesn’t exist already.
- Update this value only if it didn’t change since I last saw it.
Implementing those is actually going to be pretty simple. All you need to do is to have a metadata field, version, that is incremented on every change. Here is the change that we need to make:
public class Data
{
    public byte[] Value;
    public int Version;
}

static readonly ConcurrentDictionary<string, Data> data =
    new ConcurrentDictionary<string, Data>(StringComparer.InvariantCultureIgnoreCase);

public HttpResponseMessage Get(string key)
{
    Data value;
    if (data.TryGetValue(key, out value) == false)
        return new HttpResponseMessage(HttpStatusCode.NotFound);

    var response = new HttpResponseMessage
    {
        Content = new ByteArrayContent(value.Value)
    };
    response.Headers.Add("Version", value.Version.ToString());
    return response;
}

public void Put(string key, [FromBody]byte[] value, int version)
{
    data.AddOrUpdate(key,
        _ =>
        { // create: a new value must be sent with version 0
            if (version != 0)
                throw new ConcurrencyException();
            return new Data { Value = value, Version = 1 };
        },
        (_, prev) =>
        { // update: the version sent must match the stored version
            if (prev.Version != version)
                throw new ConcurrencyException();
            return new Data { Value = value, Version = prev.Version + 1 };
        });
}
As you can see, it merely doubled the amount of code that we had to write, but it is pretty obvious how it works. RavenDB actually uses something very similar to that for concurrency control for writes, although the RavenDB ETag mechanism is also doing a lot more.
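The protocol this gives callers can be sketched in-process like so. This is a minimal, self-contained sketch, not RavenDB's API: the `Store` wrapper, the key names, and the `Main` driver are all illustrative.

```csharp
using System;
using System.Collections.Concurrent;

public class ConcurrencyException : Exception { }

public class Data
{
    public byte[] Value;
    public int Version;
}

public class Store
{
    readonly ConcurrentDictionary<string, Data> data =
        new ConcurrentDictionary<string, Data>(StringComparer.InvariantCultureIgnoreCase);

    public Data Get(string key)
    {
        Data value;
        return data.TryGetValue(key, out value) ? value : null;
    }

    public void Put(string key, byte[] value, int version)
    {
        data.AddOrUpdate(key,
            _ =>
            { // create: a new value must be sent with version 0
                if (version != 0)
                    throw new ConcurrencyException();
                return new Data { Value = value, Version = 1 };
            },
            (_, prev) =>
            { // update: the version sent must match the stored version
                if (prev.Version != version)
                    throw new ConcurrencyException();
                return new Data { Value = value, Version = prev.Version + 1 };
            });
    }
}

class Demo
{
    static void Main()
    {
        var store = new Store();
        store.Put("users/1", new byte[] { 1 }, 0);                // create with version 0
        var current = store.Get("users/1");                       // current.Version is now 1
        store.Put("users/1", new byte[] { 2 }, current.Version);  // update with the version we saw

        try
        {
            store.Put("users/1", new byte[] { 3 }, 1);            // stale version: store is at 2
            Console.WriteLine("no conflict");
        }
        catch (ConcurrencyException)
        {
            Console.WriteLine("conflict detected");
        }
    }
}
```

The second writer, still holding version 1, is rejected, which is exactly the "update this value only if it didn't change since I last saw it" behavior from the list above.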
But the version system that we have above is actually not enough, it only handles concurrency control for updates. What about concurrency control for reads?
In particular, how are we going to handle non repeatable reads or phantom reads?
- Non repeatable reads happen when you read a value, it is then deleted, and when you try to read it again, it is gone.
- A phantom read is the other way around: first you read and didn’t find anything, then the value was created, and a second read finds it.
This is actually interesting, because you only care about those for the duration of a single operation / transaction / session. As it stands now, we actually have no way to handle either issue. This can lead to… interesting bugs that only happen under very specific scenarios.
With RavenDB, we actually handle both cases. In a session lifetime, you are guaranteed that if you saw a document, you’ll continue to see this document until the end of the session, which deals with the issue of non repeatable reads. Conversely, if you didn’t see a document, you will continue to not see it until the session is closed. This is done for Load; queries are a little bit different.
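One way to get that session guarantee is to remember every read the session has made, misses included. The sketch below is illustrative only, not how RavenDB implements it; the `Session` class and key names are assumptions.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public class Session
{
    readonly ConcurrentDictionary<string, byte[]> store;
    // Everything this session has observed; a null value records a miss.
    readonly Dictionary<string, byte[]> seen = new Dictionary<string, byte[]>();

    public Session(ConcurrentDictionary<string, byte[]> store)
    {
        this.store = store;
    }

    public byte[] Load(string key)
    {
        byte[] value;
        if (seen.TryGetValue(key, out value))
            return value; // repeatable: same answer as the first read

        store.TryGetValue(key, out value); // null if not found
        seen[key] = value;                 // remember hits and misses alike
        return value;
    }
}

class Demo
{
    static void Main()
    {
        var store = new ConcurrentDictionary<string, byte[]>();
        var session = new Session(store);

        Console.WriteLine(session.Load("users/1") == null); // not there yet
        store["users/1"] = new byte[] { 42 };               // created behind our back
        Console.WriteLine(session.Load("users/1") == null); // still a miss: no phantom read

        var other = new Session(store);
        Console.WriteLine(other.Load("users/1") != null);   // a fresh session sees it
    }
}
```

Caching the misses is the part that stops phantom reads; caching the hits is what stops non repeatable reads (the same cache would also have to pin values against deletion).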
Another aspect of concurrency that we need to deal with is Locking. Sometimes a user has a really good reason why they want to lock a record for a period of time. This is pretty much the only way to handle “checking out” of a record in a scenario where you have multiple users wanting to make changes to a record concurrently. Locks can be Write or ReadWrite locks. A Write lock allows users to read the data, but prevents them from changing it. When used in practice, this is usually going to immediately fail an operation, rather than make you wait for it.
The reasoning behind immediately failing a write is that if you encountered a record with a write lock, it means that it was either already written to or is about to be written to. In that case, your write is going to be operating on stale data, so we might as well fail you immediately. For ReadWrite locks, the situation is a bit different. In this case, we want to also prevent readers from moving on. This is usually done to ensure a consistent state system wise, and basically, any operation on the record would have to wait until the lock is removed.
In practice, ReadWrite locks can cause a lot of issues. The moment that you have people start placing locks, you have to deal with lock expiration, manual unlocking, abandoned lock detection, lock maintenance, etc. About the only thing that they are good for is to allow the user to make a set of changes and present them as one unit, if we don’t have better transaction support. But I’ll discuss that in another post. In the meantime, just keep in mind that from my point of view, ReadWrite locks are pretty useless all around.
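The Write lock semantics above (readers pass, writers fail fast, locks expire so abandoned ones don't linger) can be sketched as follows. This is a hypothetical illustration; `LockingStore`, `WriteLockedException` and all names here are made up for the example.

```csharp
using System;
using System.Collections.Concurrent;

public class WriteLockedException : Exception { }

public class LockingStore
{
    readonly ConcurrentDictionary<string, byte[]> data =
        new ConcurrentDictionary<string, byte[]>();
    // Keys currently write-locked, with an expiry to cope with abandoned locks.
    readonly ConcurrentDictionary<string, DateTime> writeLocks =
        new ConcurrentDictionary<string, DateTime>();

    public void Lock(string key, TimeSpan duration)
    {
        writeLocks[key] = DateTime.UtcNow + duration;
    }

    public void Unlock(string key)
    {
        DateTime _;
        writeLocks.TryRemove(key, out _);
    }

    public byte[] Get(string key)
    {
        byte[] value;
        data.TryGetValue(key, out value);
        return value; // a Write lock never blocks readers
    }

    public void Put(string key, byte[] value)
    {
        DateTime expiresAt;
        if (writeLocks.TryGetValue(key, out expiresAt) && expiresAt > DateTime.UtcNow)
            throw new WriteLockedException(); // fail fast: this data is about to change
        data[key] = value;
    }
}

class Demo
{
    static void Main()
    {
        var store = new LockingStore();
        store.Put("users/1", new byte[] { 1 });
        store.Lock("users/1", TimeSpan.FromMinutes(5));

        Console.WriteLine(store.Get("users/1") != null); // reads still allowed
        try
        {
            store.Put("users/1", new byte[] { 2 });
            Console.WriteLine("write succeeded");
        }
        catch (WriteLockedException)
        {
            Console.WriteLine("write rejected");
        }

        store.Unlock("users/1");
        store.Put("users/1", new byte[] { 2 });
        Console.WriteLine("unlocked write succeeded");
    }
}
```

A ReadWrite lock would additionally make `Get` wait (or fail) while the lock is held, which is exactly where the expiration and abandoned-lock headaches come from.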
More posts in "World’s Smallest No SQL Database" series:
- (16 Aug 2013) Persistent data
- (22 Jul 2013) Persistent transaction logs
- (18 Jul 2013) ACID & Transactions
- (11 Jul 2013) Concurrency
- (10 Jul 2013) Memory
- (08 Jul 2013) The Devil Is In The Details
Comments
Great series! Love it.
Loving the series. Very instructive, but I would like to see how you test it and make sure it does what it says it does. Thank you!
The ConcurrentDictionary strikes again! Multiple concurrent put-update calls can happen at the same time for the same key because the user delegate is not being called under a lock. This can cause different data to be read for the same version.
@tobi,
If that happens, the ConcurrentDictionary will call the update delegate again and again until it can finally replace the _prev_ you got with the updated value you returned.
And unless your update delegate has side effects, no one will be any wiser.
Also I'm not sure what you mean with "This can cause different data to be read for the same version." I don't see how that could happen with Ayende's code.
Tobi, To my knowledge, only one of those updates will actually be made visible. So you won't see different data for the same version.
great, apparently both underscores and asterisks make your text italic.
Let's see %if% I %%can%% $find$ $$a$$ way #to# ##make## text __bold__.
Hmm, maybe Tobi is on to something ...
I think it's possible to get to the point that two threads try to update the same version. One of them will win but the other will never know it has lost, i.e. no ConcurrencyException is thrown.
// Ryan
Ryan, The calling code doesn't actually care. Even if the value is there, it can't assume that it will stay there (another request may come in).
Hmm, are we on the same page?
I am talking about two threads trying to update with the same version. One of them should/must get the ConcurrencyException, but I think it's possible, due to the implementation of ConcurrentDictionary, that both think they succeeded.
// Ryan
No, that won't happen. The CD will take care of retrying the second one.
I did some testing and can confirm the update will work indeed! Seems the impl of concurrentdictionary has some versioning bookkeeping too :)
However, when two threads want to insert the same key at the precise same time, one of them will win and the other will not know his insert has been overwritten. I do not see a way how to fix that since the dictionary will not expose the data of the other thread yet ...
// Ryan
Take a look at AddOrUpdate in Reflector, it looks like this:
When it tries an update (the inner while loop) it will only succeeded if another Update hasn't happened in the mean time as it uses TryUpdate(.., local3) to write it back at the end.
For the thread that fails, the UpdateFunc will then get called a 2nd time and the code that Ayende has ("if(prev.Version != version)") will be triggered as the current value (prev in this case) will be different.
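The retry behavior Matt describes can be demonstrated with a short, self-contained sketch (illustrative; the key name and iteration count are arbitrary): under contention the update delegate may run more times than there are updates, but each update is applied exactly once against a fresh `prev`.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class Demo
{
    static void Main()
    {
        var dict = new ConcurrentDictionary<string, int>();
        int updateFactoryCalls = 0;

        Parallel.For(0, 1000, _ =>
        {
            dict.AddOrUpdate("counter",
                key => 1, // first writer creates the value
                (key, prev) =>
                {
                    // Counts invocations, not successful updates: losing
                    // threads are retried with the latest value as 'prev'.
                    Interlocked.Increment(ref updateFactoryCalls);
                    return prev + 1;
                });
        });

        Console.WriteLine(dict["counter"]);          // every increment applied once
        Console.WriteLine(updateFactoryCalls >= 999); // but possibly more delegate calls
    }
}
```

This is also why the update delegate must be side-effect free: a retried delegate runs again with different input, exactly as in Ayende's version check.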
Ryan,
I decompiled CD.AddOrUpdate and this is basically what happens:
do
{
    while (!this.TryGetValue(key, out comparisonValue))
    {
        // attempt to add, and return if successful
    }
    newValue = updateValueFactory(key, comparisonValue);
}
while (!this.TryUpdate(key, newValue, comparisonValue));
Strangely the remarks of this method only mention that addValueFactory will be called as many times as needed.
sigh defeated by the blog formatting again. Ayende, could you please implement a preview function?
Looking at Matt's post, it's funny that Reflector and dotPeek (which I used) apparently decompiled the loops the other way around. Or maybe the implementation changed in between versions (I looked at v4.0.30319).
Matt & Patrick,
Yup, that behavior was exposed by my tests. The delegates are called for a second time when needed.
Ayende's code works as expected with updates. However, with inserts there is no way to determine when to throw a ConcurrencyException because of a second run of the delegate.
// Ryan
I take that back, inserts works as good as updates. My tests for inserts had a bug ...
// Ryan