The importance of a data format, Part I – Current state problems


JSON is a really simple format. That makes it very easy to work with, to interchange, to read, etc. Here is the full JSON format definition:

  • object = {} | { members }
  • members = pair | pair , members
  • pair = string : value
  • array = [] | [ elements ]
  • elements = value | value , elements
  • value = string | number | object | array | true | false | null
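The whole grammar is recursive through value. To make that concrete, here is a minimal sketch (my own illustration, not part of the spec) of those productions as C# types:

using System.Collections.Generic;

// A JSON value is exactly one of these cases; objects and arrays
// recurse back into JsonValue, which is the entire grammar above.
public abstract class JsonValue { }
public sealed class JsonString : JsonValue { public string Value; }
public sealed class JsonNumber : JsonValue { public double Value; }
public sealed class JsonBool   : JsonValue { public bool Value; }   // true | false
public sealed class JsonNull   : JsonValue { }
public sealed class JsonArray  : JsonValue
{
    public List<JsonValue> Elements = new List<JsonValue>();
}
public sealed class JsonObject : JsonValue
{
    public Dictionary<string, JsonValue> Members = new Dictionary<string, JsonValue>();
}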

So far, so good. But JSON also has a few major issues. In particular, JSON requires that you read and parse the entire document (at least up to the part you actually care about) before you can do anything with it. Reading JSON documents into memory and actually working with them means loading and parsing the whole thing, and it typically requires the use of dictionaries to get fast access to the data. Let us look at this typical document:

{
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "children": [{"firstName": "Alice"}]
}

How would this look in memory after parsing?

  • Dictionary (root)
    • firstName –> John
    • lastName –> Smith
    • address –> Dictionary
      • state –> NY
      • postalCode –> 10021-3100
    • children –> array
      • [0] –> Dictionary
        • firstName –> Alice

So that is three dictionaries and an array (even assuming we ignore all the strings). Using Newtonsoft.Json, the above document takes 3,840 bytes in managed memory (measured using objsize in WinDBG), while the document itself is only 126 bytes as text. The reason for the difference in size is the dictionaries. Here is a single-entry dictionary that alone accounts for 320 bytes of allocations:

new Dictionary<string, object> { { "test", "tube" } };

And as you can see, this adds up fast. For a database that mostly deals with JSON data, this is a pretty important factor: controlling memory is a very important aspect of the work of a database, and JSON is really inefficient in this regard. For example, imagine that we want to index documents by the names of their children. That is going to force us to parse each entire document, incurring a high penalty in both CPU and memory. We need a better internal format for the data.
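For illustration, here is a sketch of that indexing scenario using Newtonsoft.Json, where GetAllDocuments is a hypothetical stand-in for reading raw documents out of storage:

using System;
using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json.Linq;

class ChildNameIndexer
{
    // Hypothetical: yields the raw JSON text of each stored document.
    static IEnumerable<string> GetAllDocuments()
    {
        yield return @"{ ""firstName"": ""John"", ""children"": [{ ""firstName"": ""Alice"" }] }";
    }

    static void Main()
    {
        foreach (string json in GetAllDocuments())
        {
            // We pay to parse and materialize the entire document...
            JObject doc = JObject.Parse(json);

            // ...even though the index only needs these few strings.
            var childNames = doc["children"]
                .Select(child => (string)child["firstName"]);

            Console.WriteLine(string.Join(", ", childNames));
        }
    }
}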

In my next post, I’ll go into details on this format and what constraints we are working under.
