Making code faster: That pesky dictionary


It is a fact of life: if you are looking at high performance code, a dictionary is probably going to be one of the painful spots. Let us see what this looks like in our current program:

[Profiler output: Dictionary operations account for almost 50% of the run time]

Are you kidding me?! Here I am worrying about every little bit and byte to try to get the most performance out of the system, and Dictionary is eating almost 50% of my performance?

Let us look more deeply into this:

[Profiler output: a closer look at the Dictionary costs]

So that is about 3 μs, which is pretty fast. But it isn’t fast enough.

Now, we know that there are about 200,000 unique values in the dictionary (look at the Insert calls in the profiler output), and we have some knowledge about the problem space.
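For context, the hot path presumably looked something like the following sketch; the post does not show this code, so the class and method names here are assumptions:

```csharp
using System.Collections.Generic;

// A sketch of the Dictionary-based version being replaced (assumed, not
// the exact code from the post): each record's duration is accumulated per id.
public class DictionaryStats
{
    private readonly Dictionary<long, long> _stats = new Dictionary<long, long>();

    public void Increment(long id, long duration)
    {
        long current;
        _stats.TryGetValue(id, out current); // current is 0 if the id is new
        _stats[id] = current + duration;     // hashing and lookup on every record
    }
}
```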

The id that we use has 8 characters, so at most we can have a hundred million ids. An array of a hundred million longs would be roughly 762 MB (100,000,000 entries at 8 bytes each), so that is doable. But we can also be fairly certain that the ids are generated in some sequential manner, so there is a strong likelihood that we don't need all of this space.

So I wrote the following function:
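A minimal sketch of that function, assuming the parsed numeric id is used directly as an array index (the field and method names are mine, not the post's):

```csharp
using System;

// A minimal sketch: replace the Dictionary with a growable long[]
// indexed directly by the numeric id, so there is no hashing at all.
public class ArrayStats
{
    private long[] _stats = new long[256]; // start small, grow on demand

    public void Increment(int id, long duration)
    {
        if (id >= _stats.Length)
        {
            // double until the id fits; since ids are roughly sequential,
            // the array only grows to about the highest id actually seen
            var newSize = _stats.Length;
            while (newSize <= id)
                newSize *= 2;
            Array.Resize(ref _stats, newSize);
        }
        _stats[id] += duration; // a single array write, no hash, no lookup
    }
}
```

The resize cost is amortized, and because the ids are roughly sequential the array ends up sized to the highest id seen, not to the full hundred million slots.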

I changed the stats to start with an array of 256 longs, and when I ran it, the results were nice.

587 ms and allocated 2,576 kb with peak working set of 296,484 kb

And our costs now look like this:

[Profiler output: costs after switching to the array-based stats]

This is single-threaded, of course, and it is faster than all the previous multi-threaded versions.

More posts in "Making code faster" series:

  1. (24 Nov 2016) Micro optimizations and parallel work
  2. (23 Nov 2016) Specialization make it faster still
  3. (22 Nov 2016) That pesky dictionary
  4. (21 Nov 2016) Streamlining the output
  5. (18 Nov 2016) Pulling out the profiler
  6. (17 Nov 2016) I like my performance unsafely
  7. (16 Nov 2016) Going down the I/O chute
  8. (15 Nov 2016) Starting from scratch
  9. (14 Nov 2016) The obvious costs
  10. (11 Nov 2016) The interview question