Comments
A Comparator having a side-effect while being called a gazillion times? Reminds me of the guys who did Database queries in Command.CanExecute implementations...
Frank, How much time does it take to call it once?
Never mind the number of times it's called - I'm still wondering why a comparison would have to do a write; it does sound like something that shouldn't have a side effect.
You have some sort of a list implementation which has a method to find a value which is greater than or equal to the given value, and this FindGreaterOrEqual method performs some sort of an insert which then gets written to storage/disk. Since I have no idea how RavenDB works, I can only say that you should remove this insert and write from this method in order to speed it up :)
Frank, Where do you see the comparison making a write?
All the time is spent in internal methods.
68 million calls in 5s? I'd say it's pretty fast.
FindGreaterOrEqual is called by all 'subroutines'; can this be optimized so that it is called only once?
@Ayende D'oh, sorry, was reading the stack traces backwards (><)
Your Raven.Storage.Impl.StorageWriter+<WriteAsync> is the bottleneck within the scope of your code.
The calls to this method chain: FindGreaterOrEqual() -> Insert() -> Apply() -> MoveNext() are common to all 3 levels of the trace stack that you printed out there. The middle method chain, with 11,690 calls, is the "expensive" one.
The (RunInternal) is where the time is being eaten - I assume that is the background worker thread executing the delegate/lambda constructed by Insert/Apply/MoveNext, while the implementation is setting up to receive the next write in parallel?
If the times are CPU time, I would say you have thread-sync / context-switching overhead in there somewhere - it depends on how many threads you threw at your test.
If the times shown are wall-clock time, then you might have a locking / blocking issue... not enough threads in the thread pool?
Neil, I think you are reading the stack trace backward.
Maybe it's context switching?
I think the problem is in ExecutionContext.RunInternal, which is restoring the context from the original thread, and this is probably unnecessary in this scenario (hence using ConfigureAwait(false) could help)?
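Something like this, where the write path opts out of resuming on the captured context (a generic sketch with made-up names, not RavenDB's actual code):

```csharp
using System.IO;
using System.Threading.Tasks;

static class WriterSketch
{
    // Hypothetical: ConfigureAwait(false) tells the awaiter not to marshal the
    // continuation back onto the captured context after the write completes.
    public static async Task WriteBatchAsync(Stream storage, byte[] batch)
    {
        await storage.WriteAsync(batch, 0, batch.Length).ConfigureAwait(false);
    }
}
```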
Gleb, You can assume no multi-threading is actually taking place.
Andre, No, it isn't that.
D'oh - same mistake as Frank. (I'm used to assuming a call tree starts from the caller.)
Ok - assuming no multi-threading gives a good clue...
Why is Compare called 68,000,000 times? Is the implementation of the list comparator checking every value in the list (like a table scan)? Maybe that needs the equivalent of an index or a more efficient algorithm (binary chop or something faster)?
Scratch that last comment - I just saw that we are trying to find GreaterThanOrEqual to...
A sorted list would probably help more than an index / algorithm.
I give up :) I was typing before I checked. A skip list is already sorted :P
Last try...
The comparer being used is a CaseInsensitiveComparer. Assuming your keys have to be strings... would an OrdinalIgnoreCase comparison be faster? (Not sure of your key constraints...)
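For what it's worth, a quick micro-benchmark sketch of the two (made-up keys and harness, nothing from the post): CaseInsensitiveComparer goes through culture-aware comparison machinery, while StringComparer.OrdinalIgnoreCase compares the characters directly.

```csharp
using System;
using System.Collections;
using System.Diagnostics;

class ComparerCheck
{
    static void Main()
    {
        string a = "users/1234567", b = "USERS/1234568";   // hypothetical keys
        const int N = 10_000_000;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            CaseInsensitiveComparer.DefaultInvariant.Compare(a, b);   // culture-aware
        Console.WriteLine($"CaseInsensitiveComparer: {sw.Elapsed}");

        sw.Restart();
        for (int i = 0; i < N; i++)
            StringComparer.OrdinalIgnoreCase.Compare(a, b);           // ordinal
        Console.WriteLine($"OrdinalIgnoreCase:       {sw.Elapsed}");
    }
}
```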
That would approximately halve the cost
I'd assume there's some sort of O(n²) algorithm going on here. FindGreaterOrEqual is called 11,691 times, while the other calls are made ~68 million times ≈ (11692^2)/2 ≈ the handshake equation. Every value is being compared against every other value.
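A quick sanity check of that estimate (inserting n items one at a time into a structure that scans linearly costs roughly n²/2 comparisons in total):

```csharp
using System;

// Handshake estimate: n inserts, each comparing against everything already seen.
long n = 11692;
Console.WriteLine(n * n / 2);   // 68351432 -> right around the observed ~68M Compare calls
```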
I think that even though it's being called 68 million times, 5 secs spent in CaseInsensitiveComparator(..) is high. Plus, it's always best to try and optimise the most expensive thing first.
Assuming it's just comparing 2 arrays of chars, there might be a faster way?
So basically, you're not going to get anywhere fixing any other performance issues until you use a sorting algorithm that's more efficient than bubble sort. That 68 million number needs to come way, way down. How about a nice Shell sort?
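(For reference, a textbook Shell sort - an insertion sort run over progressively smaller gaps - looks something like the sketch below; purely illustrative, not a claim about what RavenDB should actually use.)

```csharp
static class Sorts
{
    // Textbook Shell sort: insertion sort over shrinking gaps, far fewer
    // moves/compares than bubble sort's ~n^2/2.
    public static void ShellSort(int[] a)
    {
        for (int gap = a.Length / 2; gap > 0; gap /= 2)
        {
            for (int i = gap; i < a.Length; i++)
            {
                int tmp = a[i], j = i;
                // Shift gap-spaced elements right until tmp fits.
                while (j >= gap && a[j - gap] > tmp)
                {
                    a[j] = a[j - gap];
                    j -= gap;
                }
                a[j] = tmp;
            }
        }
    }
}
```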
Totally agree with the rest; the number of comparisons needs to come down.
Following up on what Jim T found:
A proper skip list would only need log(n) comparisons to insert an item. So I would conclude your skip list is actually a linked list. (And you're importing something that's already in order.)
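To make the log(n) point concrete, here's a minimal Pugh-style skip list search; all names here are mine, this is not the RavenDB/managed-LevelDB code:

```csharp
using System;
using System.Collections.Generic;

// Minimal skip list search sketch, to show where the log(n) comes from.
class SkipListSketch
{
    const int MaxHeight = 12;

    class Node
    {
        public readonly string Key;
        public readonly Node[] Next;   // one forward link per level
        public Node(string key, int height) { Key = key; Next = new Node[height]; }
    }

    readonly Node _head = new Node(null, MaxHeight);
    readonly IComparer<string> _cmp = StringComparer.Ordinal;

    // Returns the first node whose key is >= key, or null if none exists.
    // Each level lets the search leap over a whole run of nodes, so a healthy
    // list does O(log n) comparisons instead of walking every node at level 0.
    Node FindGreaterOrEqual(string key)
    {
        var node = _head;
        for (int level = MaxHeight - 1; level >= 0; level--)
            while (node.Next[level] != null && _cmp.Compare(node.Next[level].Key, key) < 0)
                node = node.Next[level];
        return node.Next[0];
    }
}
```

If the random level assignment is broken and every node ends up with height 1, the loop above degenerates into a plain level-0 linked-list walk: ~n/2 comparisons per insert, which for ~11,691 in-order inserts lands right around the 68M compares in the trace.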
Aside from the other suggestions (better sort algorithms, rewriting to reduce calls to a method, etc.), I had another thought:
One thing that sticks out to me: the "SkipList+Next" method is called almost the same number of times as Compare (which makes sense if this is a naïve O(n²) comparison), yet the time it takes accounts for ~50% of the comparison time. What this could mean is that the comparison itself may not necessarily be the slow part (aside from perhaps a less-than-optimal implementation, as others have pointed out).
Initial thoughts: "+Next" is doing standard linked-list style "pointer chasing", which is possibly killing any chance the processor has of utilizing the L1/L2/L3 cache lines effectively (each pointer is a random point in memory... in other words, not contiguous and more than likely not pre-loaded into the CPU cache). So, 68+ million calls that are mostly cache misses are not going to run quick at all.
I haven't gone back through the posts to see the details of the managed LevelDB SkipList implementation, but if each "level" is not stored as a contiguous array, that may be one spot to fix. That way, the CPU could load a much larger range into cache at a time and - depending on the "level" - perhaps get very high cache hit rates.
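Something along these lines is what I mean - nodes in one flat array, linked by index rather than by object reference (my sketch, not the actual managed LevelDB port):

```csharp
using System;

// Hypothetical "contiguous storage" variant: a scan walks sequential memory,
// which the CPU prefetcher can actually help with.
struct ArenaNode
{
    public string Key;   // note: the key itself is still a heap reference here
    public int Next;     // index of the next node at level 0; -1 = none
}

class NodeArena
{
    ArenaNode[] _nodes = new ArenaNode[1024];
    int _count;

    public int Allocate(string key, int next)
    {
        if (_count == _nodes.Length)
            Array.Resize(ref _nodes, _nodes.Length * 2);
        _nodes[_count] = new ArenaNode { Key = key, Next = next };
        return _count++;   // the index plays the role of a pointer
    }

    public ref ArenaNode Get(int index) => ref _nodes[index];
}
```

The keys here are still individual heap strings, so this only fixes half the pointer chasing; if I remember right, the C++ LevelDB memtable copies the key bytes into its arena as well, which is what makes the scan genuinely sequential.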
Or... none of that is right, and I've bumped my head somewhere!!!
5s for 68 million calls
4s for 11K calls
2s for 68 million calls

The time is spent in ExecutionContext.RunInternal, which looks like the write was underway but not yet completed; hence, I guess, the wait time for the writer thread(s) to complete the write operation.
You sort on stuff which is about to be written, which could lead to wrong or unexpected results, but you certainly do wait much longer for your search results while stuff is being written.
To solve the issue, you should exclude from the sort any writes which are not yet completed, to get your speed back.
Skip lists are sorted and (hence) have logarithmic complexity, so I think there could be a problem in the implementation of FindGreaterOrEqual.
Oops, I misread the call stack as well. Seems to be the issue you posted a few days ago, where the binary search was degenerate, which is causing the 68M compares.
There are three outcomes from calling FindGreaterOrEqual:

1. Comparer is called 68 million times in 5s
2. Next is called 68 million times in 2s
3. Nothing else is called 11K times in 4s

What could #3 be doing that performs worse than the first two?
If #3 is inserting at the front or back, and that is slow, can you keep track of the first and last entries and run that super-fast compare at the beginning of FindGreaterOrEqual?
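A sketch of that fast path, assuming keys arrive mostly in ascending order (e.g. a bulk import); the List<string> is just a stand-in for the skip list and all names are mine:

```csharp
using System;
using System.Collections.Generic;

class TailFastPathList
{
    readonly List<string> _keys = new List<string>();
    readonly IComparer<string> _cmp = StringComparer.Ordinal;

    public void Insert(string key)
    {
        // Fast path: key belongs at the tail -- one compare, no search at all.
        if (_keys.Count == 0 || _cmp.Compare(key, _keys[_keys.Count - 1]) >= 0)
        {
            _keys.Add(key);
            return;
        }
        // Slow path: regular FindGreaterOrEqual walk (binary search stand-in).
        int i = _keys.BinarySearch(key, _cmp);
        _keys.Insert(i < 0 ? ~i : i, key);
    }
}
```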