Challenge: The race condition in the TCP stack


Occasionally, one of our tests hangs. Everything seems to be hunky dory, but it just freezes and does not complete. This is a new piece of code, and thus it is suspect until proven otherwise, but an exhaustive review of it turned up nothing. It took over two days of effort to narrow it down, but eventually we managed to point the finger directly at this line of code:

(code screenshot of the server-side read call; image not preserved)

In certain cases, this line would simply not read anything on the server, even though the client most definitely sent the data. Now, given that TCP is being used, dropped packets might be expected. But we were actually testing on the loopback device, which I expect to be reliable.
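To see why a hang at that point is so surprising, consider a minimal loopback round trip. This is my own sketch (in Python, not the post's C#) of the ordinary blocking-read pattern involved; on the loopback device, bytes written by the client reliably show up at the server's read.

```python
import socket
import threading

received = []

def run_server(srv: socket.socket) -> None:
    conn, _ = srv.accept()
    # The blocking read in question: waits until the client's
    # bytes arrive (or the peer closes the connection).
    received.append(conn.recv(1024))
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # loopback, ephemeral port
srv.listen(1)
t = threading.Thread(target=run_server, args=(srv,))
t.start()

client = socket.socket()
client.connect(srv.getsockname())
client.sendall(b"hello")            # one small message
client.close()

t.join(timeout=5)
srv.close()
print(received[0])                  # -> b'hello'
```

Run in a loop, this never loses a byte, which is exactly why a read that sometimes returns nothing looks, at first, like a bug far below your own code.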

We spent a lot of time investigating this, ending up with a very high degree of certainty that the problem was in the TCP stack somewhere. Somehow, on the loopback device, we were losing packets. Not always, and not consistently, but we were absolutely losing packets, which led the server to wait indefinitely for the client to send the message it already did.

Now, I’m as arrogant as the next developer, but even I don’t think I found that big a bug in TCP. I’m pretty sure that if it were this broken, I would have known about it. Besides, TCP is supposed to retransmit lost packets, so even if there were lost packets on the loopback device, we should have recovered from that.

Trying to figure out what was going on there sucked. It is hard to watch packets on the loopback device in Wireshark, and tracing just told me that a message was sent from the client to the server, but the server never got it.

But we continued, and we ended up with a small reproduction of the issue. Here is the code, and my comments are below:

This code is pretty simple. It starts a TCP server, listens on it, and then reads from and writes to the client. Nothing much here, I think you’ll agree.
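The reproduction itself is a C# screenshot that did not survive here, so purely as an illustration of the shape just described (my sketch, not the post's code), a line-oriented TCP server that reads from and writes back to a client looks roughly like this in Python:

```python
import socket
import threading

def serve_once(srv: socket.socket) -> None:
    conn, _ = srv.accept()
    # Read a line from the client, then write a reply back.
    reader = conn.makefile("rb")
    line = reader.readline()
    conn.sendall(b"got: " + line)
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
t = threading.Thread(target=serve_once, args=(srv,))
t.start()

client = socket.socket()
client.connect(srv.getsockname())
client.sendall(b"hello\n")
reply = client.makefile("rb").readline()
client.close()

t.join(timeout=5)
srv.close()
print(reply)   # -> b'got: hello\n'
```

This version is deliberately boring and works every time; the post's actual reproduction differs in some small way that makes it hang once in a while.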

If you run it, however, it will mostly work, except that sometimes (anywhere between 10 and 500 runs on my machine), it will just hang. I’ll save you some time and let you know that there are no dropped packets; TCP is working properly in this case. But the code just doesn’t. What is frustrating is that it mostly works; it takes a lot of effort to actually get it to fail.

Can you spot the bug? I’ll continue discussion of this in my next post.

More posts in "Challenge" series:

  1. (03 Feb 2025) Giving file system developer ulcer
  2. (20 Jan 2025) What does this code do?
  3. (01 Jul 2024) Efficient snapshotable state
  4. (13 Oct 2023) Fastest node selection metastable error state–answer
  5. (12 Oct 2023) Fastest node selection metastable error state
  6. (19 Sep 2023) Spot the bug
  7. (04 Jan 2023) what does this code print?
  8. (14 Dec 2022) What does this code print?
  9. (01 Jul 2022) Find the stack smash bug… – answer
  10. (30 Jun 2022) Find the stack smash bug…
  11. (03 Jun 2022) Spot the data corruption
  12. (06 May 2022) Spot the optimization–solution
  13. (05 May 2022) Spot the optimization
  14. (06 Apr 2022) Why is this code broken?
  15. (16 Dec 2021) Find the slow down–answer
  16. (15 Dec 2021) Find the slow down
  17. (03 Nov 2021) The code review bug that gives me nightmares–The fix
  18. (02 Nov 2021) The code review bug that gives me nightmares–the issue
  19. (01 Nov 2021) The code review bug that gives me nightmares
  20. (16 Jun 2021) Detecting livelihood in a distributed cluster
  21. (21 Apr 2020) Generate matching shard id–answer
  22. (20 Apr 2020) Generate matching shard id
  23. (02 Jan 2020) Spot the bug in the stream
  24. (28 Sep 2018) The loop that leaks–Answer
  25. (27 Sep 2018) The loop that leaks
  26. (03 Apr 2018) The invisible concurrency bug–Answer
  27. (02 Apr 2018) The invisible concurrency bug
  28. (31 Jan 2018) Find the bug in the fix–answer
  29. (30 Jan 2018) Find the bug in the fix
  30. (19 Jan 2017) What does this code do?
  31. (26 Jul 2016) The race condition in the TCP stack, answer
  32. (25 Jul 2016) The race condition in the TCP stack
  33. (28 Apr 2015) What is the meaning of this change?
  34. (26 Sep 2013) Spot the bug
  35. (27 May 2013) The problem of locking down tasks…
  36. (17 Oct 2011) Minimum number of round trips
  37. (23 Aug 2011) Recent Comments with Future Posts
  38. (02 Aug 2011) Modifying execution approaches
  39. (29 Apr 2011) Stop the leaks
  40. (23 Dec 2010) This code should never hit production
  41. (17 Dec 2010) Your own ThreadLocal
  42. (03 Dec 2010) Querying relative information with RavenDB
  43. (29 Jun 2010) Find the bug
  44. (23 Jun 2010) Dynamically dynamic
  45. (28 Apr 2010) What killed the application?
  46. (19 Mar 2010) What does this code do?
  47. (04 Mar 2010) Robust enumeration over external code
  48. (16 Feb 2010) Premature optimization, and all of that…
  49. (12 Feb 2010) Efficient querying
  50. (10 Feb 2010) Find the resource leak
  51. (21 Oct 2009) Can you spot the bug?
  52. (18 Oct 2009) Why is this wrong?
  53. (17 Oct 2009) Write the check in comment
  54. (15 Sep 2009) NH Prof Exporting Reports
  55. (02 Sep 2009) The lazy loaded inheritance many to one association OR/M conundrum
  56. (01 Sep 2009) Why isn’t select broken?
  57. (06 Aug 2009) Find the bug fixes
  58. (26 May 2009) Find the bug
  59. (14 May 2009) multi threaded test failure
  60. (11 May 2009) The regex that doesn’t match
  61. (24 Mar 2009) probability based selection
  62. (13 Mar 2009) C# Rewriting
  63. (18 Feb 2009) write a self extracting program
  64. (04 Sep 2008) Don't stop with the first DSL abstraction
  65. (02 Aug 2008) What is the problem?
  66. (28 Jul 2008) What does this code do?
  67. (26 Jul 2008) Find the bug fix
  68. (05 Jul 2008) Find the deadlock
  69. (03 Jul 2008) Find the bug
  70. (02 Jul 2008) What is wrong with this code
  71. (05 Jun 2008) why did the tests fail?
  72. (27 May 2008) Striving for better syntax
  73. (13 Apr 2008) calling generics without the generic type
  74. (12 Apr 2008) The directory tree
  75. (24 Mar 2008) Find the version
  76. (21 Jan 2008) Strongly typing weakly typed code
  77. (28 Jun 2007) Windsor Null Object Dependency Facility