Lock convoy

In computer science, a lock convoy is a performance problem that can occur when using locks for concurrency control in a multithreaded application.

A lock convoy occurs when multiple threads of equal priority contend repeatedly for the same lock.[1][2] Unlike deadlock and livelock situations, the threads in a lock convoy do progress; however, each time a thread attempts to acquire the lock and fails, it relinquishes the remainder of its scheduling quantum and forces a context switch. The overhead of repeated context switches and underutilization of scheduling quanta degrade overall performance.
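
To make the pattern concrete, the following C++ sketch (the thread count, loop length, and shared counter are illustrative assumptions, not drawn from the references) shows the kind of workload under which a convoy can form: several equal-priority threads repeatedly take the same mutex around a very short critical section, so any thread that finds the lock held blocks, gives up the rest of its quantum, and forces a context switch.

    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex shared_lock;    // single lock guarding a shared resource
    long shared_counter = 0;   // stand-in for that shared resource

    void worker() {
        for (int i = 0; i < 1000000; ++i) {
            // Each iteration holds the lock only briefly. With several
            // equal-priority threads doing this, a thread that finds the
            // lock held blocks, yielding the remainder of its quantum.
            std::lock_guard<std::mutex> guard(shared_lock);
            ++shared_counter;  // very short critical section
        }
    }

    int main() {
        std::vector<std::thread> threads;
        for (int t = 0; t < 4; ++t)       // four equal-priority threads
            threads.emplace_back(worker);
        for (auto& th : threads)
            th.join();
    }

In such a run, much of each thread's time is spent blocking on shared_lock and being rescheduled rather than doing useful work.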

A lock convoy behaves much like a car convoy (a "wide moving jam") forming and dissipating on a congested highway.[3] When more cars enter the road than it can sustain, one car eventually has to stop, and the cars arriving behind it stop one after another, forming the convoy. By the time the cars at the front of the convoy can move again, new cars are still joining at its rear, so every arriving car must wait its turn in the queue before it can make progress.

Lock convoys often occur when concurrency control primitives such as locks serialize access to a commonly used resource, such as a memory heap or a thread pool. They can sometimes be addressed by using non-locking alternatives such as lock-free algorithms or by altering the relative priorities of the contending threads.
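
When the shared state is as simple as a single counter, one such non-locking alternative is to replace the mutex with an atomic read-modify-write operation. The C++ sketch below (again an illustration rather than a general remedy) uses std::atomic so that no thread ever blocks or gives up its quantum waiting for another.

    #include <atomic>
    #include <thread>
    #include <vector>

    // Lock-free replacement for a mutex-guarded counter.
    std::atomic<long> shared_counter{0};

    void worker() {
        for (int i = 0; i < 1000000; ++i) {
            // fetch_add is a single atomic read-modify-write: contention may
            // slow it down, but threads are never descheduled while waiting.
            shared_counter.fetch_add(1, std::memory_order_relaxed);
        }
    }

    int main() {
        std::vector<std::thread> threads;
        for (int t = 0; t < 4; ++t)
            threads.emplace_back(worker);
        for (auto& th : threads)
            th.join();
    }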

References

  1. ^ Silberschatz, Abraham (2013). Operating System Concepts. John Wiley & Sons Inc. ISBN 978-1118129388.
  2. ^ Blasgen, Mike; Gray, Jim; Mitoma, Mike; Price, Tom (1979). "The convoy phenomenon". Operating Systems Review. 13 (2): 20–25. CiteSeerX 10.1.1.646.921. doi:10.1145/850657.850659. S2CID 40305779.
  3. ^ Kilian, Dave (November 2022). "A Complete Guide to Lock Convoys". davekilian.com. Retrieved 7 January 2025.