Changes

36 bytes added, 07:43, 18 April 2020
** Speculatively, we can consider the following motivation for the change:
*** The old spin lock cannot atomically update both tickets with a single write. Thus, it requires two loops (one to take the next ticket, and one to check whether the obtained ticket has become the active one, i.e. whether the lock is still held).
*** The new spin lock can atomically update both tickets with a single write. Thus, in the case where the lock is not held by another core when it is acquired, the new spin lock only has to execute one atomic loop.
*** From this we can observe that the new spin lock is likely faster under low contention (where the lock is expected to be free). Its downside is potential false sharing (since the lock word does not own its cache line). It probably also performs best when placed at the start of a cache line, with the protected data contained entirely within that same line.
*** Most kernel locks are expected to be relatively uncontended, and two locks rarely end up in the same cache line, so false sharing is not a major concern in practice. Thus the switch to the new ARM reference manual style lock should yield an overall performance improvement.
