* [merged] ipc-semc-avoid-using-spin_unlock_wait.patch removed from -mm tree
@ 2017-02-28 20:50 akpm
From: akpm @ 2017-02-28 20:50 UTC
To: manfred, 1vier1, dave, felixh, hpa, mingo, peterz, tglx,
xiaolong.ye, mm-commits
The patch titled
Subject: ipc/sem.c: avoid using spin_unlock_wait()
has been removed from the -mm tree. Its filename was
ipc-semc-avoid-using-spin_unlock_wait.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Manfred Spraul <manfred@colorfullife.com>
Subject: ipc/sem.c: avoid using spin_unlock_wait()
a) The ACQUIRE in spin_lock() applies to the read, not to the store, at
least on powerpc. This forces us to add an smp_mb() into the fast path.
b) The memory barrier provided by spin_unlock_wait() is right now arch
dependent.
Therefore: Use spin_lock()/spin_unlock() instead of spin_unlock_wait().
Advantage: faster single-op semop() calls, observed +8.9% on x86
(the other solution would be arch dependencies in ipc/sem.c).
Disadvantage: slower complex-op semop() calls, if (and only if)
there are no sleeping operations.
The next patch adds hysteresis; this further reduces the
probability that the slow path is used.
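For illustration only (not part of the patch itself): a minimal sketch of the
lock/unlock "drain" idiom that replaces spin_unlock_wait(). The helper name
drain_simple_ops() is hypothetical; in the patch the loop lives inline in
complexmode_enter(), as the diff below shows.

	/* Hypothetical helper sketching the idiom used by this patch. */
	static void drain_simple_ops(struct sem_array *sma)
	{
		int i;

		for (i = 0; i < sma->sem_nsems; i++) {
			spinlock_t *lock = &sma->sem_base[i].lock;

			/*
			 * Acquiring the lock waits for any simple op that
			 * currently owns this per-semaphore lock ...
			 */
			spin_lock(lock);
			/*
			 * ... and the lock/unlock pair provides the
			 * acquire/release ordering that spin_unlock_wait()
			 * only gave in an arch-dependent way.
			 */
			spin_unlock(lock);
		}
	}

Paying one lock/unlock round trip per semaphore in the (rare) switch to
complex mode is the cost of keeping the simple-op fast path free of an
explicit smp_mb().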
Link: http://lkml.kernel.org/r/1476851896-3590-2-git-send-email-manfred@colorfullife.com
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: <1vier1@web.de>
Cc: kernel test robot <xiaolong.ye@intel.com>
Cc: <felixh@informatik.uni-bremen.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
ipc/sem.c | 25 +++----------------------
1 file changed, 3 insertions(+), 22 deletions(-)
diff -puN ipc/sem.c~ipc-semc-avoid-using-spin_unlock_wait ipc/sem.c
--- a/ipc/sem.c~ipc-semc-avoid-using-spin_unlock_wait
+++ a/ipc/sem.c
@@ -278,24 +278,13 @@ static void complexmode_enter(struct sem
return;
}
- /* We need a full barrier after seting complex_mode:
- * The write to complex_mode must be visible
- * before we read the first sem->lock spinlock state.
- */
- smp_store_mb(sma->complex_mode, true);
+ sma->complex_mode = true;
for (i = 0; i < sma->sem_nsems; i++) {
sem = sma->sem_base + i;
- spin_unlock_wait(&sem->lock);
+ spin_lock(&sem->lock);
+ spin_unlock(&sem->lock);
}
- /*
- * spin_unlock_wait() is not a memory barriers, it is only a
- * control barrier. The code must pair with spin_unlock(&sem->lock),
- * thus just the control barrier is insufficient.
- *
- * smp_rmb() is sufficient, as writes cannot pass the control barrier.
- */
- smp_rmb();
}
/*
@@ -361,14 +350,6 @@ static inline int sem_lock(struct sem_ar
*/
spin_lock(&sem->lock);
- /*
- * See 51d7d5205d33
- * ("powerpc: Add smp_mb() to arch_spin_is_locked()"):
- * A full barrier is required: the write of sem->lock
- * must be visible before the read is executed
- */
- smp_mb();