From: Jiri Slaby <jslaby@suse.cz>
To: stable@vger.kernel.org
Cc: Will Deacon, Peter Zijlstra, Catalin Marinas, Jiri Slaby
Subject: [patch added to 3.12-stable] arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()
Date: Thu, 29 Sep 2016 11:06:24 +0200
Message-Id: <20160929090654.27405-13-jslaby@suse.cz>
In-Reply-To: <20160929090654.27405-1-jslaby@suse.cz>
References: <20160929090654.27405-1-jslaby@suse.cz>

From: Will Deacon <will.deacon@arm.com>

This patch has been added to the 3.12 stable tree. If you have any
objections, please let us know.

===============

commit 872c63fbf9e153146b07f0cece4da0d70b283eeb upstream.

smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation
to a full barrier, such that prior stores are ordered with respect to
loads and stores occurring inside the critical section.

Unfortunately, the core code defines the barrier as smp_wmb(), which is
insufficient to provide the required ordering guarantees when used in
conjunction with our load-acquire-based spinlock implementation.

This patch overrides the arm64 definition of smp_mb__before_spinlock()
to map to a full smp_mb().

Cc: Peter Zijlstra
Reported-by: Alan Stern
Signed-off-by: Will Deacon
Signed-off-by: Catalin Marinas
Signed-off-by: Jiri Slaby
---
 arch/arm64/include/asm/spinlock.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 0defa0728a9b..c3cab6f87de4 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -200,4 +200,14 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
 #define arch_read_relax(lock)	cpu_relax()
 #define arch_write_relax(lock)	cpu_relax()
 
+/*
+ * Accesses appearing in program order before a spin_lock() operation
+ * can be reordered with accesses inside the critical section, by virtue
+ * of arch_spin_lock being constructed using acquire semantics.
+ *
+ * In cases where this is problematic (e.g. try_to_wake_up), an
+ * smp_mb__before_spinlock() can restore the required ordering.
+ */
+#define smp_mb__before_spinlock()	smp_mb()
+
 #endif /* __ASM_SPINLOCK_H */
-- 
2.10.0
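
[Editor's note, not part of the patch mail: the sketch below restates the
ordering argument from the commit message as a self-contained userspace
fragment using C11 atomics. The names flag_a, flag_b, waker and
acquire_lock are invented for the example, and the C11 fences only
approximate the kernel's smp_wmb()/smp_mb(); none of this is kernel API.]

#include <stdatomic.h>

static atomic_int lock;     /* stand-in for an acquire-only (LDAXR-style) spinlock */
static atomic_int flag_a;   /* stored before the lock is taken                     */
static atomic_int flag_b;   /* loaded inside the critical section                  */

static void acquire_lock(void)
{
	int expected = 0;

	/* acquire semantics only, like arm64's load-acquire based arch_spin_lock() */
	while (!atomic_compare_exchange_weak_explicit(&lock, &expected, 1,
						      memory_order_acquire,
						      memory_order_relaxed))
		expected = 0;
}

static void release_lock(void)
{
	atomic_store_explicit(&lock, 0, memory_order_release);
}

int waker(void)
{
	int seen;

	/* store that must be visible before anything read under the lock */
	atomic_store_explicit(&flag_a, 1, memory_order_relaxed);

	/*
	 * With the generic definition (smp_wmb(), roughly a store-store
	 * barrier) the store to flag_a is ordered only against later
	 * stores; the load of flag_b below may still be performed first,
	 * because the lock acquisition itself has only acquire semantics.
	 * A full barrier (smp_mb(), approximated here by a seq_cst fence)
	 * also orders the prior store against that load, which is the
	 * guarantee try_to_wake_up() relies on.
	 */
	atomic_thread_fence(memory_order_seq_cst);	/* smp_mb__before_spinlock() */

	acquire_lock();
	seen = atomic_load_explicit(&flag_b, memory_order_relaxed);
	release_lock();

	return seen;
}

The kernel-side fix is only the smp_mb() mapping in the patch above; the
fragment merely illustrates why a write-only barrier does not give the
store-to-load ordering the commit message describes.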