From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754442AbbKLMb3 (ORCPT );
	Thu, 12 Nov 2015 07:31:29 -0500
Received: from casper.infradead.org ([85.118.1.10]:40798 "EHLO casper.infradead.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752487AbbKLMb2 (ORCPT );
	Thu, 12 Nov 2015 07:31:28 -0500
Date: Thu, 12 Nov 2015 13:31:23 +0100
From: Peter Zijlstra
To: ralf@linux-mips.org, ddaney@caviumnetworks.com
Cc: linux-kernel@vger.kernel.org, Paul McKenney, Will Deacon,
	torvalds@linux-foundation.org, boqun.feng@gmail.com
Subject: [RFC][PATCH] mips: Fix arch_spin_unlock()
Message-ID: <20151112123123.GZ17308@twins.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2012-12-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Hi,

I think the MIPS arch_spin_unlock() is broken.

spin_unlock() must have RELEASE semantics; these require that neither
LOADs nor STOREs leak out of the critical section.

From what I know, MIPS has a relaxed memory model which allows reads to
pass stores, and as implemented arch_spin_unlock() only issues a wmb,
which does not order prior reads against later stores.

Therefore upgrade the wmb() to smp_mb().

(Also, why the unconditional wmb, as opposed to smp_wmb()?)

Maybe-Signed-off-by: Peter Zijlstra (Intel)
---
diff --git a/arch/mips/include/asm/spinlock.h b/arch/mips/include/asm/spinlock.h
index 40196bebe849..b2ca13f06152 100644
--- a/arch/mips/include/asm/spinlock.h
+++ b/arch/mips/include/asm/spinlock.h
@@ -140,7 +140,7 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
 static inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	unsigned int serving_now = lock->h.serving_now + 1;
-	wmb();
+	smp_mb();
 	lock->h.serving_now = (u16)serving_now;
 	nudge_writes();
 }