From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754459Ab3JBWio (ORCPT ); Wed, 2 Oct 2013 18:38:44 -0400
Received: from mga09.intel.com ([134.134.136.24]:21931 "EHLO mga09.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754404Ab3JBWin (ORCPT ); Wed, 2 Oct 2013 18:38:43 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.90,1022,1371106800"; d="scan'208";a="386975902"
Subject: [PATCH v8 7/9] MCS Lock: Barrier corrections
From: Tim Chen
To: Ingo Molnar, Andrew Morton
Cc: Linus Torvalds, Andrea Arcangeli, Alex Shi, Andi Kleen,
	Michel Lespinasse, Davidlohr Bueso, Matthew R Wilcox, Dave Hansen,
	Peter Zijlstra, Rik van Riel, Peter Hurley, "Paul E.McKenney",
	Tim Chen, Jason Low, Waiman Long,
	linux-kernel@vger.kernel.org, linux-mm
In-Reply-To: 
References: 
Content-Type: text/plain; charset="UTF-8"
Date: Wed, 02 Oct 2013 15:38:38 -0700
Message-ID: <1380753518.11046.89.camel@schen9-DESK>
Mime-Version: 1.0
X-Mailer: Evolution 2.32.3 (2.32.3-1.fc14)
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

This patch corrects the way memory barriers are used in the MCS lock
and removes ones that are not needed. Also add comments on all barriers.

Signed-off-by: Jason Low
---
 include/linux/mcs_spinlock.h |   13 +++++++++++--
 1 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/mcs_spinlock.h b/include/linux/mcs_spinlock.h
index 96f14299..93d445d 100644
--- a/include/linux/mcs_spinlock.h
+++ b/include/linux/mcs_spinlock.h
@@ -36,16 +36,19 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
 	node->locked = 0;
 	node->next = NULL;
 
+	/* xchg() provides a memory barrier */
 	prev = xchg(lock, node);
 	if (likely(prev == NULL)) {
 		/* Lock acquired */
 		return;
 	}
 	ACCESS_ONCE(prev->next) = node;
-	smp_wmb();
 	/* Wait until the lock holder passes the lock down */
 	while (!ACCESS_ONCE(node->locked))
 		arch_mutex_cpu_relax();
+
+	/* Make sure subsequent operations happen after the lock is acquired */
+	smp_rmb();
 }
 
 /*
@@ -58,6 +61,7 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *nod
 
 	if (likely(!next)) {
 		/*
+		 * cmpxchg() provides a memory barrier.
 		 * Release the lock by setting it to NULL
 		 */
 		if (likely(cmpxchg(lock, node, NULL) == node))
@@ -65,9 +69,14 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *nod
 		/* Wait until the next pointer is set */
 		while (!(next = ACCESS_ONCE(node->next)))
 			arch_mutex_cpu_relax();
+	} else {
+		/*
+		 * Make sure all operations within the critical section
+		 * happen before the lock is released.
+		 */
+		smp_wmb();
 	}
 	ACCESS_ONCE(next->locked) = 1;
-	smp_wmb();
 }
 
 #endif /* __LINUX_MCS_SPINLOCK_H */
-- 
1.7.4.4
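
For readers mapping the kernel primitives above onto a more familiar model: the
barrier placement the patch arrives at is the usual acquire-on-lock /
release-on-unlock pattern. The sketch below is a rough user-space analogue in
C11 atomics, not part of the patch; the type and function names (mcs_node,
mcs_lock_t, mcs_lock, mcs_unlock) are invented for illustration, and
release/acquire ordering on the ->next pointer is used where the kernel code
instead relies on the full barriers implied by xchg() and cmpxchg().

/* Illustrative user-space sketch only; not the kernel implementation. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;
};

typedef _Atomic(struct mcs_node *) mcs_lock_t;

static void mcs_lock(mcs_lock_t *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store_explicit(&node->locked, false, memory_order_relaxed);
	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);

	/* Swap ourselves in as the new tail; plays the role of xchg(). */
	prev = atomic_exchange(lock, node);
	if (prev == NULL)
		return;			/* no contention: lock acquired */

	/*
	 * Publish our node to the previous tail.  release/acquire on ->next
	 * stands in for the ordering the kernel gets from the full barriers
	 * implied by xchg()/cmpxchg().
	 */
	atomic_store_explicit(&prev->next, node, memory_order_release);

	/*
	 * Spin until the owner hands the lock down; the acquire load is the
	 * C11 counterpart of the smp_rmb() the patch adds after this loop.
	 */
	while (!atomic_load_explicit(&node->locked, memory_order_acquire))
		;			/* cpu_relax() would go here */
}

static void mcs_unlock(mcs_lock_t *lock, struct mcs_node *node)
{
	struct mcs_node *next =
		atomic_load_explicit(&node->next, memory_order_acquire);

	if (next == NULL) {
		/* Still the tail: release the lock; plays the role of cmpxchg(). */
		struct mcs_node *expected = node;
		if (atomic_compare_exchange_strong(lock, &expected, NULL))
			return;		/* no waiter: lock released */

		/* A waiter raced in; wait for it to link itself behind us. */
		while (!(next = atomic_load_explicit(&node->next,
						     memory_order_acquire)))
			;
	}

	/*
	 * Hand the lock to the next waiter.  The release store is the C11
	 * counterpart of the smp_wmb() the patch issues before the handoff.
	 */
	atomic_store_explicit(&next->locked, true, memory_order_release);
}

Usage follows the kernel callers: each contender passes its own struct
mcs_node to mcs_lock() and the same node to the matching mcs_unlock(), so the
queue of waiters is threaded through per-CPU (or per-thread) nodes rather than
through the lock word itself.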