From: Manfred Spraul <manfred@colorfullife.com>
To: "H. Peter Anvin", Peter Zijlstra, Andrew Morton, Davidlohr Bueso
Cc: LKML, Thomas Gleixner, Ingo Molnar, 1vier1@web.de,
	felixh@informatik.uni-bremen.de, Manfred Spraul
Subject: [PATCH 2/2] ipc/sem.c: Remove duplicated memory barriers.
Date: Wed, 13 Jul 2016 07:06:52 +0200
Message-Id: <1468386412-3608-3-git-send-email-manfred@colorfullife.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1468386412-3608-2-git-send-email-manfred@colorfullife.com>
References: <1468386412-3608-1-git-send-email-manfred@colorfullife.com>
	<1468386412-3608-2-git-send-email-manfred@colorfullife.com>
X-Mailing-List: linux-kernel@vger.kernel.org

With commit 2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some
more"), memory barriers were added into spin_unlock_wait(), so the extra
barrier after it is no longer required.

And as explained in commit 055ce0fd1b8 ("locking/qspinlock: Add comments"),
spin_lock() provides a barrier so that reads within the critical section
cannot happen before the write of the lock variable is visible; i.e.
spin_lock() provides an acquire barrier after the write of the lock
variable. This barrier pairs with the smp_mb() in complexmode_enter(), so
the explicit smp_mb() after spin_lock() can be removed as well.

Please review! The patch is safe for x86, but I don't know enough about all
architectures that support SMP.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/sem.c | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 0da63c8..d7b4212 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -291,14 +291,6 @@ static void complexmode_enter(struct sem_array *sma)
 		sem = sma->sem_base + i;
 		spin_unlock_wait(&sem->lock);
 	}
-	/*
-	 * spin_unlock_wait() is not a memory barriers, it is only a
-	 * control barrier. The code must pair with spin_unlock(&sem->lock),
-	 * thus just the control barrier is insufficient.
-	 *
-	 * smp_rmb() is sufficient, as writes cannot pass the control barrier.
-	 */
-	smp_rmb();
 }
 
 /*
@@ -363,12 +355,6 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 		 */
 		spin_lock(&sem->lock);
 
-		/*
-		 * A full barrier is required: the write of sem->lock
-		 * must be visible before the read is executed
-		 */
-		smp_mb();
-
 		if (!smp_load_acquire(&sma->complex_mode)) {
 			/* fast path successful! */
 			return sops->sem_num;
-- 
2.5.5
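
P.S.: For reviewers who want to see the ordering requirement in isolation,
below is a small userspace C11 sketch (my own illustration, not the kernel
code) of the store-buffering pattern behind sem_lock()/complexmode_enter():
one side writes the per-semaphore lock and then reads complex_mode, the
other side writes complex_mode and then reads the lock. The seq_cst fences
stand in for the combined ordering that spin_lock()/spin_unlock_wait() and
smp_mb() are argued to provide; every name in the sketch is made up for the
example.

/*
 * Illustration only: userspace analogue of the sem_lock() vs.
 * complexmode_enter() store-buffering pattern. Build with: cc -pthread
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool sem_lock_written;	/* stands in for the write to sem->lock */
static atomic_bool complex_mode;	/* stands in for sma->complex_mode */

/* Simple-op side: write the lock, full barrier, then read complex_mode.
 * The fence models the ordering spin_lock() has to provide. */
static void *simple_op_side(void *unused)
{
	atomic_store_explicit(&sem_lock_written, true, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);
	if (!atomic_load_explicit(&complex_mode, memory_order_relaxed))
		puts("simple op: fast path, complex_mode not set");
	return NULL;
}

/* Complex-op side: write complex_mode, full barrier, then read the lock.
 * The fence models the smp_mb() in complexmode_enter(); the read models
 * spin_unlock_wait(&sem->lock). */
static void *complex_op_side(void *unused)
{
	atomic_store_explicit(&complex_mode, true, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);
	if (atomic_load_explicit(&sem_lock_written, memory_order_relaxed))
		puts("complex op: must wait for the per-semaphore lock holder");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, simple_op_side, NULL);
	pthread_create(&b, NULL, complex_op_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	/*
	 * With a full barrier between the store and the load on both sides,
	 * at least one thread is guaranteed to observe the other's store;
	 * the outcome where both take their fast path, which would break
	 * mutual exclusion, cannot happen.
	 */
	return 0;
}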