From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754805AbbDHTjy (ORCPT );
	Wed, 8 Apr 2015 15:39:54 -0400
Received: from g4t3425.houston.hp.com ([15.201.208.53]:41837 "EHLO
	g4t3425.houston.hp.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753729AbbDHTjw (ORCPT );
	Wed, 8 Apr 2015 15:39:52 -0400
From: Jason Low
To: Peter Zijlstra ,
	Ingo Molnar ,
	Linus Torvalds ,
	Davidlohr Bueso ,
	Tim Chen ,
	Aswin Chandramouleeswaran
Cc: LKML ,
	Jason Low
Subject: [PATCH 2/2] locking/rwsem: Use a return variable in rwsem_spin_on_owner()
Date: Wed, 8 Apr 2015 12:39:20 -0700
Message-Id: <1428521960-5268-3-git-send-email-jason.low2@hp.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1428521960-5268-1-git-send-email-jason.low2@hp.com>
References: <1428521960-5268-1-git-send-email-jason.low2@hp.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Ingo suggested for mutex_spin_on_owner() that having multiple return
statements is not the cleanest approach, especially when holding locks.

The same thing applies to the rwsem variant. This patch rewrites much of
this function to use a "ret" return value.

Signed-off-by: Jason Low
---
 kernel/locking/rwsem-xadd.c | 25 ++++++++++++-------------
 1 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 3417d01..b1c9156 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -327,38 +327,37 @@ done:
 static noinline
 bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
 {
-	long count;
+	bool ret = true;
 
 	rcu_read_lock();
 	while (sem->owner == owner) {
 		/*
 		 * Ensure we emit the owner->on_cpu, dereference _after_
-		 * checking sem->owner still matches owner, if that fails,
-		 * owner might point to free()d memory, if it still matches,
+		 * checking sem->owner still matches owner. If that fails,
+		 * owner might point to freed memory. If it still matches,
 		 * the rcu_read_lock() ensures the memory stays valid.
 		 */
 		barrier();
 
-		/* abort spinning when need_resched or owner is not running */
+		/* Abort spinning when need_resched or owner is not running. */
 		if (!owner->on_cpu || need_resched()) {
-			rcu_read_unlock();
-			return false;
+			ret = false;
+			break;
 		}
 
 		cpu_relax_lowlatency();
 	}
 	rcu_read_unlock();
 
-	if (READ_ONCE(sem->owner))
-		return true; /* new owner, continue spinning */
-
 	/*
 	 * When the owner is not set, the lock could be free or
-	 * held by readers. Check the counter to verify the
-	 * state.
+	 * held by readers. Check the counter to verify the state.
 	 */
-	count = READ_ONCE(sem->count);
-	return (count == 0 || count == RWSEM_WAITING_BIAS);
+	if (!READ_ONCE(sem->owner)) {
+		long count = READ_ONCE(sem->count);
+		ret = (count == 0 || count == RWSEM_WAITING_BIAS);
+	}
+	return ret;
 }
 
 static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
-- 
1.7.2.5