Date: Fri, 18 May 2018 18:55:35 +0200
From: Oleg Nesterov
To: Ingo Molnar
Cc: Matthew Wilcox, Waiman Long, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, Davidlohr Bueso,
	"Theodore Y. Ts'o", Amir Goldstein, Jan Kara
Subject: [PATCH] locking/rwsem: simplify the is-owner-spinnable checks
Message-ID: <20180518165534.GA22348@redhat.com>
References: <1526420991-21213-1-git-send-email-longman@redhat.com>
	<1526420991-21213-2-git-send-email-longman@redhat.com>
	<20180516121947.GE20670@bombadil.infradead.org>
	<20180518070258.GA20971@gmail.com>
	<20180518084122.GA14307@redhat.com>
	<20180518094052.GA26150@gmail.com>
In-Reply-To: <20180518094052.GA26150@gmail.com>

Add the trivial owner_on_cpu() helper for rwsem_can_spin_on_owner() and
rwsem_spin_on_owner(); this also makes rwsem_can_spin_on_owner() a bit
clearer.

Signed-off-by: Oleg Nesterov
---
 kernel/locking/rwsem-xadd.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index a903367..3064c50 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -347,6 +347,15 @@ static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
 	}
 }
 
+static inline bool owner_on_cpu(struct task_struct *owner)
+{
+	/*
+	 * As lock holder preemption issue, we both skip spinning if
+	 * task is not on cpu or its cpu is preempted
+	 */
+	return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
+}
+
 static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 {
 	struct task_struct *owner;
@@ -359,17 +368,10 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 
 	rcu_read_lock();
 	owner = READ_ONCE(sem->owner);
-	if (!owner || !is_rwsem_owner_spinnable(owner)) {
-		ret = !owner; /* !owner is spinnable */
-		goto done;
+	if (owner) {
+		ret = is_rwsem_owner_spinnable(owner) &&
+		      owner_on_cpu(owner);
 	}
-
-	/*
-	 * As lock holder preemption issue, we both skip spinning if task is not
-	 * on cpu or its cpu is preempted
-	 */
-	ret = owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
-done:
 	rcu_read_unlock();
 	return ret;
 }
@@ -398,8 +400,7 @@ static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem)
 		 * abort spinning when need_resched or owner is not running or
 		 * owner's cpu is preempted.
 		 */
-		if (!owner->on_cpu || need_resched() ||
-		    vcpu_is_preempted(task_cpu(owner))) {
+		if (need_resched() || !owner_on_cpu(owner)) {
 			rcu_read_unlock();
 			return false;
 		}
-- 
2.5.0
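
Not part of the patch, just a stand-alone userspace sketch for anyone who wants
to see the before/after decision logic side by side. The struct fields and the
task_cpu()/vcpu_is_preempted()/is_rwsem_owner_spinnable() stubs below are made
up so the file compiles on its own, and can_spin_new() assumes ret starts out
true (that initialization sits outside the hunk context, but the removed
"ret = !owner" line shows that !owner counts as spinnable). Both variants
should agree: spin only when there is no owner, or the owner is spinnable,
currently on a CPU, and that CPU's vCPU is not preempted.

/*
 * Stand-alone userspace model; NOT the kernel code. task_struct, task_cpu(),
 * vcpu_is_preempted() and is_rwsem_owner_spinnable() are stubbed here purely
 * to illustrate the boolean logic of the patch.
 */
#include <stdbool.h>
#include <stdio.h>

struct task_struct {
	bool on_cpu;		/* owner currently running on a CPU */
	int cpu;		/* which CPU it runs on */
	bool reader_owned;	/* models the "not spinnable" owner state */
};

/* Stubs standing in for the real kernel helpers. */
static int task_cpu(const struct task_struct *t)	{ return t->cpu; }
static bool vcpu_is_preempted(int cpu)			{ return cpu == 1; /* pretend vCPU 1 is preempted */ }
static bool is_rwsem_owner_spinnable(const struct task_struct *t) { return !t->reader_owned; }

/*
 * The helper added by the patch: the owner must be on a CPU whose vCPU
 * is not preempted, otherwise spinning on it is pointless.
 */
static bool owner_on_cpu(const struct task_struct *owner)
{
	return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
}

/* Decision logic before the patch (goto flattened into returns). */
static bool can_spin_old(const struct task_struct *owner)
{
	if (!owner || !is_rwsem_owner_spinnable(owner))
		return !owner;	/* !owner is spinnable */
	return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
}

/*
 * Decision logic after the patch: ret defaults to true (no owner is
 * spinnable), otherwise both checks fold into a single expression.
 */
static bool can_spin_new(const struct task_struct *owner)
{
	bool ret = true;

	if (owner)
		ret = is_rwsem_owner_spinnable(owner) && owner_on_cpu(owner);
	return ret;
}

int main(void)
{
	struct task_struct owners[] = {
		{ .on_cpu = true,  .cpu = 0, .reader_owned = false },	/* spin */
		{ .on_cpu = false, .cpu = 0, .reader_owned = false },	/* don't: off cpu */
		{ .on_cpu = true,  .cpu = 1, .reader_owned = false },	/* don't: vcpu preempted */
		{ .on_cpu = true,  .cpu = 0, .reader_owned = true  },	/* don't: not spinnable */
	};

	printf("no owner: old=%d new=%d\n", can_spin_old(NULL), can_spin_new(NULL));
	for (unsigned i = 0; i < sizeof(owners) / sizeof(owners[0]); i++)
		printf("owner %u:  old=%d new=%d\n", i,
		       can_spin_old(&owners[i]), can_spin_new(&owners[i]));
	return 0;
}

Folding both checks into one expression is what lets the new version drop the
goto/done label while keeping the same truth table.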