Subject: Re: [PATCH] locking/rwsem: simplify the is-owner-spinnable checks
To: Oleg Nesterov, Ingo Molnar
Cc: Matthew Wilcox, Ingo Molnar, Peter Zijlstra, Thomas Gleixner,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	Davidlohr Bueso, "Theodore Y. Ts'o", Amir Goldstein, Jan Kara
References: <1526420991-21213-1-git-send-email-longman@redhat.com>
 <1526420991-21213-2-git-send-email-longman@redhat.com>
 <20180516121947.GE20670@bombadil.infradead.org>
 <20180518070258.GA20971@gmail.com>
 <20180518084122.GA14307@redhat.com>
 <20180518094052.GA26150@gmail.com>
 <20180518165534.GA22348@redhat.com>
From: Waiman Long
Date: Fri, 18 May 2018 13:00:11 -0400
MIME-Version: 1.0
In-Reply-To: <20180518165534.GA22348@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
Sender: linux-kernel-owner@vger.kernel.org

On 05/18/2018 12:55 PM, Oleg Nesterov wrote:
> Add the trivial owner_on_cpu() helper for rwsem_can_spin_on_owner() and
> rwsem_spin_on_owner(), it also allows to make rwsem_can_spin_on_owner()
> a bit more clear.
>
> Signed-off-by: Oleg Nesterov
> ---
>  kernel/locking/rwsem-xadd.c | 25 +++++++++++++------------
>  1 file changed, 13 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index a903367..3064c50 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -347,6 +347,15 @@ static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
>  	}
>  }
>  
> +static inline bool owner_on_cpu(struct task_struct *owner)
> +{
> +	/*
> +	 * As lock holder preemption issue, we both skip spinning if
> +	 * task is not on cpu or its cpu is preempted
> +	 */
> +	return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
> +}
> +
>  static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
>  {
>  	struct task_struct *owner;
> @@ -359,17 +368,10 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
>  
>  	rcu_read_lock();
>  	owner = READ_ONCE(sem->owner);
> -	if (!owner || !is_rwsem_owner_spinnable(owner)) {
> -		ret = !owner;	/* !owner is spinnable */
> -		goto done;
> +	if (owner) {
> +		ret = is_rwsem_owner_spinnable(owner) &&
> +		      owner_on_cpu(owner);
>  	}
> -
> -	/*
> -	 * As lock holder preemption issue, we both skip spinning if task is not
> -	 * on cpu or its cpu is preempted
> -	 */
> -	ret = owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
> -done:
>  	rcu_read_unlock();
>  	return ret;
>  }
> @@ -398,8 +400,7 @@ static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem)
>  	 * abort spinning when need_resched or owner is not running or
>  	 * owner's cpu is preempted.
>  	 */
> -	if (!owner->on_cpu || need_resched() ||
> -	    vcpu_is_preempted(task_cpu(owner))) {
> +	if (need_resched() || !owner_on_cpu(owner)) {
>  		rcu_read_unlock();
>  		return false;
>  	}

Acked-by: Waiman Long
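
For reference, the restructuring is a pure simplification: assuming ret is
initialized to true above the first hunk (which the removed "!owner is
spinnable" comment implies), a NULL owner is still treated as spinnable, and a
present owner must be both spinnable and running on a non-preempted CPU. A
quick userspace sketch of that equivalence, using boolean stand-ins for the
kernel predicates (struct mock_owner, old_can_spin() and new_can_spin() are
illustrative names only, not kernel code):

/*
 * Standalone sanity check (userspace, not kernel code): model the old and
 * new bodies of rwsem_can_spin_on_owner() with boolean stand-ins for
 * is_rwsem_owner_spinnable(), owner->on_cpu and vcpu_is_preempted(), and
 * verify they agree for every combination of inputs.
 */
#include <stdbool.h>
#include <stdio.h>

struct mock_owner {
	bool spinnable;		/* stands in for is_rwsem_owner_spinnable(owner) */
	bool on_cpu;		/* stands in for owner->on_cpu */
	bool preempted;		/* stands in for vcpu_is_preempted(task_cpu(owner)) */
};

/* Old logic: early-out with goto, !owner treated as spinnable. */
static bool old_can_spin(const struct mock_owner *owner)
{
	bool ret;

	if (!owner || !owner->spinnable) {
		ret = !owner;	/* !owner is spinnable */
		goto done;
	}
	ret = owner->on_cpu && !owner->preempted;
done:
	return ret;
}

/* New logic: ret starts true; a present owner must pass both checks. */
static bool new_can_spin(const struct mock_owner *owner)
{
	bool ret = true;

	if (owner)
		ret = owner->spinnable && (owner->on_cpu && !owner->preempted);
	return ret;
}

int main(void)
{
	struct mock_owner o;
	int mismatches = 0;

	if (old_can_spin(NULL) != new_can_spin(NULL))
		mismatches++;

	for (int s = 0; s < 2; s++)
	for (int c = 0; c < 2; c++)
	for (int p = 0; p < 2; p++) {
		o.spinnable = s;
		o.on_cpu = c;
		o.preempted = p;
		if (old_can_spin(&o) != new_can_spin(&o))
			mismatches++;
	}
	printf("%d mismatches\n", mismatches);	/* expect 0 */
	return mismatches != 0;
}

Built with gcc, this should report 0 mismatches, i.e. the refactored check is
behaviorally identical to the code it replaces.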