All of lore.kernel.org
* [PATCH 1/2] locking/mutex: remove rcu_read_lock/unlock as we already disabled preemption
@ 2021-09-26 10:16 Yanfei Xu
  2021-09-26 10:16 ` [PATCH 2/2] locking/rwsem: Use rcu_read_lock_sched to simplify codes Yanfei Xu
  2021-09-26 19:16 ` [PATCH 1/2] locking/mutex: remove rcu_read_lock/unlock as we already disabled preemption Waiman Long
  0 siblings, 2 replies; 7+ messages in thread
From: Yanfei Xu @ 2021-09-26 10:16 UTC (permalink / raw)
  To: peterz, mingo, will, longman, boqun.feng; +Cc: linux-kernel

preempt_disable/enable() is equivalent to an RCU read-side critical
section, and the mutex lock slowpath disables preemption throughout
the entire slowpath. Let's remove the rcu_read_lock/unlock to save
some cycles in this hot path.
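
The argument above can be sketched as follows. This is a simplified,
hypothetical sketch of the resulting spin loop, not the literal patched
code from kernel/locking/mutex.c; the claim that a preempt-disabled
region doubles as an RCU read-side critical section relies on the
consolidated RCU flavors (roughly v4.20 and later):

```c
/*
 * Sketch only: simplified shape of mutex_spin_on_owner() after this
 * patch.  Preemption is disabled for the whole spin, and on
 * consolidated-RCU kernels that already constitutes an RCU read-side
 * critical section, so owner's task_struct cannot be freed under us
 * and the explicit rcu_read_lock/unlock pair is redundant.
 */
preempt_disable();		/* acts as an RCU-sched read-side section */
while (__mutex_owner(lock) == owner) {
	barrier();		/* load owner->on_cpu only after the owner check */
	if (!owner->on_cpu || need_resched()) {
		ret = false;
		break;
	}
	cpu_relax();
}
preempt_enable();		/* CPU may again pass through a quiescent state */
```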

Signed-off-by: Yanfei Xu <yanfei.xu@windriver.com>
---
 kernel/locking/mutex.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d456579d0952..54658c5db7d0 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -348,13 +348,13 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 {
 	bool ret = true;
 
-	rcu_read_lock();
 	while (__mutex_owner(lock) == owner) {
 		/*
 		 * Ensure we emit the owner->on_cpu, dereference _after_
-		 * checking lock->owner still matches owner. If that fails,
-		 * owner might point to freed memory. If it still matches,
-		 * the rcu_read_lock() ensures the memory stays valid.
+		 * checking lock->owner still matches owner. We have already
+		 * disabled preemption, which is equivalent to an RCU read-side
+		 * critical section throughout the entire mutex lock slowpath,
+		 * so the memory stays valid.
 		 */
 		barrier();
 
@@ -374,7 +374,6 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 
 		cpu_relax();
 	}
-	rcu_read_unlock();
 
 	return ret;
 }
-- 
2.27.0



* [PATCH 2/2] locking/rwsem: Use rcu_read_lock_sched to simplify codes
  2021-09-26 10:16 [PATCH 1/2] locking/mutex: remove rcu_read_lock/unlock as we already disabled preemption Yanfei Xu
@ 2021-09-26 10:16 ` Yanfei Xu
  2021-09-26 19:22   ` Waiman Long
  2021-09-26 19:16 ` [PATCH 1/2] locking/mutex: remove rcu_read_lock/unlock as we already disabled preemption Waiman Long
  1 sibling, 1 reply; 7+ messages in thread
From: Yanfei Xu @ 2021-09-26 10:16 UTC (permalink / raw)
  To: peterz, mingo, will, longman, boqun.feng; +Cc: linux-kernel

Use rcu_read_lock_sched() to simplify the code; it also saves some
cycles by avoiding the RCU nesting counter handling.
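
For reference, a simplified sketch of what the sched variants expand to
(modeled on include/linux/rcupdate.h with the lockdep and sparse
annotations elided; check the exact kernel version for the real
definitions):

```c
static inline void rcu_read_lock_sched(void)
{
	preempt_disable();
	/* the real kernel also does __acquire(RCU_SCHED) and lockdep bookkeeping */
}

static inline void rcu_read_unlock_sched(void)
{
	/* the real kernel releases the lockdep/sparse annotations first */
	preempt_enable();
}
```

Unlike rcu_read_lock() on CONFIG_PREEMPT_RCU kernels, which maintains a
per-task nesting counter, the sched variant reduces to
preempt_disable/enable, which is the cycle saving the changelog refers
to.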

Signed-off-by: Yanfei Xu <yanfei.xu@windriver.com>
---
 kernel/locking/rwsem.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 000e8d5a2884..7afadfe02286 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -616,8 +616,7 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 		return false;
 	}
 
-	preempt_disable();
-	rcu_read_lock();
+	rcu_read_lock_sched();
 	owner = rwsem_owner_flags(sem, &flags);
 	/*
 	 * Don't check the read-owner as the entry may be stale.
@@ -625,8 +624,7 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 	if ((flags & RWSEM_NONSPINNABLE) ||
 	    (owner && !(flags & RWSEM_READER_OWNED) && !owner_on_cpu(owner)))
 		ret = false;
-	rcu_read_unlock();
-	preempt_enable();
+	rcu_read_unlock_sched();
 
 	lockevent_cond_inc(rwsem_opt_fail, !ret);
 	return ret;
-- 
2.27.0



* Re: [PATCH 1/2] locking/mutex: remove rcu_read_lock/unlock as we already disabled preemption
  2021-09-26 10:16 [PATCH 1/2] locking/mutex: remove rcu_read_lock/unlock as we already disabled preemption Yanfei Xu
  2021-09-26 10:16 ` [PATCH 2/2] locking/rwsem: Use rcu_read_lock_sched to simplify codes Yanfei Xu
@ 2021-09-26 19:16 ` Waiman Long
  2021-09-27  0:46   ` Waiman Long
  1 sibling, 1 reply; 7+ messages in thread
From: Waiman Long @ 2021-09-26 19:16 UTC (permalink / raw)
  To: Yanfei Xu, peterz, mingo, will, boqun.feng; +Cc: linux-kernel

On 9/26/21 6:16 AM, Yanfei Xu wrote:
> preempt_disable/enable() is equivalent to an RCU read-side critical
> section, and the mutex lock slowpath disables preemption throughout
> the entire slowpath. Let's remove the rcu_read_lock/unlock to save
> some cycles in this hot path.

The description is wrong. Preemption is disabled only in the optimistic 
spinning code, which is not the complete slowpath. Even though it may 
sound reasonable that disabling preemption is likely to prevent reaching 
a quiescent state, I am not totally sure that will always be the case 
as there are different RCU flavors.

Cheers,
Longman



* Re: [PATCH 2/2] locking/rwsem: Use rcu_read_lock_sched to simplify codes
  2021-09-26 10:16 ` [PATCH 2/2] locking/rwsem: Use rcu_read_lock_sched to simplify codes Yanfei Xu
@ 2021-09-26 19:22   ` Waiman Long
  2021-09-27 16:41     ` Xu, Yanfei
  0 siblings, 1 reply; 7+ messages in thread
From: Waiman Long @ 2021-09-26 19:22 UTC (permalink / raw)
  To: Yanfei Xu, peterz, mingo, will, boqun.feng; +Cc: linux-kernel

On 9/26/21 6:16 AM, Yanfei Xu wrote:
> Use rcu_read_lock_sched() to simplify the code; it also saves some
> cycles by avoiding the RCU nesting counter handling.
>
> Signed-off-by: Yanfei Xu <yanfei.xu@windriver.com>
> ---
>   kernel/locking/rwsem.c | 6 ++----
>   1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
> index 000e8d5a2884..7afadfe02286 100644
> --- a/kernel/locking/rwsem.c
> +++ b/kernel/locking/rwsem.c
> @@ -616,8 +616,7 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
>   		return false;
>   	}
>   
> -	preempt_disable();
> -	rcu_read_lock();
> +	rcu_read_lock_sched();
>   	owner = rwsem_owner_flags(sem, &flags);
>   	/*
>   	 * Don't check the read-owner as the entry may be stale.
> @@ -625,8 +624,7 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
>   	if ((flags & RWSEM_NONSPINNABLE) ||
>   	    (owner && !(flags & RWSEM_READER_OWNED) && !owner_on_cpu(owner)))
>   		ret = false;
> -	rcu_read_unlock();
> -	preempt_enable();
> +	rcu_read_unlock_sched();
>   
>   	lockevent_cond_inc(rwsem_opt_fail, !ret);
>   	return ret;

I don't think there is any performance gain with this change. I would 
prefer the original code, which is more readable, as some people may 
not know that rcu_read_lock_sched() disables preemption unless they 
look into it.

Cheers,
Longman



* Re: [PATCH 1/2] locking/mutex: remove rcu_read_lock/unlock as we already disabled preemption
  2021-09-26 19:16 ` [PATCH 1/2] locking/mutex: remove rcu_read_lock/unlock as we already disabled preemption Waiman Long
@ 2021-09-27  0:46   ` Waiman Long
  2021-09-27  9:28     ` Xu, Yanfei
  0 siblings, 1 reply; 7+ messages in thread
From: Waiman Long @ 2021-09-27  0:46 UTC (permalink / raw)
  To: Yanfei Xu, peterz, mingo, will, boqun.feng; +Cc: linux-kernel

On 9/26/21 3:16 PM, Waiman Long wrote:
> On 9/26/21 6:16 AM, Yanfei Xu wrote:
>> preempt_disable/enable() is equivalent to an RCU read-side critical
>> section, and the mutex lock slowpath disables preemption throughout
>> the entire slowpath. Let's remove the rcu_read_lock/unlock to save
>> some cycles in this hot path.
>
> The description is wrong. Preemption is disabled only in the 
> optimistic spinning code, which is not the complete slowpath. Even 
> though it may sound reasonable that disabling preemption is likely to 
> prevent reaching a quiescent state, I am not totally sure that will 
> always be the case as there are different RCU flavors.

It does look like disabling preemption can serve as a substitute for 
rcu_read_lock(). However, I would suggest that you also insert a comment 
saying so, noting that the task structure won't go away during the 
spinning period.

Cheers,
Longman



* Re: [PATCH 1/2] locking/mutex: remove rcu_read_lock/unlock as we already disabled preemption
  2021-09-27  0:46   ` Waiman Long
@ 2021-09-27  9:28     ` Xu, Yanfei
  0 siblings, 0 replies; 7+ messages in thread
From: Xu, Yanfei @ 2021-09-27  9:28 UTC (permalink / raw)
  To: Waiman Long, peterz, mingo, will, boqun.feng; +Cc: linux-kernel



On 9/27/21 8:46 AM, Waiman Long wrote:
> On 9/26/21 3:16 PM, Waiman Long wrote:
>> On 9/26/21 6:16 AM, Yanfei Xu wrote:
>>> preempt_disable/enable() is equivalent to an RCU read-side critical
>>> section, and the mutex lock slowpath disables preemption throughout
>>> the entire slowpath. Let's remove the rcu_read_lock/unlock to save
>>> some cycles in this hot path.
>>
>> The description is wrong. Preemption is disabled only in the
>> optimistic spinning code, which is not the complete slowpath. Even
>> though it may sound reasonable that disabling preemption is likely to
>> prevent reaching a quiescent state, I am not totally sure that will
>> always be the case as there are different RCU flavors.
> 
> It does look like disabling preemption can serve as a substitute for
> rcu_read_lock(). However, I would suggest that you also insert a
> comment saying so, noting that the task structure won't go away
> during the spinning period.
> 

Will send v2, thanks.

Yanfei

> Cheers,
> Longman
> 


* Re: [PATCH 2/2] locking/rwsem: Use rcu_read_lock_sched to simplify codes
  2021-09-26 19:22   ` Waiman Long
@ 2021-09-27 16:41     ` Xu, Yanfei
  0 siblings, 0 replies; 7+ messages in thread
From: Xu, Yanfei @ 2021-09-27 16:41 UTC (permalink / raw)
  To: Waiman Long, peterz, mingo, will, boqun.feng; +Cc: linux-kernel



On 9/27/21 3:22 AM, Waiman Long wrote:
> On 9/26/21 6:16 AM, Yanfei Xu wrote:
>> Use rcu_read_lock_sched() to simplify the code; it also saves some
>> cycles by avoiding the RCU nesting counter handling.
>>
>> Signed-off-by: Yanfei Xu <yanfei.xu@windriver.com>
>> ---
>>   kernel/locking/rwsem.c | 6 ++----
>>   1 file changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
>> index 000e8d5a2884..7afadfe02286 100644
>> --- a/kernel/locking/rwsem.c
>> +++ b/kernel/locking/rwsem.c
>> @@ -616,8 +616,7 @@ static inline bool rwsem_can_spin_on_owner(struct 
>> rw_semaphore *sem)
>>               return false;
>>       }
>>
>> -     preempt_disable();
>> -     rcu_read_lock();
>> +     rcu_read_lock_sched();
>>       owner = rwsem_owner_flags(sem, &flags);
>>       /*
>>        * Don't check the read-owner as the entry may be stale.
>> @@ -625,8 +624,7 @@ static inline bool rwsem_can_spin_on_owner(struct 
>> rw_semaphore *sem)
>>       if ((flags & RWSEM_NONSPINNABLE) ||
>>           (owner && !(flags & RWSEM_READER_OWNED) && 
>> !owner_on_cpu(owner)))
>>               ret = false;
>> -     rcu_read_unlock();
>> -     preempt_enable();
>> +     rcu_read_unlock_sched();
>>
>>       lockevent_cond_inc(rwsem_opt_fail, !ret);
>>       return ret;
> 
> I don't think there is any performance gain with this change. I would
> prefer the original code, which is more readable, as some people may
> not know that rcu_read_lock_sched() disables preemption unless they
> look into it.
> 

OK, thanks for comments.

Yanfei

> Cheers,
> Longman
> 


