* [PATCH] sched: optimize __cond_resched_lock()
@ 2021-12-21  7:23 xuhaifeng
  2021-12-21  8:52 ` Peter Zijlstra
  0 siblings, 1 reply; 6+ messages in thread
From: xuhaifeng @ 2021-12-21  7:23 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot
  Cc: linux-kernel, xuhaifeng

If the kernel is preemptible (CONFIG_PREEMPTION=y), schedule() may be
called twice: once via spin_unlock(), once via preempt_schedule_common().

We can add one more conditional that checks the TIF_NEED_RESCHED flag
again, to avoid this.

Signed-off-by: xuhaifeng <xuhaifeng@oppo.com>
---
 kernel/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 83872f95a1ea..fb011e613497 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8219,7 +8219,7 @@ int __cond_resched_lock(spinlock_t *lock)

        if (spin_needbreak(lock) || resched) {
                spin_unlock(lock);
-               if (resched)
+               if (resched && need_resched())
                        preempt_schedule_common();
                else
                        cpu_relax();
--
2.17.1
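
For context, the double preemption being fixed arises like this on a
preemptible kernel. The call flow below is a sketch, not verbatim kernel
code:

	/*
	 * Under CONFIG_PREEMPTION, with TIF_NEED_RESCHED set when the
	 * lock is dropped:
	 *
	 * __cond_resched_lock(lock)
	 *   spin_unlock(lock)
	 *     preempt_enable()
	 *       preempt_schedule()          // first pass through __schedule()
	 *   if (resched)
	 *     preempt_schedule_common()     // redundant second pass
	 */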

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH] sched: optimize __cond_resched_lock()
  2021-12-21  7:23 [PATCH] sched: optimize __cond_resched_lock() xuhaifeng
@ 2021-12-21  8:52 ` Peter Zijlstra
  2021-12-21 10:09   ` Peter Zijlstra
  0 siblings, 1 reply; 6+ messages in thread
From: Peter Zijlstra @ 2021-12-21  8:52 UTC (permalink / raw)
  To: xuhaifeng
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, linux-kernel

On Tue, Dec 21, 2021 at 03:23:16PM +0800, xuhaifeng wrote:
> If the kernel is preemptible (CONFIG_PREEMPTION=y), schedule() may be
> called twice: once via spin_unlock(), once via preempt_schedule_common().
> 
> We can add one more conditional that checks the TIF_NEED_RESCHED flag
> again, to avoid this.

You can also make it more similar to __cond_resched() instead of making
it more different.
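
For reference, __cond_resched() in this era of kernel/sched/core.c looks
roughly like the sketch below (RCU quiescent-state reporting elided); it
is only built when the kernel is not fully preemptible:

	int __sched __cond_resched(void)
	{
		if (should_resched(0)) {
			/* Drops into __schedule() and loops while need_resched(). */
			preempt_schedule_common();
			return 1;
		}
		return 0;
	}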

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] sched: optimize __cond_resched_lock()
  2021-12-21  8:52 ` Peter Zijlstra
@ 2021-12-21 10:09   ` Peter Zijlstra
  2021-12-21 12:01     ` Peter Zijlstra
                       ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Peter Zijlstra @ 2021-12-21 10:09 UTC (permalink / raw)
  To: xuhaifeng
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, linux-kernel, Frederic Weisbecker

On Tue, Dec 21, 2021 at 09:52:28AM +0100, Peter Zijlstra wrote:
> On Tue, Dec 21, 2021 at 03:23:16PM +0800, xuhaifeng wrote:
> > If the kernel is preemptible (CONFIG_PREEMPTION=y), schedule() may be
> > called twice: once via spin_unlock(), once via preempt_schedule_common().
> > 
> > We can add one more conditional that checks the TIF_NEED_RESCHED flag
> > again, to avoid this.
> 
> You can also make it more similar to __cond_resched() instead of making
> it more different.

Bah, sorry, had to wake up first :/

cond_resched_lock still needs to exist for PREEMPT because locks won't
magically release themselves.

Still don't much like the patch though, how's this work for you?

That's arguably the right thing to do for PREEMPT_DYNAMIC too.

---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 83872f95a1ea..79d3d5e15c4c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8192,6 +8192,11 @@ int __sched __cond_resched(void)
 	return 0;
 }
 EXPORT_SYMBOL(__cond_resched);
+#else
+static inline int __cond_resched(void)
+{
+	return 0;
+}
 #endif
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
@@ -8219,9 +8224,7 @@ int __cond_resched_lock(spinlock_t *lock)
 
 	if (spin_needbreak(lock) || resched) {
 		spin_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!__cond_resched())
 			cpu_relax();
 		ret = 1;
 		spin_lock(lock);
@@ -8239,9 +8242,7 @@ int __cond_resched_rwlock_read(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		read_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!__cond_resched())
 			cpu_relax();
 		ret = 1;
 		read_lock(lock);
@@ -8259,9 +8260,7 @@ int __cond_resched_rwlock_write(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		write_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!__cond_resched())
 			cpu_relax();
 		ret = 1;
 		write_lock(lock);
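
With the #else stub above, the rewritten branch reduces as follows (a
sketch of the intent):

	/*
	 * "if (!__cond_resched()) cpu_relax();"
	 *
	 * CONFIG_PREEMPTION=y: __cond_resched() is the stub returning 0,
	 *   so we only cpu_relax(); the unlock above already rescheduled.
	 * otherwise: __cond_resched() reschedules when TIF_NEED_RESCHED
	 *   is set and returns 1; if not, it returns 0 and we cpu_relax().
	 */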

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH] sched: optimize __cond_resched_lock()
  2021-12-21 10:09   ` Peter Zijlstra
@ 2021-12-21 12:01     ` Peter Zijlstra
  2021-12-21 13:30     ` xuhaifeng
  2022-01-18 11:18     ` [tip: sched/urgent] sched: Avoid double preemption in __cond_resched_*lock*() tip-bot2 for Peter Zijlstra
  2 siblings, 0 replies; 6+ messages in thread
From: Peter Zijlstra @ 2021-12-21 12:01 UTC (permalink / raw)
  To: xuhaifeng
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, linux-kernel, Frederic Weisbecker

On Tue, Dec 21, 2021 at 11:09:00AM +0100, Peter Zijlstra wrote:
> On Tue, Dec 21, 2021 at 09:52:28AM +0100, Peter Zijlstra wrote:
> > On Tue, Dec 21, 2021 at 03:23:16PM +0800, xuhaifeng wrote:
> > > If the kernel is preemptible (CONFIG_PREEMPTION=y), schedule() may be
> > > called twice: once via spin_unlock(), once via preempt_schedule_common().
> > > 
> > > We can add one more conditional that checks the TIF_NEED_RESCHED flag
> > > again, to avoid this.
> > 
> > You can also make it more similar to __cond_resched() instead of making
> > it more different.
> 
> Bah, sorry, had to wake up first :/
> 
> cond_resched_lock still needs to exist for PREEMPT because locks won't
> magically release themselves.
> 
> Still don't much like the patch though, how's this work for you?
> 
> That's arguably the right thing to do for PREEMPT_DYNAMIC too.

Duh, those would need cond_resched() proper, clearly I should just give
up and call it a holiday already..
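
The wrinkle: under CONFIG_PREEMPT_DYNAMIC, cond_resched() is a static
call that gets patched to a return-0 stub when full preemption is
selected at boot. Calling __cond_resched() directly bypasses that
patching and can reintroduce the double preemption. Roughly, from
include/linux/sched.h of this era (a sketch, not verbatim):

	#ifdef CONFIG_PREEMPT_DYNAMIC
	DECLARE_STATIC_CALL(cond_resched, __cond_resched);

	static __always_inline int _cond_resched(void)
	{
		/* Patched to __static_call_return0 on fully preemptible boots. */
		return static_call_mod(cond_resched)();
	}
	#endif

Hence the eventual fix below uses _cond_resched(), which goes through
the static call.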

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] sched: optimize __cond_resched_lock()
  2021-12-21 10:09   ` Peter Zijlstra
  2021-12-21 12:01     ` Peter Zijlstra
@ 2021-12-21 13:30     ` xuhaifeng
  2022-01-18 11:18     ` [tip: sched/urgent] sched: Avoid double preemption in __cond_resched_*lock*() tip-bot2 for Peter Zijlstra
  2 siblings, 0 replies; 6+ messages in thread
From: xuhaifeng @ 2021-12-21 13:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, linux-kernel

Thanks for your review and suggestion.
It doesn't work if CONFIG_PREEMPTION=y and CONFIG_PREEMPT_DYNAMIC=y.
Can I change the patch like this?
---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 83872f95a1ea..9b1e42f8ee60 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8202,6 +8202,15 @@ DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
 EXPORT_STATIC_CALL_TRAMP(might_resched);
 #endif

+static inline int cond_resched_preempt(void)
+{
+#ifdef CONFIG_PREEMPTION
+       return 0;
+#else
+       return __cond_resched();
+#endif
+}
+
 /*
  * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
  * call schedule, and on return reacquire the lock.
@@ -8219,9 +8228,7 @@ int __cond_resched_lock(spinlock_t *lock)

        if (spin_needbreak(lock) || resched) {
                spin_unlock(lock);
-               if (resched)
-                       preempt_schedule_common();
-               else
+               if (!cond_resched_preempt())
                        cpu_relax();
                ret = 1;
                spin_lock(lock);
@@ -8239,9 +8246,7 @@ int __cond_resched_rwlock_read(rwlock_t *lock)

        if (rwlock_needbreak(lock) || resched) {
                read_unlock(lock);
-               if (resched)
-                       preempt_schedule_common();
-               else
+               if (!cond_resched_preempt())
                        cpu_relax();
                ret = 1;
                read_lock(lock);
@@ -8259,9 +8264,7 @@ int __cond_resched_rwlock_write(rwlock_t *lock)

        if (rwlock_needbreak(lock) || resched) {
                write_unlock(lock);
-               if (resched)
-                       preempt_schedule_common();
-               else
+               if (!cond_resched_preempt())
                        cpu_relax();
                ret = 1;
                write_lock(lock);

On 12/21/2021 6:09 PM, Peter Zijlstra wrote:
> On Tue, Dec 21, 2021 at 09:52:28AM +0100, Peter Zijlstra wrote:
>> On Tue, Dec 21, 2021 at 03:23:16PM +0800, xuhaifeng wrote:
>>> If the kernel is preemptible (CONFIG_PREEMPTION=y), schedule() may be
>>> called twice: once via spin_unlock(), once via preempt_schedule_common().
>>>
>>> We can add one more conditional that checks the TIF_NEED_RESCHED flag
>>> again, to avoid this.
>>
>> You can also make it more similar to __cond_resched() instead of making
>> it more different.
>
> Bah, sorry, had to wake up first :/
>
> cond_resched_lock still needs to exist for PREEMPT because locks won't
> magically release themselves.
>
> Still don't much like the patch though, how's this work for you?
>
> That's arguably the right thing to do for PREEMPT_DYNAMIC too.
>
> ---
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 83872f95a1ea..79d3d5e15c4c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -8192,6 +8192,11 @@ int __sched __cond_resched(void)
>       return 0;
>  }
>  EXPORT_SYMBOL(__cond_resched);
> +#else
> +static inline int __cond_resched(void)
> +{
> +     return 0;
> +}
>  #endif
>
>  #ifdef CONFIG_PREEMPT_DYNAMIC
> @@ -8219,9 +8224,7 @@ int __cond_resched_lock(spinlock_t *lock)
>
>       if (spin_needbreak(lock) || resched) {
>               spin_unlock(lock);
> -             if (resched)
> -                     preempt_schedule_common();
> -             else
> +             if (!__cond_resched())
>                       cpu_relax();
>               ret = 1;
>               spin_lock(lock);
> @@ -8239,9 +8242,7 @@ int __cond_resched_rwlock_read(rwlock_t *lock)
>
>       if (rwlock_needbreak(lock) || resched) {
>               read_unlock(lock);
> -             if (resched)
> -                     preempt_schedule_common();
> -             else
> +             if (!__cond_resched())
>                       cpu_relax();
>               ret = 1;
>               read_lock(lock);
> @@ -8259,9 +8260,7 @@ int __cond_resched_rwlock_write(rwlock_t *lock)
>
>       if (rwlock_needbreak(lock) || resched) {
>               write_unlock(lock);
> -             if (resched)
> -                     preempt_schedule_common();
> -             else
> +             if (!__cond_resched())
>                       cpu_relax();
>               ret = 1;
>               write_lock(lock);

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [tip: sched/urgent] sched: Avoid double preemption in __cond_resched_*lock*()
  2021-12-21 10:09   ` Peter Zijlstra
  2021-12-21 12:01     ` Peter Zijlstra
  2021-12-21 13:30     ` xuhaifeng
@ 2022-01-18 11:18     ` tip-bot2 for Peter Zijlstra
  2 siblings, 0 replies; 6+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2022-01-18 11:18 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: xuhaifeng, Peter Zijlstra (Intel), x86, linux-kernel

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID:     7e406d1ff39b8ee574036418a5043c86723170cf
Gitweb:        https://git.kernel.org/tip/7e406d1ff39b8ee574036418a5043c86723170cf
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Sat, 25 Dec 2021 01:04:57 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 18 Jan 2022 12:09:59 +01:00

sched: Avoid double preemption in __cond_resched_*lock*()

For PREEMPT/PREEMPT_DYNAMIC the *_unlock() will already trigger a
preemption, no point in then calling preempt_schedule_common()
*again*.

Use _cond_resched() instead, since this is a NOP for the preemptible
configs while it provides a preemption point for the others.

Reported-by: xuhaifeng <xuhaifeng@oppo.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/YcGnvDEYBwOiV0cR@hirez.programming.kicks-ass.net
---
 kernel/sched/core.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0d2ab2a..56b428c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8218,9 +8218,7 @@ int __cond_resched_lock(spinlock_t *lock)
 
 	if (spin_needbreak(lock) || resched) {
 		spin_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!_cond_resched())
 			cpu_relax();
 		ret = 1;
 		spin_lock(lock);
@@ -8238,9 +8236,7 @@ int __cond_resched_rwlock_read(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		read_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!_cond_resched())
 			cpu_relax();
 		ret = 1;
 		read_lock(lock);
@@ -8258,9 +8254,7 @@ int __cond_resched_rwlock_write(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		write_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!_cond_resched())
 			cpu_relax();
 		ret = 1;
 		write_lock(lock);
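
After this commit, __cond_resched_lock() reads roughly as follows (a
sketch of the merged result; the rwlock variants are analogous):

	int __cond_resched_lock(spinlock_t *lock)
	{
		int resched = should_resched(PREEMPT_LOCK_OFFSET);
		int ret = 0;

		lockdep_assert_held(lock);

		if (spin_needbreak(lock) || resched) {
			spin_unlock(lock);
			/* NOP on preemptible configs; unlock already preempted. */
			if (!_cond_resched())
				cpu_relax();
			ret = 1;
			spin_lock(lock);
		}
		return ret;
	}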

^ permalink raw reply related	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2022-01-18 11:18 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-12-21  7:23 [PATCH] sched: optimize __cond_resched_lock() xuhaifeng
2021-12-21  8:52 ` Peter Zijlstra
2021-12-21 10:09   ` Peter Zijlstra
2021-12-21 12:01     ` Peter Zijlstra
2021-12-21 13:30     ` xuhaifeng
2022-01-18 11:18     ` [tip: sched/urgent] sched: Avoid double preemption in __cond_resched_*lock*() tip-bot2 for Peter Zijlstra
