* [PATCH 0/2] sched: Spare IPI on single task renice
@ 2019-12-03 16:01 Frederic Weisbecker
  2019-12-03 16:01 ` [PATCH 1/2] sched: Spare resched IPI when prio changes on a single fair task Frederic Weisbecker
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Frederic Weisbecker @ 2019-12-03 16:01 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: LKML, Frederic Weisbecker

A couple of patches to avoid disturbing nohz_full CPUs when a single
fair task gets its priority changed. I should probably check other sched
policies as well...

Frederic Weisbecker (2):
  sched: Spare resched IPI when prio changes on a single fair task
  sched: Use fair:prio_changed() instead of ad-hoc implementation

 kernel/sched/core.c | 16 ++++++++--------
 kernel/sched/fair.c |  2 ++
 2 files changed, 10 insertions(+), 8 deletions(-)

-- 
2.23.0
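
For context: the scenario this series targets is a task running alone on a
nohz_full CPU being reniced from another CPU, typically via setpriority().
The sketch below is a hypothetical userspace reproducer, not part of the
posted series; the PID and nice value are made up. Before these patches,
such a call could end up sending a resched IPI to the isolated CPU even
though no preemption decision was needed there.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/resource.h>

int main(void)
{
	/* Hypothetical target: a task pinned to, and running alone on,
	 * a nohz_full CPU. */
	pid_t pid = 1234;
	int nice = -5;

	/* Issued from a housekeeping CPU; the kernel ends up in
	 * set_user_nice() operating on the remote task's runqueue. */
	if (setpriority(PRIO_PROCESS, pid, nice) < 0) {
		fprintf(stderr, "setpriority: %s\n", strerror(errno));
		return 1;
	}
	return 0;
}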


* [PATCH 1/2] sched: Spare resched IPI when prio changes on a single fair task
  2019-12-03 16:01 [PATCH 0/2] sched: Spare IPI on single task renice Frederic Weisbecker
@ 2019-12-03 16:01 ` Frederic Weisbecker
  2019-12-17 12:39   ` [tip: sched/core] " tip-bot2 for Frederic Weisbecker
  2019-12-03 16:01 ` [PATCH 2/2] sched: Use fair:prio_changed() instead of ad-hoc implementation Frederic Weisbecker
  2019-12-04 12:20 ` [PATCH 0/2] sched: Spare IPI on single task renice Peter Zijlstra
  2 siblings, 1 reply; 6+ messages in thread
From: Frederic Weisbecker @ 2019-12-03 16:01 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: LKML, Frederic Weisbecker

The runqueue of a fair task that is reniced remotely is going to get a
resched IPI so that it can reassess which task should be the one
currently running on the CPU. However, that evaluation is pointless if
the fair task is running alone, in which case the IPI can be spared,
preventing nohz_full CPUs from being disturbed.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/sched/fair.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 08a233e97a01..6d2560cb24f0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10322,6 +10322,8 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
 	if (!task_on_rq_queued(p))
 		return;
 
+	if (rq->cfs.nr_running == 1)
+		return;
 	/*
 	 * Reschedule if we are currently running on this runqueue and
 	 * our priority decreased, or if we are not currently running on
-- 
2.23.0
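
For reference, with the hunk above applied, prio_changed_fair() reads
roughly as follows. The two early returns and the start of the comment
come from the patch; the remainder of the body is reconstructed from the
kernel/sched/fair.c code of that era for illustration and may differ
from the exact source in detail:

static void
prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
{
	/* Nothing to do for a task that is not queued on this runqueue. */
	if (!task_on_rq_queued(p))
		return;

	/* New: a fair task running alone cannot be preempted as a result
	 * of the prio change, so skip the resched (and the IPI when the
	 * runqueue is remote). */
	if (rq->cfs.nr_running == 1)
		return;

	/*
	 * Reschedule if we are currently running on this runqueue and
	 * our priority decreased, or if we are not currently running on
	 * this runqueue and our priority is higher than the current's.
	 */
	if (rq->curr == p) {
		if (p->prio > oldprio)
			resched_curr(rq);
	} else {
		check_preempt_curr(rq, p, 0);
	}
}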


* [PATCH 2/2] sched: Use fair:prio_changed() instead of ad-hoc implementation
  2019-12-03 16:01 [PATCH 0/2] sched: Spare IPI on single task renice Frederic Weisbecker
  2019-12-03 16:01 ` [PATCH 1/2] sched: Spare resched IPI when prio changes on a single fair task Frederic Weisbecker
@ 2019-12-03 16:01 ` Frederic Weisbecker
  2019-12-17 12:39   ` [tip: sched/core] " tip-bot2 for Frederic Weisbecker
  2019-12-04 12:20 ` [PATCH 0/2] sched: Spare IPI on single task renice Peter Zijlstra
  2 siblings, 1 reply; 6+ messages in thread
From: Frederic Weisbecker @ 2019-12-03 16:01 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: LKML, Frederic Weisbecker

set_user_nice() implements its own version of fair::prio_changed() and
therefore misses a specific optimization for nohz_full CPUs that avoids
sending a resched IPI to a reniced task running alone. Use the proper
callback instead.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/sched/core.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 90e4b00ace89..66d1af94455e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4540,17 +4540,17 @@ void set_user_nice(struct task_struct *p, long nice)
 	p->prio = effective_prio(p);
 	delta = p->prio - old_prio;
 
-	if (queued) {
+	if (queued)
 		enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_NOCLOCK);
-		/*
-		 * If the task increased its priority or is running and
-		 * lowered its priority, then reschedule its CPU:
-		 */
-		if (delta < 0 || (delta > 0 && task_running(rq, p)))
-			resched_curr(rq);
-	}
+
 	if (running)
 		set_next_task(rq, p);
+	/*
+	 * If the task increased its priority or is running and
+	 * lowered its priority, then reschedule its CPU:
+	 */
+	p->sched_class->prio_changed(rq, p, old_prio);
+
 out_unlock:
 	task_rq_unlock(rq, p, &rf);
 }
-- 
2.23.0
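
For reference, after this patch the tail of set_user_nice() (as
reconstructed from the hunk above) looks like the following. The resched
decision now lives in the per-class prio_changed() callback, so a
reniced fair task that is alone on its runqueue hits the shortcut added
in patch 1/2 and no IPI is sent:

	if (queued)
		enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_NOCLOCK);

	if (running)
		set_next_task(rq, p);
	/*
	 * If the task increased its priority or is running and
	 * lowered its priority, then reschedule its CPU:
	 */
	p->sched_class->prio_changed(rq, p, old_prio);

out_unlock:
	task_rq_unlock(rq, p, &rf);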


* Re: [PATCH 0/2] sched: Spare IPI on single task renice
  2019-12-03 16:01 [PATCH 0/2] sched: Spare IPI on single task renice Frederic Weisbecker
  2019-12-03 16:01 ` [PATCH 1/2] sched: Spare resched IPI when prio changes on a single fair task Frederic Weisbecker
  2019-12-03 16:01 ` [PATCH 2/2] sched: Use fair:prio_changed() instead of ad-hoc implementation Frederic Weisbecker
@ 2019-12-04 12:20 ` Peter Zijlstra
  2 siblings, 0 replies; 6+ messages in thread
From: Peter Zijlstra @ 2019-12-04 12:20 UTC (permalink / raw)
  To: Frederic Weisbecker; +Cc: Ingo Molnar, LKML

On Tue, Dec 03, 2019 at 05:01:04PM +0100, Frederic Weisbecker wrote:
> A couple of patches to avoid disturbing nohz_full CPUs when a single
> fair task gets its priority changed. I should probably check other sched
> policies as well...

Yeah, auditing all that sounds like a good idea.

> Frederic Weisbecker (2):
>   sched: Spare resched IPI when prio changes on a single fair task
>   sched: Use fair:prio_changed() instead of ad-hoc implementation

These look good though, thanks!

* [tip: sched/core] sched: Use fair:prio_changed() instead of ad-hoc implementation
  2019-12-03 16:01 ` [PATCH 2/2] sched: Use fair:prio_changed() instead of ad-hoc implementation Frederic Weisbecker
@ 2019-12-17 12:39   ` tip-bot2 for Frederic Weisbecker
  0 siblings, 0 replies; 6+ messages in thread
From: tip-bot2 for Frederic Weisbecker @ 2019-12-17 12:39 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra, Frederic Weisbecker, Ingo Molnar, x86, LKML

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     5443a0be6121d557e12951537e10159e4c61035d
Gitweb:        https://git.kernel.org/tip/5443a0be6121d557e12951537e10159e4c61035d
Author:        Frederic Weisbecker <frederic@kernel.org>
AuthorDate:    Tue, 03 Dec 2019 17:01:06 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 17 Dec 2019 13:32:50 +01:00

sched: Use fair:prio_changed() instead of ad-hoc implementation

set_user_nice() implements its own version of fair::prio_changed() and
therefore misses a specific optimization for nohz_full CPUs that avoids
sending a resched IPI to a reniced task running alone. Use the proper
callback instead.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20191203160106.18806-3-frederic@kernel.org
---
 kernel/sched/core.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 90e4b00..15508c2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4540,17 +4540,17 @@ void set_user_nice(struct task_struct *p, long nice)
 	p->prio = effective_prio(p);
 	delta = p->prio - old_prio;
 
-	if (queued) {
+	if (queued)
 		enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_NOCLOCK);
-		/*
-		 * If the task increased its priority or is running and
-		 * lowered its priority, then reschedule its CPU:
-		 */
-		if (delta < 0 || (delta > 0 && task_running(rq, p)))
-			resched_curr(rq);
-	}
 	if (running)
 		set_next_task(rq, p);
+
+	/*
+	 * If the task increased its priority or is running and
+	 * lowered its priority, then reschedule its CPU:
+	 */
+	p->sched_class->prio_changed(rq, p, old_prio);
+
 out_unlock:
 	task_rq_unlock(rq, p, &rf);
 }

* [tip: sched/core] sched: Spare resched IPI when prio changes on a single fair task
  2019-12-03 16:01 ` [PATCH 1/2] sched: Spare resched IPI when prio changes on a single fair task Frederic Weisbecker
@ 2019-12-17 12:39   ` tip-bot2 for Frederic Weisbecker
  0 siblings, 0 replies; 6+ messages in thread
From: tip-bot2 for Frederic Weisbecker @ 2019-12-17 12:39 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra, Frederic Weisbecker, Ingo Molnar, x86, LKML

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     7c2e8bbd87db661122e92d71a394dd7bb3ada4d3
Gitweb:        https://git.kernel.org/tip/7c2e8bbd87db661122e92d71a394dd7bb3ada4d3
Author:        Frederic Weisbecker <frederic@kernel.org>
AuthorDate:    Tue, 03 Dec 2019 17:01:05 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 17 Dec 2019 13:32:50 +01:00

sched: Spare resched IPI when prio changes on a single fair task

The runqueue of a fair task that is reniced remotely is going to get a
resched IPI so that it can reassess which task should be the one
currently running on the CPU. However, that evaluation is pointless if
the fair task is running alone, in which case the IPI can be spared,
preventing nohz_full CPUs from being disturbed.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20191203160106.18806-2-frederic@kernel.org
---
 kernel/sched/fair.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 08a233e..846f50b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10322,6 +10322,9 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
 	if (!task_on_rq_queued(p))
 		return;
 
+	if (rq->cfs.nr_running == 1)
+		return;
+
 	/*
 	 * Reschedule if we are currently running on this runqueue and
 	 * our priority decreased, or if we are not currently running on
