From: Peter Zijlstra <peterz@infradead.org>
To: Dave Chinner <david@fromorbit.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
	Phil Auld <pauld@redhat.com>, Ming Lei <ming.lei@redhat.com>,
	linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-xfs@vger.kernel.org, linux-kernel@vger.kernel.org,
	Jeff Moyer <jmoyer@redhat.com>,
	Dave Chinner <dchinner@redhat.com>,
	Eric Sandeen <sandeen@redhat.com>, Christoph Hellwig <hch@lst.de>,
	Jens Axboe <axboe@kernel.dk>, Ingo Molnar <mingo@redhat.com>,
	Tejun Heo <tj@kernel.org>,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: Re: [PATCH v4] sched/core: Preempt current task in favour of bound kthread
Date: Thu, 12 Dec 2019 11:10:31 +0100	[thread overview]
Message-ID: <20191212101031.GV2827@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20191211224617.GE19256@dread.disaster.area>

On Thu, Dec 12, 2019 at 09:46:17AM +1100, Dave Chinner wrote:
> On Wed, Dec 11, 2019 at 11:08:29PM +0530, Srikar Dronamraju wrote:
> > A running task can wake up a per-CPU bound kthread on the same CPU.
> > If the currently running task doesn't yield the CPU before the next
> > load balance operation, the scheduler would detect the load imbalance
> > and try to balance the load. However, this load balance would fail:
> > the waiting task is CPU bound, and the running task cannot be moved
> > by the regular load balancer. Finally the active load balancer would
> > kick in and move the running task to a different CPU/core. Moving the
> > task to a different CPU/core can lead to loss of cache affinity and
> > hence to poor performance.
> > 
> > This is more prone to happen if the currently running task is CPU
> > intensive and sched_wake_up_granularity is set to a larger value.
> > When sched_wake_up_granularity was relatively small, it was observed
> > that the bound kthread would complete before the load balancer chose
> > to move the cache-hot task to a different CPU.
> > 
> > To deal with this situation, the currently running task yields to the
> > per-CPU bound kthread, provided the kthread is not CPU intensive.
> 
> So a question for you here: when does the workqueue worker pre-empt
> the currently running task? Is it immediately? Or when a time-slice
> of the currently running task runs out?
> 
> We don't want queued work immediately pre-empting the task that
> queued the work - the queued work is *deferred* work that should be
> run _soon_ but we want the currently running task to finish what it
> is doing first if possible. 
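
(For reference, that deferral is the normal workqueue behaviour:
queue_work_on() and friends only hand the item to the chosen CPU's worker
kthread and return immediately, so the queueing task keeps the CPU until it
sleeps or is preempted. A minimal kernel-module style sketch, with made-up
names, of queueing work to the local CPU:)

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/smp.h>

/* Runs later, in the worker kthread (kworker/N:M) of the chosen CPU. */
static void my_work_fn(struct work_struct *work)
{
	pr_info("deferred work on CPU %d\n", raw_smp_processor_id());
}
static DECLARE_WORK(my_work, my_work_fn);

static int __init my_init(void)
{
	int cpu = get_cpu();

	/*
	 * Hand the item to this CPU's worker and return immediately;
	 * my_work_fn() runs some time later, in the kworker, not here.
	 */
	queue_work_on(cpu, system_wq, &my_work);
	put_cpu();
	return 0;
}
module_init(my_init);

static void __exit my_exit(void)
{
	flush_work(&my_work);
}
module_exit(my_exit);
MODULE_LICENSE("GPL");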

Good point; something to maybe try (Srikar?) is making tick preemption
more aggressive for such tasks.

The below extends the previous patch to retain the set_next_buddy() on
wakeup, but does not make the actual preemption more aggressive.

Then it 'fixes' the tick preemption to better align with the actual
scheduler pick (i.e. consider the buddy hints).

---
 kernel/sched/core.c  |  3 +++
 kernel/sched/fair.c  | 26 +++++++++++++++++++-------
 kernel/sched/sched.h |  3 ++-
 3 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 90e4b00ace89..75738b136ea7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2560,6 +2560,9 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	success = 1;
 	cpu = task_cpu(p);
 
+	if (is_per_cpu_kthread(p))
+		wake_flags |= WF_KTHREAD;
+
 	/*
 	 * Ensure we load p->on_rq _after_ p->state, otherwise it would
 	 * be possible to, falsely, observe p->on_rq == 0 and get stuck
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 08a233e97a01..78e681c8c19a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4126,6 +4126,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		update_min_vruntime(cfs_rq);
 }
 
+static struct sched_entity *
+__pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr);
+
 /*
  * Preempt the current task with a newly woken task if needed:
  */
@@ -4156,13 +4159,13 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 	if (delta_exec < sysctl_sched_min_granularity)
 		return;
 
-	se = __pick_first_entity(cfs_rq);
+	se = __pick_next_entity(cfs_rq, NULL);
 	delta = curr->vruntime - se->vruntime;
 
 	if (delta < 0)
 		return;
 
-	if (delta > ideal_runtime)
+	if (delta > ideal_runtime) // maybe frob this too ?
 		resched_curr(rq_of(cfs_rq));
 }
 
@@ -4210,7 +4213,7 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se);
  * 4) do not run the "skip" process, if something else is available
  */
 static struct sched_entity *
-pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
+__pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
 	struct sched_entity *left = __pick_first_entity(cfs_rq);
 	struct sched_entity *se;
@@ -4255,8 +4258,14 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 	if (cfs_rq->next && wakeup_preempt_entity(cfs_rq->next, left) < 1)
 		se = cfs_rq->next;
 
-	clear_buddies(cfs_rq, se);
+	return se;
+}
 
+static struct sched_entity *
+pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
+{
+	struct sched_entity *se = __pick_next_entity(cfs_rq, curr);
+	clear_buddies(cfs_rq, se);
 	return se;
 }
 
@@ -6565,7 +6574,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
 	struct sched_entity *se = &curr->se, *pse = &p->se;
 	struct cfs_rq *cfs_rq = task_cfs_rq(curr);
 	int scale = cfs_rq->nr_running >= sched_nr_latency;
-	int next_buddy_marked = 0;
+	int wpe, next_buddy_marked = 0;
 
 	if (unlikely(se == pse))
 		return;
@@ -6612,14 +6621,17 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
 	find_matching_se(&se, &pse);
 	update_curr(cfs_rq_of(se));
 	BUG_ON(!pse);
-	if (wakeup_preempt_entity(se, pse) == 1) {
+	wpe = wakeup_preempt_entity(se, pse);
+	if (wpe >= !(wake_flags & WF_KTHREAD)) {
 		/*
 		 * Bias pick_next to pick the sched entity that is
 		 * triggering this preemption.
 		 */
 		if (!next_buddy_marked)
 			set_next_buddy(pse);
-		goto preempt;
+
+		if (wpe == 1)
+			goto preempt;
 	}
 
 	return;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 280a3c735935..2ee86ef51001 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1643,7 +1643,8 @@ static inline int task_on_rq_migrating(struct task_struct *p)
  */
 #define WF_SYNC			0x01		/* Waker goes to sleep after wakeup */
 #define WF_FORK			0x02		/* Child wakeup after fork */
-#define WF_MIGRATED		0x4		/* Internal use, task got migrated */
+#define WF_MIGRATED		0x04		/* Internal use, task got migrated */
+#define WF_KTHREAD		0x08
 
 /*
  * To aid in avoiding the subversion of "niceness" due to uneven distribution
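
To make the intended behaviour of the modified check_preempt_wakeup() check
explicit: wakeup_preempt_entity() returns 1 when the woken entity's vruntime
lags the current one by more than the wakeup granularity, 0 when it lags by
less, and -1 when it does not lag at all. With the change, a WF_KTHREAD
wakeup takes the buddy hint already in the '0' case, while an immediate
resched still requires '1'. A rough userspace sketch of just that condition
(names and numbers are illustrative, and the NEXT_BUDDY path earlier in
check_preempt_wakeup() is ignored):

#include <stdio.h>

#define WF_KTHREAD	0x08

/*
 * Mimics wakeup_preempt_entity(): vdiff stands for
 * curr->vruntime - pse->vruntime, gran for the wakeup granularity.
 */
static int wakeup_preempt_entity(long long vdiff, long long gran)
{
	if (vdiff <= 0)
		return -1;
	if (vdiff > gran)
		return 1;
	return 0;
}

static void check(long long vdiff, long long gran, int wake_flags)
{
	int wpe = wakeup_preempt_entity(vdiff, gran);
	int buddy = wpe >= !(wake_flags & WF_KTHREAD);	/* set_next_buddy() */
	int preempt = buddy && wpe == 1;		/* goto preempt */

	printf("vdiff=%3lld kthread=%d -> wpe=%2d buddy=%d preempt=%d\n",
	       vdiff, !!(wake_flags & WF_KTHREAD), wpe, buddy, preempt);
}

int main(void)
{
	const long long gran = 4;	/* stand-in wakeup granularity */

	check(-1, gran, 0);		/* woken not behind: nothing        */
	check( 2, gran, 0);		/* within gran, normal: nothing     */
	check( 2, gran, WF_KTHREAD);	/* within gran, kthread: buddy only */
	check( 8, gran, 0);		/* beyond gran: buddy and preempt   */
	check( 8, gran, WF_KTHREAD);	/* beyond gran, kthread: the same   */
	return 0;
}

The buddy hint then matters at the next pick/tick, because check_preempt_tick()
now goes through __pick_next_entity() and so honours cfs_rq->next instead of
blindly taking __pick_first_entity().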
