From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>,
Davidlohr Bueso <dave@stgolabs.net>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org
Subject: Re: Loadavg accounting error on arm64
Date: Mon, 16 Nov 2020 15:29:46 +0000 [thread overview]
Message-ID: <20201116152946.GR3371@techsingularity.net> (raw)
In-Reply-To: <20201116125803.GB3121429@hirez.programming.kicks-ass.net>
On Mon, Nov 16, 2020 at 01:58:03PM +0100, Peter Zijlstra wrote:
> On Mon, Nov 16, 2020 at 01:53:55PM +0100, Peter Zijlstra wrote:
> > On Mon, Nov 16, 2020 at 11:49:38AM +0000, Mel Gorman wrote:
> > > On Mon, Nov 16, 2020 at 09:10:54AM +0000, Mel Gorman wrote:
> > > > I'll be looking again today to see whether I can find a mistake in the
> > > > ordering of how sched_contributes_to_load is handled but again, my lack
> > > > of knowledge of the arm64 memory model means I'm a bit stuck and a
> > > > second set of eyes would be nice :(
> > > >
> > >
> > > This morning, it's not particularly clear what exactly orders the
> > > visibility of sched_contributes_to_load, like other task fields, between
> > > the schedule and try_to_wake_up paths. I thought the rq lock would have
> > > ordered them but something is clearly off or loadavg would not be getting
> > > screwed. It could be done with an rmb and wmb (testing, and it hasn't
> > > blown up so far) but that's far too heavy. smp_load_acquire/
> > > smp_store_release might be sufficient, although it's less clear whether
> > > arm64 gives the necessary guarantees.
> > >
> > > (This is still at the chucking-out-ideas stage as I haven't context
> > > switched back in all the memory barrier rules.)
> >
> > IIRC it should be so ordered by ->on_cpu.
> >
> > We have:
> >
> > 	schedule()
> > 		prev->sched_contributes_to_load = X;
> > 		smp_store_release(&prev->on_cpu, 0);
> >
> >
> > on the one hand, and:
>
> Ah, my bad, ttwu() itself will of course wait for !p->on_cpu before we
> even get here.
>
Sort of, it will either have called smp_load_acquire(&p->on_cpu) or
smp_cond_load_acquire(&p->on_cpu, !VAL) before hitting one of the paths
leading to ttwu_do_activate. Either way, it's covered.
> > 	sched_ttwu_pending()
> > 		if (WARN_ON_ONCE(p->on_cpu))
> > 			smp_cond_load_acquire(&p->on_cpu, !VAL);
> >
> > 	ttwu_do_activate()
> > 		if (p->sched_contributes_to_load)
> > 			...
> >
> > on the other (for the remote case, which is the only 'interesting' one).
>
But this side is interesting because I'm having trouble convincing
myself it's 100% correct for sched_contributes_to_load. The write of
prev->sched_contributes_to_load in the schedule() path has a big gap
before it hits the smp_store_release(&prev->on_cpu, 0).
On the ttwu path, we have

	if (smp_load_acquire(&p->on_cpu) &&
	    ttwu_queue_wakelist(p, task_cpu(p), wake_flags | WF_ON_CPU))
		goto unlock;
ttwu_queue_wakelist() queues the task on the wakelist and sends an IPI;
on the receiver side, sched_ttwu_pending() calls ttwu_do_activate(),
which reads sched_contributes_to_load.
sched_ttwu_pending() is not necessarily using the same rq lock so there
is no protection there. The smp_load_acquire() has just been hit but it
still leaves a gap between when sched_contributes_to_load is written and
a parallel read of sched_contributes_to_load.

So while we might be able to avoid a smp_rmb() before the read of
sched_contributes_to_load and rely on p->on_cpu ordering there,
we may still need a smp_wmb() after rq->nr_uninterruptible is incremented
instead of waiting until the smp_store_release() is hit while a task
is scheduling. That would be a real memory barrier on arm64 and a plain
compiler barrier on x86-64.
> Also see the "Notes on Program-Order guarantees on SMP systems."
> comment.
I did; it was the on_cpu ordering for the blocking case that had me
looking at the smp_store_release and smp_cond_load_acquire implementations
on arm64 in the first place, thinking that something in there must be
breaking the on_cpu ordering. I'm re-reading it every so often while
trying to figure out where the gap is or whether I'm imagining things.
Not fully tested but did not instantly break either:
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2003a7d5ab5..877eaeba45ac 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4459,14 +4459,26 @@ static void __sched notrace __schedule(bool preempt)
 		if (signal_pending_state(prev_state, prev)) {
 			prev->state = TASK_RUNNING;
 		} else {
-			prev->sched_contributes_to_load =
+			int acct_load =
 				(prev_state & TASK_UNINTERRUPTIBLE) &&
 				!(prev_state & TASK_NOLOAD) &&
 				!(prev->flags & PF_FROZEN);
 
-			if (prev->sched_contributes_to_load)
+			prev->sched_contributes_to_load = acct_load;
+			if (acct_load) {
 				rq->nr_uninterruptible++;
+
+				/*
+				 * Pairs with p->on_cpu ordering, either a
+				 * smp_load_acquire or smp_cond_load_acquire
+				 * in the ttwu path before ttwu_do_activate
+				 * reads p->sched_contributes_to_load. It's
+				 * only after the rq->nr_uninterruptible
+				 * update happens that the ordering is critical.
+				 */
+				smp_wmb();
+			}
 
 			/*
 			 * __schedule()			ttwu()
 			 *   prev_state = prev->state;    if (p->on_rq && ...)
--
Mel Gorman
SUSE Labs