From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>,
	Davidlohr Bueso <dave@stgolabs.net>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: Loadavg accounting error on arm64
Date: Mon, 16 Nov 2020 17:16:41 +0000
Message-ID: <20201116171641.GU3371@techsingularity.net>
In-Reply-To: <20201116165415.GG3121392@hirez.programming.kicks-ass.net>

On Mon, Nov 16, 2020 at 05:54:15PM +0100, Peter Zijlstra wrote:
> > > And then the stores of X and Y clobber one another.. Hummph, seems
> > > reasonable. One quick thing to test would be something like this:
> > > 
> > > 
> > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > index 7abbdd7f3884..9844e541c94c 100644
> > > --- a/include/linux/sched.h
> > > +++ b/include/linux/sched.h
> > > @@ -775,7 +775,9 @@ struct task_struct {
> > >  	unsigned			sched_reset_on_fork:1;
> > >  	unsigned			sched_contributes_to_load:1;
> > >  	unsigned			sched_migrated:1;
> > > +	unsigned			:0;
> > >  	unsigned			sched_remote_wakeup:1;
> > > +	unsigned			:0;
> > >  #ifdef CONFIG_PSI
> > >  	unsigned			sched_psi_wake_requeue:1;
> > >  #endif
> > 
> > I'll test this after the smp_wmb() test completes. While a clobbering may
> > be the issue, I also think the gap between the rq->nr_uninterruptible++
> > and smp_store_release(prev->on_cpu, 0) is relevant and a better candidate.
> 
> I really don't understand what you wrote in that email...

Sorry :(. I tried writing a changelog showing where I think the race
might be. I'll queue up your patch testing whether sched_migrated and
sched_remote_wakeup are clobbering each other.
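
For reference, a minimal standalone sketch (illustration only, not kernel
code) of the clobbering being discussed: adjacent bitfields that share a
storage unit are updated with a read-modify-write of the whole unit, so
two CPUs flipping different flags under different locks can undo each
other's update, while a zero-width bitfield forces the following flag into
a separate unit so the two updates no longer touch the same word.

struct flags_packed {
	unsigned	f1:1;
	unsigned	f2:1;	/* shares a word with f1; a store to either
				 * flag is a read-modify-write of that word,
				 * so concurrent updates can clobber */
};

struct flags_padded {
	unsigned	f1:1;
	unsigned	:0;	/* close the current unit */
	unsigned	f2:1;	/* starts a new unit; updating f1 no longer
				 * touches f2's storage */
};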

--8<--
sched: Fix loadavg accounting race on arm64

An internal bug report filed against 5.8 and 5.9 kernels stated that
loadavg was "exploding" on arm64 on machines acting as build servers. It
happened on at least two different arm64 variants. That setup is complex to
replicate but the problem can be reproduced by running hackbench-process-pipes
while heavily overcommitting a machine with 96 logical CPUs and then checking
whether loadavg drops afterwards. With an MMTests clone, it can be reproduced
as follows:

./run-mmtests.sh --config configs/config-workload-hackbench-process-pipes --no-monitor testrun; \
for i in `seq 1 60`; do cat /proc/loadavg; sleep 60; done

The reproduction case simply hammers the window where a task is
descheduling at the same time it is being woken by another task.
After the test completes, loadavg should reach 0 within a few minutes.

Commit dbfb089d360b ("sched: Fix loadavg accounting race") fixed a loadavg
accounting race in the generic case. Later, documentation was added
explaining why the ordering of p->sched_contributes_to_load reads/updates
relative to p->on_cpu matters. The ordering is critical when a task is
descheduling at the same time it is being activated on another CPU. While
the loads/stores happen under the RQ lock, the RQ lock on its own does not
give any guarantees on the task state.
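
For reference, the waker-side consumer of the flag is ttwu_do_activate(). A
simplified sketch (trimmed to the relevant lines, not the verbatim code in
kernel/sched/core.c around v5.9) of the part that matters here:

/* Simplified: how the wakeup path consumes sched_contributes_to_load. */
static void ttwu_do_activate(struct rq *rq, struct task_struct *p,
			     int wake_flags, struct rq_flags *rf)
{
	/*
	 * If the task was counted into nr_uninterruptible when it blocked,
	 * it must be counted back out on wakeup. Observing a stale value
	 * here means the increment made by the descheduling CPU is never
	 * undone and loadavg drifts upwards.
	 */
	if (p->sched_contributes_to_load)
		rq->nr_uninterruptible--;

	activate_task(rq, p, ENQUEUE_WAKEUP | ENQUEUE_NOCLOCK);
	ttwu_do_wakeup(rq, p, wake_flags, rf);
}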

The problem appears to be that the schedule and wakeup paths rely on
->on_cpu to order accesses to some fields, as documented in core.c
under "Notes on Program-Order guarantees on SMP systems". However,
the following can happen:

CPU 0					CPU 1			CPU 2

__schedule()
  prev->sched_contributes_to_load = 1
  rq->nr_uninterruptible++;
  rq_unlock_irq
					try_to_wake_up
					  smp_load_acquire(&p->on_cpu) && ttwu_queue_wakelist(p) == true
					    ttwu_queue_wakelist
					      ttwu_queue_cond (true)
					      __ttwu_queue_wakelist

								sched_ttwu_pending
								  ttwu_do_activate
								    if (p->sched_contributes_to_load) (wrong value observed, load drifts)
finish_task
  smp_store_release(prev->on_cpu, 0)

There is a window between when rq->nr_uninterruptible is written and
when p->on_cpu is cleared with smp_store_release(). During that window,
a parallel waker may observe a stale value of
p->sched_contributes_to_load, fail to decrement rq->nr_uninterruptible,
and loadavg starts drifting.
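
To spell the two sides out, a rough sketch of the ordering the wakeup path
currently relies on (simplified; the cpu0_*/cpu1_* wrappers are only for
illustration, the calls inside them are the ones named above):

/* CPU 0: descheduling prev under the rq lock, then finishing the switch. */
static void cpu0_deschedule(struct rq *rq, struct task_struct *prev)
{
	prev->sched_contributes_to_load = 1;
	rq->nr_uninterruptible++;

	/* rq lock dropped, context switch completes, then finish_task(): */
	smp_store_release(&prev->on_cpu, 0);
}

/* CPU 1: try_to_wake_up() racing with the above. */
static bool cpu1_wakeup(struct task_struct *p, int cpu, int wake_flags)
{
	/*
	 * If p is still seen on_cpu, the wakeup is queued on a remote CPU.
	 * Observing on_cpu == 1 happens *before* the release above, so
	 * nothing orders the remote CPU's later read of
	 * p->sched_contributes_to_load against the increment on CPU 0.
	 */
	if (smp_load_acquire(&p->on_cpu) &&
	    ttwu_queue_wakelist(p, cpu, wake_flags))
		return true;	/* sched_ttwu_pending() will activate p */

	/* Otherwise wait for the release before activating p locally. */
	smp_cond_load_acquire(&p->on_cpu, !VAL);
	return false;
}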

This patch adds a write barrier after nr_uninterruptible is updated,
pairing with the acquire of p->on_cpu in the ttwu path. With the patch
applied, the load averages taken every minute after the hackbench test
case completes are

950.21 977.17 990.69 1/853 2117
349.00 799.32 928.69 1/859 2439
128.18 653.85 870.56 1/861 2736
47.08 534.84 816.08 1/860 3029
17.29 437.50 765.00 1/865 3357
6.35 357.87 717.13 1/865 3653
2.33 292.74 672.24 1/861 3709
0.85 239.46 630.17 1/859 3711
0.31 195.87 590.73 1/857 3713
0.11 160.22 553.76 1/853 3715

Without the patch, the load average stabilised at 244 on an otherwise
idle machine.

Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Cc: stable@vger.kernel.org # v5.8+

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2003a7d5ab5..877eaeba45ac 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4459,14 +4459,26 @@ static void __sched notrace __schedule(bool preempt)
 		if (signal_pending_state(prev_state, prev)) {
 			prev->state = TASK_RUNNING;
 		} else {
-			prev->sched_contributes_to_load =
+			int acct_load =
 				(prev_state & TASK_UNINTERRUPTIBLE) &&
 				!(prev_state & TASK_NOLOAD) &&
 				!(prev->flags & PF_FROZEN);
 
-			if (prev->sched_contributes_to_load)
+			prev->sched_contributes_to_load = acct_load;
+			if (acct_load) {
 				rq->nr_uninterruptible++;
 
+				/*
+				 * Pairs with the p->on_cpu ordering, either a
+				 * smp_load_acquire or smp_cond_load_acquire
+				 * in the ttwu path, before ttwu_do_activate
+				 * reads p->sched_contributes_to_load. It's
+				 * only after the nr_uninterruptible update
+				 * happens that the ordering is critical.
+				 */
+				smp_wmb();
+			}
+
 			/*
 			 * __schedule()			ttwu()
 			 *   prev_state = prev->state;    if (p->on_rq && ...)

-- 
Mel Gorman
SUSE Labs

Thread overview:
2020-11-16  9:10 Loadavg accounting error on arm64 Mel Gorman
2020-11-16 11:49 ` Mel Gorman
2020-11-16 12:00   ` Mel Gorman
2020-11-16 12:53   ` Peter Zijlstra
2020-11-16 12:58     ` Peter Zijlstra
2020-11-16 15:29       ` Mel Gorman
2020-11-16 16:42         ` Mel Gorman
2020-11-16 16:49         ` Peter Zijlstra
2020-11-16 17:24           ` Mel Gorman
2020-11-16 17:41             ` Will Deacon
2020-11-16 12:46 ` Peter Zijlstra
2020-11-16 12:58   ` Mel Gorman
2020-11-16 13:11 ` Will Deacon
2020-11-16 13:37   ` Mel Gorman
2020-11-16 14:20     ` Peter Zijlstra
2020-11-16 15:52       ` Mel Gorman
2020-11-16 16:54         ` Peter Zijlstra
2020-11-16 17:16           ` Mel Gorman [this message]
2020-11-16 19:31       ` Mel Gorman
2020-11-17  8:30         ` [PATCH] sched: Fix data-race in wakeup Peter Zijlstra
2020-11-17  9:15           ` Will Deacon
2020-11-17  9:29             ` Peter Zijlstra
2020-11-17  9:46               ` Peter Zijlstra
2020-11-17 10:36                 ` Will Deacon
2020-11-17 12:52                 ` Valentin Schneider
2020-11-17 15:37                   ` Valentin Schneider
2020-11-17 16:13                     ` Peter Zijlstra
2020-11-17 19:32                       ` Valentin Schneider
2020-11-18  8:05                         ` Peter Zijlstra
2020-11-18  9:51                           ` Valentin Schneider
2020-11-18 13:33               ` Marco Elver
2020-11-17  9:38           ` [PATCH] sched: Fix rq->nr_iowait ordering Peter Zijlstra
2020-11-17 11:43             ` Mel Gorman
2020-11-19  9:55             ` [tip: sched/urgent] " tip-bot2 for Peter Zijlstra
2020-11-17 12:40           ` [PATCH] sched: Fix data-race in wakeup Mel Gorman
2020-11-19  9:55           ` [tip: sched/urgent] " tip-bot2 for Peter Zijlstra
