From: Peter Zijlstra <peterz@infradead.org>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Will Deacon <will@kernel.org>,
	Davidlohr Bueso <dave@stgolabs.net>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: Loadavg accounting error on arm64
Date: Mon, 16 Nov 2020 17:49:28 +0100	[thread overview]
Message-ID: <20201116164928.GF3121392@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20201116152946.GR3371@techsingularity.net>

On Mon, Nov 16, 2020 at 03:29:46PM +0000, Mel Gorman wrote:
> On Mon, Nov 16, 2020 at 01:58:03PM +0100, Peter Zijlstra wrote:

> > > 	sched_ttwu_pending()
> > > 		if (WARN_ON_ONCE(p->on_cpu))
> > > 			smp_cond_load_acquire(&p->on_cpu)
> > > 
> > > 		ttwu_do_activate()
> > > 			if (p->sched_contributes_to_load)
> > > 				...
> > > 
> > > on the other (for the remote case, which is the only 'interesting' one).
> > 
> 
> But this side is interesting because I'm having trouble convincing
> myself it's 100% correct for sched_contributes_to_load. The write of
> prev->sched_contributes_to_load in the schedule() path has a big gap
> before it hits the smp_store_release(&prev->on_cpu, 0).
> 
> On the ttwu path, we have
> 
>         if (smp_load_acquire(&p->on_cpu) &&
>             ttwu_queue_wakelist(p, task_cpu(p), wake_flags | WF_ON_CPU))
>                 goto unlock;
> 
> 	ttwu_queue_wakelist() queues the task on the wake list and sends
> 	an IPI; on the receiving CPU, sched_ttwu_pending() calls
> 	ttwu_do_activate(), which reads sched_contributes_to_load.
> 
> sched_ttwu_pending() is not necessarily holding the same rq lock, so
> there is no protection from that. The smp_load_acquire() has just been
> passed, but that still leaves a gap between the write of
> sched_contributes_to_load and a parallel read of it.
> 
> So while we might be able to avoid a smp_rmb() before the read of
> sched_contributes_to_load and rely on p->on_cpu ordering there,
> we may still need a smp_wmb() after the rq->nr_uninterruptible increment
> instead of waiting until the smp_store_release() is hit while a task
> is scheduling. That would be a real memory barrier on arm64 and a plain
> compiler barrier on x86-64.
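
The pairing being described can be modeled stand-alone with C11
acquire/release. This is an illustration only: the variable names are
invented for the sketch and it is not the kernel code, just the ordering
skeleton of the two paths quoted above.

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static int contributes_to_load;    /* models p->sched_contributes_to_load */
static long nr_uninterruptible;    /* models rq->nr_uninterruptible */
static atomic_int on_cpu = 1;      /* models p->on_cpu */

/* Models __schedule() on the CPU the task is blocking on. */
static void *blocking_cpu(void *arg)
{
	contributes_to_load = 1;               /* plain store, done early */
	nr_uninterruptible++;                  /* plain store, done early */
	atomic_store_explicit(&on_cpu, 0,      /* smp_store_release(&p->on_cpu, 0) */
			      memory_order_release);
	return NULL;
}

/* Models the remote sched_ttwu_pending()/ttwu_do_activate() path. */
static void *waking_cpu(void *arg)
{
	/* smp_cond_load_acquire(&p->on_cpu, !VAL) */
	while (atomic_load_explicit(&on_cpu, memory_order_acquire))
		;
	/*
	 * Having observed on_cpu == 0 with acquire semantics, every store
	 * made before the paired release store must be visible here,
	 * despite the "gap" in program order on the other side.
	 */
	assert(contributes_to_load == 1);
	assert(nr_uninterruptible == 1);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, blocking_cpu, NULL);
	pthread_create(&b, NULL, waking_cpu, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Provided the read of sched_contributes_to_load really is ordered after the
acquire of ->on_cpu, that pairing already orders the earlier plain stores,
and an extra smp_wmb() would not add anything for a reader that
synchronizes through ->on_cpu.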

I'm mighty confused by your words here; and the patch below. What actual
scenario are you worried about?

If we take the WF_ON_CPU path, we IPI the CPU the task is ->on_cpu on.
So the IPI lands after the schedule() that clears ->on_cpu on the very
same CPU.
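
Seen from that angle, the queued wakeup is handled on the very CPU that
just did the stores in __schedule(), so plain program order is all that is
needed there; roughly (again an illustration with invented names, not the
kernel code):

#include <assert.h>

static int contributes_to_load;
static long nr_uninterruptible;

/* Models sched_ttwu_pending() run from the IPI on the same CPU. */
static void ipi_handler(void)
{
	assert(contributes_to_load == 1);   /* same CPU, nothing stale to see */
	assert(nr_uninterruptible == 1);
}

int main(void)
{
	/* __schedule() does its stores with interrupts disabled ... */
	contributes_to_load = 1;
	nr_uninterruptible++;

	/* ... and only afterwards is the queued wakeup IPI handled. */
	ipi_handler();
	return 0;
}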

> 
> > Also see the "Notes on Program-Order guarantees on SMP systems."
> > comment.
> 
> I did. It was the on_cpu ordering for the blocking case that had me
> looking at the smp_store_release() and smp_cond_load_acquire()
> implementations on arm64 in the first place, thinking that something in
> there must be breaking the on_cpu ordering. I keep re-reading it while
> trying to figure out where the gap is, or whether I'm imagining things.
> 
> Not fully tested, but it did not instantly break either:
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d2003a7d5ab5..877eaeba45ac 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4459,14 +4459,26 @@ static void __sched notrace __schedule(bool preempt)
>  		if (signal_pending_state(prev_state, prev)) {
>  			prev->state = TASK_RUNNING;
>  		} else {
> -			prev->sched_contributes_to_load =
> +			int acct_load =
>  				(prev_state & TASK_UNINTERRUPTIBLE) &&
>  				!(prev_state & TASK_NOLOAD) &&
>  				!(prev->flags & PF_FROZEN);
>  
> -			if (prev->sched_contributes_to_load)
> +			prev->sched_contributes_to_load = acct_load;
> +			if (acct_load) {
>  				rq->nr_uninterruptible++;
>  
> +				/*
> +				 * Pairs with p->on_cpu ordering, either a
> +				 * smp_load_acquire or smp_cond_load_acquire
> +				 * in the ttwu path before ttwu_do_activate()
> +				 * reads p->sched_contributes_to_load. It's only
> +				 * after the rq->nr_uninterruptible update
> +				 * that the ordering is critical.
> +				 */
> +				smp_wmb();
> +			}

Sorry, I can't follow, at all.

Thread overview: 70+ messages
2020-11-16  9:10 Loadavg accounting error on arm64 Mel Gorman
2020-11-16 11:49 ` Mel Gorman
2020-11-16 12:00   ` Mel Gorman
2020-11-16 12:53   ` Peter Zijlstra
2020-11-16 12:58     ` Peter Zijlstra
2020-11-16 15:29       ` Mel Gorman
2020-11-16 16:42         ` Mel Gorman
2020-11-16 16:49         ` Peter Zijlstra [this message]
2020-11-16 17:24           ` Mel Gorman
2020-11-16 17:41             ` Will Deacon
2020-11-16 12:46 ` Peter Zijlstra
2020-11-16 12:58   ` Mel Gorman
2020-11-16 13:11 ` Will Deacon
2020-11-16 13:37   ` Mel Gorman
2020-11-16 14:20     ` Peter Zijlstra
2020-11-16 15:52       ` Mel Gorman
2020-11-16 16:54         ` Peter Zijlstra
2020-11-16 17:16           ` Mel Gorman
2020-11-16 19:31       ` Mel Gorman
2020-11-17  8:30         ` [PATCH] sched: Fix data-race in wakeup Peter Zijlstra
2020-11-17  9:15           ` Will Deacon
2020-11-17  9:29             ` Peter Zijlstra
2020-11-17  9:46               ` Peter Zijlstra
2020-11-17 10:36                 ` Will Deacon
2020-11-17 12:52                 ` Valentin Schneider
2020-11-17 15:37                   ` Valentin Schneider
2020-11-17 16:13                     ` Peter Zijlstra
2020-11-17 19:32                       ` Valentin Schneider
2020-11-18  8:05                         ` Peter Zijlstra
2020-11-18  9:51                           ` Valentin Schneider
2020-11-18 13:33               ` Marco Elver
2020-11-17  9:38           ` [PATCH] sched: Fix rq->nr_iowait ordering Peter Zijlstra
2020-11-17 11:43             ` Mel Gorman
2020-11-19  9:55             ` [tip: sched/urgent] " tip-bot2 for Peter Zijlstra
2020-11-17 12:40           ` [PATCH] sched: Fix data-race in wakeup Mel Gorman
2020-11-19  9:55           ` [tip: sched/urgent] " tip-bot2 for Peter Zijlstra
