From: "Paul E. McKenney" <paulmck@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>,
	jiangshanlai@gmail.com, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: Workqueues splat due to ending up on wrong CPU
Date: Mon, 9 Dec 2019 10:59:08 -0800
Message-ID: <20191209185908.GA8470@paulmck-ThinkPad-P72>
In-Reply-To: <20191206220020.GA27511@paulmck-ThinkPad-P72>

On Fri, Dec 06, 2019 at 02:00:20PM -0800, Paul E. McKenney wrote:
> On Fri, Dec 06, 2019 at 10:52:08AM -0800, Paul E. McKenney wrote:
> > On Thu, Dec 05, 2019 at 06:48:05AM -0800, Paul E. McKenney wrote:
> > > On Thu, Dec 05, 2019 at 11:32:13AM +0100, Peter Zijlstra wrote:
> > > > On Thu, Dec 05, 2019 at 11:29:28AM +0100, Peter Zijlstra wrote:
> > > > > On Wed, Dec 04, 2019 at 12:11:50PM -0800, Paul E. McKenney wrote:
> > > > > 
> > > > > > And the good news is that I didn't see the workqueue splat, though my
> > > > > > best guess is that I had about a 13% chance of not seeing it due to
> > > > > > random chance (and I am currently trying an idea that I hope will make
> > > > > > it more probable).  But I did get a couple of new complaints about RCU
> > > > > > being used illegally from an offline CPU.  Splats below.
> > > > > 
> > > > > Shiny!
> > > 
> > > And my attempt to speed things up did succeed, but the success was limited
> > > to finding more places where rcutorture chokes on CPUs being slow to boot.
> > > Fixing those and trying again...
> > 
> > And I finally did manage to get a clean run.  There are probably a few
> > more things that a large slow-booting hyperthreaded system can do to
> > confuse rcutorture, but it is at least down to a dull roar.
> > 
> > > > > > Your patch did rearrange the CPU-online sequence, so let's see if I
> > > > > > can piece things together...
> > > > > > 
> > > > > > RCU considers a CPU to be online at rcu_cpu_starting() time.  This is
> > > > > > called from notify_cpu_starting(), which is called from the arch-specific
> > > > > > CPU-bringup code.  Any RCU readers before rcu_cpu_starting() will trigger
> > > > > > the warning I am seeing.
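> > > > > > 
> > > > > > To make the ordering concrete, here is a heavily abbreviated sketch
> > > > > > of the x86 bringup path, from memory, so the details may not match
> > > > > > any particular kernel version exactly:
> > > > > > 
> > > > > > 	/* arch/x86/kernel/smpboot.c, abbreviated */
> > > > > > 	static void notrace start_secondary(void *unused)
> > > > > > 	{
> > > > > > 		...
> > > > > > 		/* -> notify_cpu_starting() -> rcu_cpu_starting() */
> > > > > > 		smp_callin();
> > > > > > 		...
> > > > > > 		set_cpu_online(smp_processor_id(), true);
> > > > > > 		...
> > > > > > 		/* Drops into the idle loop, never returns. */
> > > > > > 		cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
> > > > > > 	}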
> > > > > 
> > > > > Right.
> > > > > 
> > > > > > The original location of the stop_machine_unpark() was in
> > > > > > bringup_wait_for_ap(), which is called from bringup_cpu(), which is in
> > > > > > the CPUHP_BRINGUP_CPU entry of cpuhp_hp_states[].  Which, if I am not
> > > > > > too confused, is invoked by some CPU other than the to-be-incoming CPU.
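> > > > > > 
> > > > > > If I remember the old code correctly, that looked something like
> > > > > > the following, again abbreviated and from memory:
> > > > > > 
> > > > > > 	/* kernel/cpu.c, before your patch */
> > > > > > 	static int bringup_wait_for_ap(unsigned int cpu)
> > > > > > 	{
> > > > > > 		struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
> > > > > > 
> > > > > > 		/* Wait for the AP to reach CPUHP_AP_ONLINE_IDLE. */
> > > > > > 		wait_for_ap_thread(st, true);
> > > > > > 		...
> > > > > > 		/* Unparked by the controlling CPU, not the AP itself. */
> > > > > > 		stop_machine_unpark(cpu);
> > > > > > 		kthread_unpark(st->thread);
> > > > > > 		...
> > > > > > 	}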
> > > > > 
> > > > > Correct.
> > > > > 
> > > > > > The new location of the stop_machine_unpark() is in cpuhp_online_idle(),
> > > > > > which is called from cpu_startup_entry(), which is invoked from
> > > > > > the arch-specific bringup code that runs on the incoming CPU.
> > > > > 
> > > > > The new place is the final piece of bringup, it is right before where
> > > > > the freshly woken CPU will drop into the idle loop and start scheduling
> > > > > (for the first time).
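> > > > > 
> > > > > IOW, something like this, abbreviated:
> > > > > 
> > > > > 	/* kernel/cpu.c, with the patch applied */
> > > > > 	void cpuhp_online_idle(enum cpuhp_state state)
> > > > > 	{
> > > > > 		struct cpuhp_cpu_state *st = this_cpu_ptr(&cpuhp_state);
> > > > > 
> > > > > 		/* Happens for the boot cpu */
> > > > > 		if (state != CPUHP_AP_ONLINE_IDLE)
> > > > > 			return;
> > > > > 
> > > > > 		st->state = CPUHP_AP_ONLINE_IDLE;
> > > > > 		/* Unparked by the incoming CPU itself, just before idle. */
> > > > > 		stop_machine_unpark(smp_processor_id());
> > > > > 		complete_ap_thread(st, true);
> > > > > 	}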
> > > > > 
> > > > > > Which
> > > > > > is the same code that invokes notify_cpu_starting(), so we need
> > > > > > notify_cpu_starting() to be invoked before cpu_startup_entry().
> > > > > 
> > > > > Right, that is right before we run what used to be the CPU_STARTING
> > > > > notifiers. This is in fact (on x86) before the CPU is marked
> > > > > cpu_online(). It has to be before cpu_startup_entry(), because this is
> > > > > run with IRQs disabled, while cpu_startup_entry() demands IRQs are
> > > > > enabled.
> > > > > 
> > > > > > The order is not immediately obvious on IA64.  But it looks like
> > > > > > everything else does it in the required order, so I am a bit confused
> > > > > > about this.
> > > > > 
> > > > > That makes two of us, afaict we have RCU up and running when we get to
> > > > > the idle loop.
> > > > 
> > > > Or did we need rcutree_online_cpu() to have run? Because that is run
> > > > much later than this...
> > > 
> > > No, rcu_cpu_starting() does the trick.  So I remain confused.
> > > 
> > > My thought is to add some printk()s or tracing to rcu_cpu_starting()
> > > and its counterpart, rcu_report_dead().  But is there a better way?
> > 
> > And the answer is...
> > 
> > This splat happens even without your fix!
> > 
> > Which goes a long way to explaining why neither of us could figure out
> > how your fix could have caused it.  Apparently it was the increased
> > stress needed to reproduce quickly, rather than your fix, that made it
> > happen more frequently.  Though there are few enough occurrences that
> > it might just be random chance.
> > 
> > Thoughts?
> 
> I now have 12 of these, and my best guess is that this is happening
> from kvm_guest_cpu_init() when it prints "KVM setup async PF for cpu",
> given that this message is always either the line immediately
> following the splat or the one after that.  So let's see if I can
> connect the dots between kvm_guest_cpu_init() and start_secondary().
> The "? slow_virt_to_phys()" makes sense, as it is invoked by
> kvm_guest_cpu_init() just before the suspect printk().
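> 
> For reference, the relevant piece of kvm_guest_cpu_init() looks roughly
> like the following, abbreviated from my possibly faulty memory of it:
> 
> 	/* arch/x86/kernel/kvm.c, abbreviated */
> 	static void kvm_guest_cpu_init(void)
> 	{
> 		if (kvm_para_has_feature(KVM_FEATURE_ASYNC_PF) && kvmapf) {
> 			u64 pa = slow_virt_to_phys(this_cpu_ptr(&apf_reason));
> 			...
> 			wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
> 			__this_cpu_write(apf_reason.enabled, 1);
> 			printk(KERN_INFO "KVM setup async PF for cpu %d\n",
> 			       smp_processor_id());
> 		}
> 		...
> 	}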
> 
> kvm_guest_cpu_init() is invoked by kvm_smp_prepare_boot_cpu(),
> kvm_cpu_online(), and kvm_guest_init().  Since the boot CPU came
> up long ago and since rcutorture CPU hotplug should be on the job
> at the time of all of these splats, I am guessing kvm_cpu_online().
> But kvm_cpu_online() is invoked by kvm_guest_init(), so all non-boot-CPU
> roads lead to kvm_guest_init() anyway.
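> 
> And unless I am misreading it, kvm_cpu_online() is just a thin
> IRQ-disabled wrapper:
> 
> 	/* arch/x86/kernel/kvm.c */
> 	static int kvm_cpu_online(unsigned int cpu)
> 	{
> 		local_irq_disable();
> 		kvm_guest_cpu_init();
> 		local_irq_enable();
> 		return 0;
> 	}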
> 
> But kvm_guest_init() is a postcore_initcall() function.
> It is also placed in x86_hyper_kvm.init.guest_late_init().
> The postcore_initcall() looks unconditional, but does not appear in
> dmesg.  Besides which, at the time of the splat, boot is working on
> late_initcall()s rather than postcore_initcalls().  So let's look at
> x86_hyper_kvm.init.guest_late_init(), which is invoked in setup_arch(),
> which is in turn invoked way early in boot, before rcu_init().
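> 
> That is, as far as I can tell the two registration paths amount to
> something like this:
> 
> 	/* arch/x86/kernel/kvm.c, abbreviated */
> 	postcore_initcall(kvm_guest_init);
> 
> 	const __initconst struct hypervisor_x86 x86_hyper_kvm = {
> 		...
> 		.init.guest_late_init	= kvm_guest_init,
> 		...
> 	};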
> 
> So neither seems to make much sense here.
> 
> On the other hand, rcutorture's exercising of CPU hotplug before init
> has been spawned might not make the most sense, either.  So I will queue
> a patch that makes rcutorture hold off on the hotplug until boot is a
> bit further along.
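> 
> (For anyone wanting to reproduce this setup, rcutorture already has a
> knob in this general area, so the effect should be roughly that of
> booting with something like:
> 
> 	rcutorture.onoff_holdoff=30
> 
> on the kernel command line, which delays the CPU-hotplug stress until
> 30 seconds after the test starts.)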
> 
> And then hammer this a bit over the weekend, this time with Peter's
> alleged fix.  ;-)

And it survived!  ;-)

Peter, could I please have your Signed-off-by?  Or take my Tested-by if
you would prefer to send it up some other way.

							Thanx, Paul
