Date: Tue, 22 Mar 2016 14:19:39 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: "Chatre, Reinette"
Cc: Jacob Pan, Josh Triplett, Ross Green, Mathieu Desnoyers,
	John Stultz, Thomas Gleixner, Peter Zijlstra, lkml, Ingo Molnar,
	Lai Jiangshan, dipankar@in.ibm.com, Andrew Morton, rostedt,
	David Howells, Eric Dumazet, Darren Hart, Frédéric Weisbecker,
	Oleg Nesterov, pranith kumar
Subject: Re: rcu_preempt self-detected stall on CPU from 4.5-rc3, since 3.17
Message-ID: <20160322211939.GT4287@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20160226005638.GV3522@linux.vnet.ibm.com>
	<20160318210011.GA571@cloud>
	<20160318235641.GH4287@linux.vnet.ibm.com>
	<20160321092230.75f23fa9@yairi>
	<20160321172616.GU4287@linux.vnet.ibm.com>
	<0D818C7A2259ED42912C1E04120FDE26712E27C8@ORSMSX111.amr.corp.intel.com>
	<20160322174011.GM4287@linux.vnet.ibm.com>
	<0D818C7A2259ED42912C1E04120FDE26712E3DF3@ORSMSX111.amr.corp.intel.com>
In-Reply-To: <0D818C7A2259ED42912C1E04120FDE26712E3DF3@ORSMSX111.amr.corp.intel.com>

On Tue, Mar 22, 2016 at 09:04:47PM +0000, Chatre, Reinette wrote:
> Hi Paul,
>
> On 2016-03-22, Paul E. McKenney wrote:
> > On Tue, Mar 22, 2016 at 04:35:32PM +0000, Chatre, Reinette wrote:
> >> On 2016-03-21, Paul E. McKenney wrote:
> >>> On Mon, Mar 21, 2016 at 09:22:30AM -0700, Jacob Pan wrote:
> >>>> On Fri, 18 Mar 2016 16:56:41 -0700
> >>>> "Paul E. McKenney" wrote:
> >>>>> On Fri, Mar 18, 2016 at 02:00:11PM -0700, Josh Triplett wrote:
> >>>>>> On Thu, Feb 25, 2016 at 04:56:38PM -0800, Paul E. McKenney wrote:
> >>>
> >>> [ . . . ]
> >>>
> >>>>>> We're seeing a similar stall (~60 seconds) on an x86 development
> >>>>>> system here.  Any luck tracking down the cause of this?  If not,
> >>>>>> any suggestions for traces that might be helpful?
> >>>>>
> >>>>> The dmesg containing the stall, the kernel version, and the .config
> >>>>> would be helpful!  Working on a torture test specific to this bug...
> >
> > And thank you for the .config.  Your kernel version looks to be 4.5.0.
> >
> >>>> +Reinette, she has the system that can reproduce the issue.  I
> >>>> believe she is having some other problems with it at the moment.
> >>>> But the .config should be available.  Version is v4.5.
> >>>
> >>> A couple of additional questions:
> >>>
> >>> 1.	Is the test running on bare metal or virtualized?  If the
> >>>	latter, what is the host?
> >>
> >> Bare metal.
> >
> > OK, you are ahead of me.  Mine is virtualized.
> >
> >>> 2.	Does the workload involve CPU hotplug?
> >>
> >> No.
> >
> > Again, you are ahead of me.  Mine makes extremely heavy use of
> > CPU hotplug.
> >
> >>> 3.	Are you seeing things like this in dmesg?
> >>>
> >>>	"rcu_preempt kthread starved for 21033 jiffies"
> >>>	"rcu_sched kthread starved for 32103 jiffies"
> >>>	"rcu_bh kthread starved for 84031 jiffies"
> >>>
> >>>	If not, you are probably facing some other bug, and should
> >>>	proceed debugging as described in Documentation/RCU/stallwarn.txt.
> >>
> >> Below is a sample of what I see as captured with v4.5.  The kernel
> >> configuration is attached.
> >>
> >> [  135.456197] INFO: rcu_preempt detected stalls on CPUs/tasks:
> >> [  135.457729]  3-...: (0 ticks this GP) idle=722/0/0 softirq=5532/5532 fqs=0
> >> [  135.459604]  (detected by 2, t=60004 jiffies, g=2105, c=2104, q=165)
> >> [  135.461318] Task dump for CPU 3:
> >> [  135.461321] swapper/3       R  running task        0     0      1 0x00200000
> >> [  135.461325]  00000078560040e5 ffff88017846fed0 ffffffff818af2cc ffff880100000000
> >> [  135.461330]  0000000600000003 ffff880178470000 ffff880072f32200 ffffffff822dcec0
> >> [  135.461334]  ffff88017846c000 ffff88017846c000 ffff88017846fee0 ffffffff818af517
> >> [  135.461338] Call Trace:
> >> [  135.461345]  [] ? cpuidle_enter_state+0xfc/0x310
> >> [  135.461349]  [] ? cpuidle_enter+0x17/0x20
> >> [  135.461353]  [] ? call_cpuidle+0x2a/0x40
> >> [  135.461355]  [] ? cpu_startup_entry+0x28d/0x360
> >> [  135.461360]  [] ? start_secondary+0x114/0x140
> >> [  135.461365] rcu_preempt kthread starved for 60004 jiffies! g2105
> >> c2104 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
> >
> > And yes, it looks like you are seeing the same bug that I am tracing.
> >
> > The kthread is blocked on a schedule_timeout_interruptible().  Given
> > default configuration, this would have a three-jiffy timeout.
> >
> > You set CONFIG_RCU_CPU_STALL_TIMEOUT=60, which matches the 60004
> > jiffies above.  Is that value due to a distro setting or something?
> > Mainline uses CONFIG_RCU_CPU_STALL_TIMEOUT=21.
>
> Indeed ... this value originated from a Fedora configuration.

OK.  Setting it shorter might (or might not) make it reproduce more
quickly.  This can be set at boot time via rcupdate.rcu_cpu_stall_timeout,
or at compile time via CONFIG_RCU_CPU_STALL_TIMEOUT.
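
For example (a quick sketch, so please double-check the exact sysfs
path on your system), dropping back to the mainline 21-second timeout
does not require a rebuild:

	# On the kernel command line:
	rcupdate.rcu_cpu_stall_timeout=21

	# Or at run time, assuming the usual module-parameter layout:
	echo 21 > /sys/module/rcupdate/parameters/rcu_cpu_stall_timeout
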
> >> [  135.463965] rcu_preempt    S ffff88017844fd68     0     7      2 0x00000000
> >> [  135.463969]  ffff88017844fd68 ffff88017dd8cc80 ffff880177ff0000 ffff880178443b80
> >> [  135.463973]  ffff880178450000 ffff88017844fda0 ffff88017dd8cc80 ffff88017dd8cc80
> >> [  135.463977]  0000000000000003 ffff88017844fd80 ffffffff81ab031f 0000000100031504
> >> [  135.463981] Call Trace:
> >> [  135.463986]  [] schedule+0x3f/0xa0
> >> [  135.463989]  [] schedule_timeout+0x127/0x270
> >> [  135.463993]  [] ? detach_if_pending+0x120/0x120
> >> [  135.463997]  [] rcu_gp_kthread+0x6bd/0xa30
> >> [  135.464000]  [] ? wake_atomic_t_function+0x70/0x70
> >> [  135.464003]  [] ? force_qs_rnp+0x1b0/0x1b0
> >> [  135.464006]  [] kthread+0xe6/0x100
> >> [  135.464009]  [] ? kthread_worker_fn+0x190/0x190
> >> [  135.464012]  [] ret_from_fork+0x3f/0x70
> >> [  135.464015]  [] ? kthread_worker_fn+0x190/0x190
> >
> > How long does it take to reproduce this?  If it reproduces in minutes
> > or hours, could you please boot with the following on the kernel
> > command line and dump the trace buffer shortly after the stall?
> >
> > ftrace trace_event=sched_waking,sched_wakeup,sched_wake_idle_without_ipi
>
> The trace I provided above appeared after a few minutes and not again.
> On previous occasions I had to wait a few hours.  I tried running with
> the above added to the kernel command line but I have not seen the
> trace yet.  I will leave the system overnight but then may risk not
> capturing the data you need so ...

Fair enough...  Sounds like you might have the same geologic-time
problem that I do when adding tracing.  :-/
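
If the stall does show up while you are watching, dumping the buffer
by hand is just the following (assuming debugfs is mounted at the
usual /sys/kernel/debug):

	cat /sys/kernel/debug/tracing/trace > trace-after-stall.txt
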
> > If dumping manually shortly after the stall is at all non-trivial
> > (for example, if your reproduction time is many minutes or hours),
> > I can supply some patches that automate this.  Or you can pick
> > them up from -rcu:
>
> ... could you please point me to the patches you refer to? Or would
> you like me to try with the entire kernel from rcu/dev?

2dc92e2a86b9 (rcu: Awaken grace-period kthread if too long since FQS)
c3fd2095d015 (rcu: Dump ftrace buffer when kicking grace-period kthread)

There might be other dependencies, but these are the two that you need.

							Thanx, Paul

> > git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
> >
> > Branch rcu/dev has these patches (and much else besides).
> >
> > 						Thanx, Paul
> >
> > PS: In case you are curious, when I enable those tracepoints, it
> >	shows me that the timer is firing every three jiffies, as it
> >	should, but that something happens between the sched_waking
> >	and the IPI handler that should actually do the wakeup.
> >	However, adding the traces significantly slows reproduction,
> >	so I am writing a stress test specific to this bug to try to
> >	speed things up, hopefully allowing more tracing to be added
> >	while still retaining non-geologic reproduction times.
>
> Thank you very much for these details.
>
> Reinette
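
PS: If you would rather cherry-pick those two than run all of rcu/dev,
something like the following should work (untested here, and you might
trip over the dependencies mentioned above):

	git remote add rcu \
		git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
	git fetch rcu
	git cherry-pick 2dc92e2a86b9 c3fd2095d015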