From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sun, 27 Mar 2016 14:06:41 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra
Cc: Mathieu Desnoyers, "Chatre, Reinette", Jacob Pan, Josh Triplett,
	Ross Green, John Stultz, Thomas Gleixner, lkml, Ingo Molnar,
	Lai Jiangshan, dipankar@in.ibm.com, Andrew Morton, rostedt,
	David Howells, Eric Dumazet, Darren Hart, Frédéric Weisbecker,
	Oleg Nesterov, pranith kumar
Subject: Re: rcu_preempt self-detected stall on CPU from 4.5-rc3, since 3.17
Message-ID: <20160327210641.GB4287@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <0D818C7A2259ED42912C1E04120FDE26712E676E@ORSMSX111.amr.corp.intel.com>
	<20160325214623.GR4287@linux.vnet.ibm.com>
	<1370753660.36931.1458995371427.JavaMail.zimbra@efficios.com>
	<20160326152816.GW4287@linux.vnet.ibm.com>
	<20160326184940.GA23851@linux.vnet.ibm.com>
	<706246733.37102.1459030977316.JavaMail.zimbra@efficios.com>
	<20160327013456.GX4287@linux.vnet.ibm.com>
	<702204510.37291.1459086535844.JavaMail.zimbra@efficios.com>
	<20160327154018.GA4287@linux.vnet.ibm.com>
	<20160327204559.GV6356@twins.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160327204559.GV6356@twins.programming.kicks-ass.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, Mar 27, 2016 at 10:45:59PM +0200, Peter Zijlstra wrote:
> On Sun, Mar 27, 2016 at 08:40:18AM -0700, Paul E. McKenney wrote:
> > Oh, and the patch I am running with is below.  I am running x86, and so
> > some other architectures would of course need the corresponding patch
> > on that architecture.
> 
> > -#define TIF_POLLING_NRFLAG	21	/* idle is polling for TIF_NEED_RESCHED */
> > +/* #define TIF_POLLING_NRFLAG	21	   idle is polling for TIF_NEED_RESCHED */
> 
> x86 is the only arch that really uses this heavily, IIRC.
> 
> Most of the other archs need interrupts to wake up remote cores.
> 
> So what we try to do is avoid sending IPIs when the CPU is idle: for the
> remote wakeup case we use set_nr_if_polling(), which sets
> TIF_NEED_RESCHED if TIF_POLLING_NRFLAG was set.  If it wasn't, we'll send
> the IPI.  Otherwise we rely on the idle loop to do sched_ttwu_pending()
> when it breaks out of the loop due to TIF_NEED_RESCHED.
> 
> But you need hotplug for this to happen, right?

I do, but Ross Green is seeing something that looks similar, and without
CPU hotplug.

> We should not be migrating towards, or waking on, CPUs no longer present
> in cpu_active_map, and there is a rcu/sched_sync() after clearing that
> bit.  Furthermore, migration_call() does a sched_ttwu_pending() (waking
> any remaining stragglers) before we migrate all runnable tasks off the
> dying CPU.

OK, so I should instrument migration_call() if I get the repro rate up?
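As an aside for anyone following along: the handshake described above can
be modeled in a few lines of plain C.  Below is a simplified userspace
sketch of the set_nr_if_polling() idea, using C11 atomics and made-up flag
values rather than the real thread_info layout, so treat it as an
illustration of the logic and not as the kernel implementation.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define TIF_NEED_RESCHED	(1UL << 0)	/* illustrative bit values */
#define TIF_POLLING_NRFLAG	(1UL << 1)

/* Model of the idle task's thread_info flags word. */
static _Atomic unsigned long ti_flags;

/*
 * Model of set_nr_if_polling(): set NEED_RESCHED only if the idle CPU
 * is still polling (so it will notice without an interrupt).  Return
 * true if no IPI is needed, false if the caller must send one.
 */
static bool set_nr_if_polling_model(void)
{
	unsigned long val = atomic_load(&ti_flags);

	for (;;) {
		if (!(val & TIF_POLLING_NRFLAG))
			return false;		/* not polling: caller sends IPI */
		if (val & TIF_NEED_RESCHED)
			return true;		/* already set: idle loop will see it */
		if (atomic_compare_exchange_weak(&ti_flags, &val,
						 val | TIF_NEED_RESCHED))
			return true;		/* set it while still polling */
		/* val was reloaded by the failed CAS; retry */
	}
}

int main(void)
{
	atomic_store(&ti_flags, TIF_POLLING_NRFLAG);	/* remote CPU idle-polling */
	printf("IPI needed? %s\n", set_nr_if_polling_model() ? "no" : "yes");

	atomic_store(&ti_flags, 0);			/* remote CPU not polling */
	printf("IPI needed? %s\n", set_nr_if_polling_model() ? "no" : "yes");
	return 0;
}

The compare-and-swap loop is the important part: NEED_RESCHED is only set
while POLLING_NRFLAG is observably still set, which is what lets the waker
skip the IPI and trust the idle loop to notice the flag and run
sched_ttwu_pending() on its way out.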
> The other interesting case would be resched_cpu(), which uses
> set_nr_and_not_polling() to kick a remote cpu to call schedule().  It
> atomically sets TIF_NEED_RESCHED and returns whether TIF_POLLING_NRFLAG
> was not set.  If it indeed was not set, it will send an IPI.
> 
> This assumes the idle 'exit' path will do the same as the IPI does; and
> if you look at cpu_idle_loop() it does indeed do both
> preempt_fold_need_resched() and sched_ttwu_pending().
> 
> Note that one cannot rely on irq_enter()/irq_exit() being called for the
> scheduler IPI.

OK, thank you for the info!  Any specific debug actions?

							Thanx, Paul
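P.S.  For completeness, the resched_cpu() side can be modeled the same
way: set_nr_and_not_polling() is essentially an atomic fetch-or of
TIF_NEED_RESCHED that reports whether TIF_POLLING_NRFLAG was clear, in
which case the caller still has to send the IPI.  Again a simplified
userspace sketch with made-up flag values, not the kernel code itself:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define TIF_NEED_RESCHED	(1UL << 0)	/* illustrative bit values */
#define TIF_POLLING_NRFLAG	(1UL << 1)

static _Atomic unsigned long ti_flags;

/*
 * Model of set_nr_and_not_polling(): unconditionally set NEED_RESCHED,
 * and report whether the target was *not* polling, i.e. whether the
 * caller still has to send an IPI to get schedule() called.
 */
static bool set_nr_and_not_polling_model(void)
{
	unsigned long old = atomic_fetch_or(&ti_flags, TIF_NEED_RESCHED);

	return !(old & TIF_POLLING_NRFLAG);
}

int main(void)
{
	atomic_store(&ti_flags, TIF_POLLING_NRFLAG);	/* idle and polling */
	printf("send IPI? %s\n", set_nr_and_not_polling_model() ? "yes" : "no");

	atomic_store(&ti_flags, 0);			/* running, not polling */
	printf("send IPI? %s\n", set_nr_and_not_polling_model() ? "yes" : "no");
	return 0;
}

Either way TIF_NEED_RESCHED ends up set, which is why the idle 'exit'
path has to do the same work as the IPI handler, namely
preempt_fold_need_resched() plus sched_ttwu_pending(), as noted above.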