Date: Mon, 28 Mar 2016 02:23:45 +0000 (UTC)
From: Mathieu Desnoyers
To: Peter Zijlstra, "Paul E. McKenney"
Cc: "Chatre, Reinette", Jacob Pan, Josh Triplett, Ross Green, John Stultz,
    Thomas Gleixner, lkml, Ingo Molnar, Lai Jiangshan, dipankar@in.ibm.com,
    Andrew Morton, rostedt, David Howells, Eric Dumazet, Darren Hart,
    Frédéric Weisbecker, Oleg Nesterov, pranith kumar
Message-ID: <1124128277.37541.1459131825120.JavaMail.zimbra@efficios.com>
In-Reply-To: <683720290.37511.1459129458781.JavaMail.zimbra@efficios.com>
References: <20160318235641.GH4287@linux.vnet.ibm.com>
            <20160326184940.GA23851@linux.vnet.ibm.com>
            <706246733.37102.1459030977316.JavaMail.zimbra@efficios.com>
            <20160327013456.GX4287@linux.vnet.ibm.com>
            <702204510.37291.1459086535844.JavaMail.zimbra@efficios.com>
            <20160327154018.GA4287@linux.vnet.ibm.com>
            <20160327204559.GV6356@twins.programming.kicks-ass.net>
            <683720290.37511.1459129458781.JavaMail.zimbra@efficios.com>
Subject: Re: rcu_preempt self-detected stall on CPU from 4.5-rc3, since 3.17
X-Mailing-List: linux-kernel@vger.kernel.org

----- On Mar 27, 2016, at 9:44 PM, Mathieu Desnoyers mathieu.desnoyers@efficios.com wrote:

> ----- On Mar 27, 2016, at 4:45 PM, Peter Zijlstra peterz@infradead.org wrote:
>
>> On Sun, Mar 27, 2016 at 08:40:18AM -0700, Paul E. McKenney wrote:
>>> Oh, and the patch I am running with is below. I am running x86, and so
>>> some other architectures would of course need the corresponding patch
>>> on that architecture.
>>
>>> -#define TIF_POLLING_NRFLAG	21	/* idle is polling for TIF_NEED_RESCHED */
>>> +/* #define TIF_POLLING_NRFLAG	21	idle is polling for TIF_NEED_RESCHED */
>>
>> x86 is the only arch that really uses this heavily IIRC.
>>
>> Most of the other archs need interrupts to wake up remote cores.
>>
>> So what we try to do is avoid sending IPIs when the CPU is idle: for the
>> remote wakeup case we use set_nr_if_polling(), which sets
>> TIF_NEED_RESCHED if TIF_POLLING_NRFLAG was set. If it wasn't, we'll send
>> the IPI. Otherwise we rely on the idle loop to do sched_ttwu_pending()
>> when it breaks out of the loop due to TIF_NEED_RESCHED.
>>
>> But, you need hotplug for this to happen, right?
>
> My understanding is that this seems to be detection of failures to be
> awakened for a long time on idle CPUs. It therefore seems to be more
> idle-related than cpu hotplug-related. I'm not saying that there is
> no issue with hotplug, just that the investigation so far seems to
> target mostly idle systems, AFAIK without stressing hotplug.
>
>> We should not be migrating towards, or waking on, CPUs no longer present
>> in cpu_active_map, and there is a rcu/sched_sync() after clearing that
>> bit. Furthermore, migration_call() does a sched_ttwu_pending() (waking
>> any remaining stragglers) before we migrate all runnable tasks off the
>> dying CPU.
>>
>> The other interesting case would be resched_cpu(), which uses
>> set_nr_and_not_polling() to kick a remote cpu to call schedule(). It
>> atomically sets TIF_NEED_RESCHED and returns if TIF_POLLING_NRFLAG was
>> not set. If indeed not, it will send an IPI.
>>
>> This assumes the idle 'exit' path will do the same as the IPI does; and
>> if you look at cpu_idle_loop() it does indeed do both
>> preempt_fold_need_resched() and sched_ttwu_pending().
>>
>> Note that one cannot rely on irq_enter()/irq_exit() being called for the
>> scheduler IPI.
>
> Looking at commit e3baac47f0e82c4be632f4f97215bb93bf16b342:
>
> set_nr_if_polling() returns true if the ti->flags read has the
> _TIF_NEED_RESCHED bit set, which will skip the IPI.
>
> But it seems weird. The side that calls set_nr_if_polling()
> does the following:
> 1) llist_add(&p->wake_entry, &cpu_rq(cpu)->wake_list)
> 2) set_nr_if_polling(rq->idle)
> 3) (don't do smp_send_reschedule(cpu) since set_nr_if_polling() returned
>    true)
>
> The idle loop does:
> 1) __current_set_polling()
> 2) __current_clr_polling()
> 3) smp_mb__after_atomic()
> 4) sched_ttwu_pending()
> 5) schedule_preempt_disabled()
>    -> This will clear the TIF_NEED_RESCHED flag
>
> While the idle loop is in sched_ttwu_pending(), after
> it has done the llist_del_all() (thus has grabbed all the
> list entries), TIF_NEED_RESCHED is still set. If both llist_add() and
> set_nr_if_polling() are called right after the llist_del_all(), we
> will end up in a situation where we have an entry in the list, but
> there won't be any reschedule sent on the idle CPU until something
> else awakens it. On a _very_ idle CPU, this could take some time.
>
> set_nr_and_not_polling() doesn't seem to have the same issue, because
> it does not return true if TIF_NEED_RESCHED is observed as being
> already set: it really just depends on the state of the TIF_POLLING_NRFLAG
> bit.
>
> Am I missing something important?

Well, it seems that the test for _TIF_POLLING_NRFLAG in set_nr_if_polling(),
just before the test for _TIF_NEED_RESCHED, should take care of it: while
the idle loop is in sched_ttwu_pending(), TIF_POLLING_NRFLAG should be
cleared, thus causing set_nr_if_polling() to return false.

I'm slightly concerned, however, about the lack of smp_mb__after_atomic()
between the TIF_NEED_RESCHED flag being cleared within
schedule_preempt_disabled() and the TIF_POLLING_NRFLAG being set in the
following iteration of the idle loop. Indeed, clear_bit() implies neither
a compiler barrier nor a processor-level memory barrier (of course, the
processor memory barrier should not really matter on x86-64 due to the
lock prefix). Moreover, TIF_NEED_RESCHED is bit 3 on x86-64, whereas
TIF_POLLING_NRFLAG is bit 21. Those are in two different bytes of the
thread flags, and are thus set/cleared at different addresses by
clear_bit() acting on an immediate "nr" argument. If we can end up in a
state where TIF_POLLING_NRFLAG is set before TIF_NEED_RESCHED is cleared
within the idle thread, we could end up missing a needed resched IPI.
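
To make the window I have in mind easier to see, here is a small userspace
sketch (not the kernel code: the helper name, the C11 atomics, and the
initial flags state are simplified stand-ins) of the check the waker relies
on when deciding whether to skip the IPI, using the x86-64 bit numbers
mentioned above:

/*
 * Userspace model of the TIF_POLLING_NRFLAG/TIF_NEED_RESCHED handshake.
 * Not kernel code: names and flag handling are simplified stand-ins.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define _TIF_NEED_RESCHED	(1UL << 3)	/* bit 3 on x86-64 */
#define _TIF_POLLING_NRFLAG	(1UL << 21)	/* bit 21 on x86-64 */

static atomic_ulong ti_flags;			/* stands in for idle's ti->flags */

/*
 * Same decision structure as described above for set_nr_if_polling():
 * bail out (and let the caller send the IPI) if the idle task is not
 * polling, skip the IPI if NEED_RESCHED is already set, otherwise set
 * NEED_RESCHED with a cmpxchg loop.
 */
static bool model_set_nr_if_polling(void)
{
	unsigned long val = atomic_load(&ti_flags);

	for (;;) {
		if (!(val & _TIF_POLLING_NRFLAG))
			return false;	/* not polling: caller sends the IPI */
		if (val & _TIF_NEED_RESCHED)
			return true;	/* already set: caller skips the IPI */
		if (atomic_compare_exchange_weak(&ti_flags, &val,
						 val | _TIF_NEED_RESCHED))
			return true;	/* we set it: the polling idle loop will notice */
	}
}

int main(void)
{
	/*
	 * The state I am worried about: the idle thread has already set
	 * TIF_POLLING_NRFLAG for the next iteration, but its earlier
	 * clear_bit(TIF_NEED_RESCHED) -- done without a barrier, on a
	 * different byte of the flags word -- is not yet visible to the
	 * remote waker.
	 */
	atomic_store(&ti_flags, _TIF_POLLING_NRFLAG | _TIF_NEED_RESCHED);

	/* Waker: it has just queued a task on rq->wake_list. */
	if (model_set_nr_if_polling())
		printf("IPI skipped: the wake_list entry waits for something else to wake the CPU\n");
	else
		printf("IPI sent\n");

	return 0;
}

If that combination of flags can really be observed by the waker, it takes
the "skip the IPI" path on the strength of a NEED_RESCHED bit that the idle
thread has already (or is about to have) cleared, and the freshly queued
wake_list entry is left unprocessed.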

Another question: why are set_nr_if_polling and set_nr_and_not_polling
two different implementations? Could they be combined?

Thanks,

Mathieu

>
> Thanks,
>
> Mathieu
>
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> http://www.efficios.com

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com