Date: Fri, 3 Apr 2015 07:43:21 +0200
From: Ingo Molnar
To: Chris J Arges
Cc: Linus Torvalds, Rafael David Tinoco, Peter Anvin, Jiang Liu,
    Peter Zijlstra, LKML, Jens Axboe, Frederic Weisbecker, Gema Gomez,
    the arch/x86 maintainers
Subject: [PATCH] smp/call: Detect stuck CSD locks
Message-ID: <20150403054320.GA9863@gmail.com>
References: <20150331222327.GA12512@canonical.com>
 <20150401124336.GB12841@gmail.com>
 <20150401161047.GD12730@canonical.com>
 <551C6A48.9060805@canonical.com>
 <20150402182607.GA8896@gmail.com>
 <551D8FAF.5070805@canonical.com>
 <20150402190725.GA10570@gmail.com>
 <551DB0E2.1020607@canonical.com>
In-Reply-To: <551DB0E2.1020607@canonical.com>

* Chris J Arges wrote:

> Ingo,
>
> I think tracking IPI calls from 'generic_exec_single' would make a lot
> of sense. When you say poll for completion, do you mean a loop after
> 'arch_send_call_function_single_ipi' in kernel/smp.c? My main concern
> would be not to alter the timings too much, so that we can still
> reproduce the original problem.
>
> Another approach: if we want to check for non-ACKed IPIs, a
> possibility would be to add a timestamp field to 'struct
> call_single_data' and record jiffies when the IPI gets sent. Then have
> a per-cpu kthread periodically check the 'call_single_queue' percpu
> list for entries where (jiffies - timestamp) > THRESHOLD. When we hit
> that condition, print the stale entry in call_single_queue, print a
> backtrace, then re-send the IPI.
>
> Let me know what makes the most sense to hack on.

Well, the thing is, putting this polling into an async kernel thread
loses a lot of the data context, and possibly the very right to
reference the CSD, that we'd need to re-send an IPI. And if that
context is not lost, we might as well send the IPI from the original,
still-looping context - which is a lot simpler as well.

( ... and on a deadlocked non-CONFIG_PREEMPT kernel a kernel thread
  won't run at all, so it wouldn't be able to detect deadlocks. )

So I'd really suggest instrumenting the existing CSD polling: it is
already a slowpath, so the extra checks won't impact timing much.

I'd suggest the following, rather unintrusive approach:

 - first, take a jiffies timestamp and generate a warning message if
   more than 10 seconds have elapsed after sending the IPI without
   having heard back from it;

 - then, re-send the IPI.

This means adding a bit of control flow to csd_lock_wait(). Something
like the patch below, which implements both steps:

 - It will detect and print CSD deadlocks both in the single- and
   multi-function call APIs, and in the pre-IPI CSD lock wait case as
   well.

 - It will re-send an IPI if possible.

 - It generates various messages in the deadlock case that should give
   us some idea about how the deadlock played out and whether it got
   resolved.

The timeout is set to 10 seconds; that should be plenty even in a
virtualization environment.
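To make the new control flow easier to follow before reading the full
diff, here is the heart of the csd_lock_wait() change in condensed form
(a simplified sketch of the patch below, with declarations and the
printk details elided - not a drop-in replacement):

	static void csd_lock_wait(struct call_single_data *csd, int cpu)
	{
		u64 ts0 = jiffies_to_msecs(jiffies);
		int bug_id = 0;

		while (csd->flags & CSD_FLAG_LOCK) {
			u64 ts_delta = jiffies_to_msecs(jiffies) - ts0;

			if (unlikely(ts_delta >= CSD_LOCK_TIMEOUT)) {
				bug_id = csd_bug_count++;
				ts0 += ts_delta;	/* re-arm the timeout */

				/* Warn, then re-send the IPI if the target is known: */
				if (cpu >= 0)
					arch_send_call_function_single_ipi(cpu);
				dump_stack();
			}
			cpu_relax();
		}
	}

Callers that wait for the CSD *before* sending the IPI pass cpu == -1
(there is nothing to re-send to yet); callers that wait *after* sending
pass the target CPU, which is what makes the re-send possible.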
Only very lightly tested under a simple lkvm bootup: I verified that
the boilerplate message is displayed and that no false-positive
messages are generated under light load - but I haven't checked whether
the deadlock detection itself works.

Thanks,

	Ingo

---
 kernel/smp.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 46 insertions(+), 5 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index f38a1e692259..e0eec1ab3ef2 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -98,22 +98,63 @@ void __init call_function_init(void)
 	register_cpu_notifier(&hotplug_cfd_notifier);
 }
 
+/* Locking timeout in ms: */
+#define CSD_LOCK_TIMEOUT (10*1000ULL)
+
+/* Print this ID in every printk line we output, to be able to tell the messages apart: */
+static int csd_bug_count;
+
 /*
  * csd_lock/csd_unlock used to serialize access to per-cpu csd resources
  *
  * For non-synchronous ipi calls the csd can still be in use by the
  * previous function call. For multi-cpu calls its even more interesting
  * as we'll have to ensure no other cpu is observing our csd.
+ *
+ * ( The overhead of deadlock detection is not a big problem, as this is
+ *   a cpu_relax() loop that is actively wasting CPU cycles to poll for
+ *   completion anyway. )
  */
-static void csd_lock_wait(struct call_single_data *csd)
+static void csd_lock_wait(struct call_single_data *csd, int cpu)
 {
-	while (csd->flags & CSD_FLAG_LOCK)
+	int bug_id = 0;
+	u64 ts0, ts1, ts_delta;
+
+	ts0 = jiffies_to_msecs(jiffies);
+
+	if (unlikely(!csd_bug_count)) {
+		csd_bug_count++;
+		printk("csd: CSD deadlock debugging initiated!\n");
+	}
+
+	while (csd->flags & CSD_FLAG_LOCK) {
+		ts1 = jiffies_to_msecs(jiffies);
+
+		ts_delta = ts1 - ts0;
+		if (unlikely(ts_delta >= CSD_LOCK_TIMEOUT)) { /* Uh oh, it took too long. Why? */
+
+			bug_id = csd_bug_count;
+			csd_bug_count++;
+
+			ts0 = ts1; /* Re-start the timeout detection */
+
+			printk("csd: Detected non-responsive CSD lock (#%d) on CPU#%02d, waiting %Ld.%03Ld secs for CPU#%02d\n",
+			       bug_id, raw_smp_processor_id(), ts_delta/1000ULL, ts_delta % 1000ULL, cpu);
+			if (cpu >= 0) {
+				printk("csd: Re-sending CSD lock (#%d) IPI from CPU#%02d to CPU#%02d\n", bug_id, raw_smp_processor_id(), cpu);
+				arch_send_call_function_single_ipi(cpu);
+			}
+			dump_stack();
+		}
 		cpu_relax();
+	}
+	if (unlikely(bug_id))
+		printk("csd: CSD lock (#%d) got unstuck on CPU#%02d, CPU#%02d released the lock after all. Phew!\n", bug_id, raw_smp_processor_id(), cpu);
 }
 
 static void csd_lock(struct call_single_data *csd)
 {
-	csd_lock_wait(csd);
+	csd_lock_wait(csd, -1);
 	csd->flags |= CSD_FLAG_LOCK;
 
 	/*
@@ -191,7 +232,7 @@ static int generic_exec_single(int cpu, struct call_single_data *csd,
 		arch_send_call_function_single_ipi(cpu);
 
 	if (wait)
-		csd_lock_wait(csd);
+		csd_lock_wait(csd, cpu);
 
 	return 0;
 }
@@ -446,7 +487,7 @@ void smp_call_function_many(const struct cpumask *mask,
 			struct call_single_data *csd;
 
 			csd = per_cpu_ptr(cfd->csd, cpu);
-			csd_lock_wait(csd);
+			csd_lock_wait(csd, cpu);
 		}
 	}
 }
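If it helps to see the timeout-and-resend pattern in isolation, here is
a small user-space analogue (purely illustrative: the names, the short
100 ms timeout and the pthread "releaser" are inventions of this sketch
and have no kernel counterpart):

	/* Build with: cc -pthread -o csd-demo csd-demo.c */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>

	#define LOCK_TIMEOUT_MS 100ULL		/* stands in for CSD_LOCK_TIMEOUT */

	static atomic_int locked = 1;		/* stands in for CSD_FLAG_LOCK */
	static int bug_count = 1;		/* stands in for csd_bug_count */

	static unsigned long long now_ms(void)
	{
		struct timespec ts;

		clock_gettime(CLOCK_MONOTONIC, &ts);
		return ts.tv_sec * 1000ULL + ts.tv_nsec / 1000000ULL;
	}

	/* The "other CPU": releases the lock only after 350 ms: */
	static void *releaser(void *arg)
	{
		(void)arg;
		usleep(350 * 1000);
		atomic_store(&locked, 0);	/* stands in for csd_unlock() */
		return NULL;
	}

	int main(void)
	{
		unsigned long long ts0, ts1, ts_delta;
		int bug_id = 0;
		pthread_t t;

		pthread_create(&t, NULL, releaser, NULL);

		ts0 = now_ms();
		while (atomic_load(&locked)) {	/* the csd_lock_wait() poll loop */
			ts1 = now_ms();
			ts_delta = ts1 - ts0;
			if (ts_delta >= LOCK_TIMEOUT_MS) {
				bug_id = bug_count++;
				ts0 = ts1;	/* re-start the timeout detection */
				printf("demo: non-responsive lock (#%d), waited %llu ms - re-sending wakeup\n",
				       bug_id, ts_delta);
				/* In the kernel this is where the IPI would be re-sent. */
			}
		}
		if (bug_id)
			printf("demo: lock (#%d) got unstuck after all\n", bug_id);

		pthread_join(t, NULL);
		return 0;
	}

Running it shows the warning fire a few times before the releaser
thread clears the flag, mirroring the "got unstuck ... Phew!" case of
the patch above.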