From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1032838AbbKEOFG (ORCPT );
	Thu, 5 Nov 2015 09:05:06 -0500
Received: from e35.co.us.ibm.com ([32.97.110.153]:58725 "EHLO e35.co.us.ibm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1030850AbbKEOFE (ORCPT );
	Thu, 5 Nov 2015 09:05:04 -0500
X-IBM-Helo: d03dlp03.boulder.ibm.com
X-IBM-MailFrom: paulmck@linux.vnet.ibm.com
X-IBM-RcptTo: linux-kernel@vger.kernel.org
Date: Thu, 5 Nov 2015 06:04:46 -0800
From: "Paul E. McKenney" 
To: Andi Kleen 
Cc: Peter Zijlstra , Dave Jones ,
	Linux Kernel , Ingo Molnar ,
	Stephane Eranian 
Subject: Re: perf related lockdep bug
Message-ID: <20151105140446.GD29027@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20151104051717.GA6098@codemonkey.org.uk>
 <20151104102151.GG17308@twins.programming.kicks-ass.net>
 <20151104102800.GZ11639@twins.programming.kicks-ass.net>
 <20151104105010.GA11639@twins.programming.kicks-ass.net>
 <20151104134838.GR29027@linux.vnet.ibm.com>
 <20151104205836.GZ3533@two.firstfloor.org>
 <20151105005528.GX29027@linux.vnet.ibm.com>
 <20151105024607.GD3533@two.firstfloor.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20151105024607.GD3533@two.firstfloor.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 15110514-0013-0000-0000-000019FA0B4C
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Nov 05, 2015 at 03:46:07AM +0100, Andi Kleen wrote:
> On Wed, Nov 04, 2015 at 04:55:28PM -0800, Paul E. McKenney wrote:
> > On Wed, Nov 04, 2015 at 09:58:36PM +0100, Andi Kleen wrote:
> > > 
> > > I tested my perf stress workload with the patch applied on 4.3,
> > > unfortunately got a hang again :-/
> > 
> > Any diagnostics, sysrq-T output, or whatever?
> 
> The system was dead in the screen saver, no serial console.
> I restarted it and may have some more information tomorrow.
> 
> The previous hangs were usually timeout on smp_call_function,
> so one CPU disappeared.
> 
> I'll try the extra hunk you pointed out.

Thank you, looking forward to seeing what you get from this.

							Thanx, Paul