Date: Thu, 9 Apr 2015 14:41:10 +0200
From: Gilles Chanteperdrix
Message-ID: <20150409124110.GY20752@hermes.click-hack.org>
References: <20150313163431.GE1497@hermes.click-hack.org> <550319B3.1050902@siemens.com> <20150313171211.GH1497@hermes.click-hack.org> <20150402191555.GK31175@hermes.click-hack.org> <20150402204139.GL31175@hermes.click-hack.org> <55264097.2010203@siemens.com> <5526430E.8030808@siemens.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5526430E.8030808@siemens.com>
Subject: Re: [Xenomai] xeno3_rc3 - Watchdog detected hard LOCKUP
List-Id: Discussions about the Xenomai project
To: Jan Kiszka
Cc: "xenomai@xenomai.org"

On Thu, Apr 09, 2015 at 11:14:54AM +0200, Jan Kiszka wrote:
> On 2015-04-09 11:04, Jan Kiszka wrote:
> > On 2015-04-08 23:02, Jeroen Van den Keybus wrote:
> >> It took a while, but a hard lockup occurred on Xenomai 3.0-rc4 with
> >> Linux 3.16.7 running dohell. This time, I believe I have a trace of
> >> the locked up CPU. It's listed below and for completeness, the first
> >> part of the dmesg log is attached as well.
> >>
> >> [419215.683857] Kernel panic - not syncing: Watchdog detected hard
> >> LOCKUP on cpu 3
> >> [419215.683886] CPU: 3 PID: 18835 Comm: dohell Not tainted 3.16.7-cobalt #1
> >> [419215.683903] Hardware name: Supermicro X10SAE/X10SAE, BIOS 2.0a 05/09/2014
> >> [419215.683920] 0000000000000000 ffff88021fb86c38 ffffffff8175761d
> >> ffffffff81a8a1e8
> >> [419215.683945] ffff88021fb86cb0 ffffffff81752c0e 0000000000000010
> >> ffff88021fb86cc0
> >> [419215.683968] ffff88021fb86c60 0000000000000000 0000000000000003
> >> 000000000001999e
> >> [419215.684095] Call Trace:
> >> [419215.684103] [] dump_stack+0x45/0x56
> >> [419215.684125] [] panic+0xd8/0x20a
> >> [419215.684141] [] watchdog_overflow_callback+0xc2/0xd0
> >> [419215.684158] [] __perf_event_overflow+0x8d/0x230
> >> [419215.684174] [] perf_event_overflow+0x14/0x20
> >> [419215.684190] [] intel_pmu_handle_irq+0x1e6/0x400
> >> [419215.684259] [] ? unmap_kernel_range_noflush+0x11/0x20
> >> [419215.684277] [] perf_event_nmi_handler+0x2b/0x50
> >> [419215.684293] [] nmi_handle+0x88/0x120
> >> [419215.684308] [] default_do_nmi+0xce/0x130
> >> [419215.684373] [] do_nmi+0xd0/0xf0
> >> [419215.684387] [] end_repeat_nmi+0x1e/0x2e
> >> [419215.684402] [] ? _raw_spin_lock+0x2a/0x40
> >> [419215.684417] [] ? _raw_spin_lock+0x2a/0x40
> >> [419215.684431] [] ? _raw_spin_lock+0x2a/0x40
> >> [419215.684445] <> []
> >> __ipipe_pin_range_globally+0x7c/0x2b0
> >> [419215.684468] [] ioremap_page_range+0x226/0x300
> >> [419215.684485] [] ? xnintr_core_clock_handler+0x2ea/0x310
> >> [419215.684553] [] ? update_curr+0x80/0x180
> >> [419215.684568] [] ghes_copy_tofrom_phys+0x1e9/0x200
> >
> > OK, maybe it is related to ACPI APEI, maybe that is just triggering an
> > I-pipe bug. But could you try to disable that feature and see if the
> > issue still appears?
> >
> > I'll meanwhile dig deeper and try to understand what could cause a
> > lockup.
>
> Oh, the bug is obvious (and would have been reported when turning on
> CONFIG_PROVE_LOCKING): We are calling __ipipe_pin_range_globally from
> IRQ context here, but that only uses spin_lock.

ipipe_pin_range_globally is called in the vmalloc and ioremap paths. Is
there really any code that calls vmalloc or ioremap from IRQ context? I
doubt that very much.

-- 
Gilles.