From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail-pl0-f67.google.com ([209.85.160.67]:39588 "EHLO
	mail-pl0-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750903AbeFGUMF (ORCPT );
	Thu, 7 Jun 2018 16:12:05 -0400
Received: by mail-pl0-f67.google.com with SMTP id f1-v6so6819390plt.6
	for ; Thu, 07 Jun 2018 13:12:05 -0700 (PDT)
From: Joel Fernandes
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", Steven Rostedt, Peter Zijlstra,
	Ingo Molnar, Linus Torvalds, Mathieu Desnoyers, Tom Zanussi,
	Namhyung Kim, Thomas Gleixner, Boqun Feng, Paul McKenney,
	Masami Hiramatsu, Todd Kjos, Erick Reyes, Julia Cartwright,
	Byungchul Park, stable@vger.kernel.org
Subject: [PATCH RESEND] softirq: reorder trace_softirqs_on to prevent lockdep splat
Date: Thu, 7 Jun 2018 13:11:43 -0700
Message-Id: <20180607201143.247775-1-joel@joelfernandes.org>
Sender: stable-owner@vger.kernel.org
List-ID:

From: "Joel Fernandes (Google)"

I'm able to reproduce a lockdep splat with the following config options
set: CONFIG_PROVE_LOCKING=y, CONFIG_DEBUG_LOCK_ALLOC=y and
CONFIG_PREEMPTIRQ_EVENTS=y

$ echo 1 > /d/tracing/events/preemptirq/preempt_enable/enable

[   26.112609] DEBUG_LOCKS_WARN_ON(current->softirqs_enabled)
[   26.112636] WARNING: CPU: 0 PID: 118 at kernel/locking/lockdep.c:3854
[...]
[   26.144229] Call Trace:
[   26.144926]
[   26.145506]  lock_acquire+0x55/0x1b0
[   26.146499]  ? __do_softirq+0x46f/0x4d9
[   26.147571]  ? __do_softirq+0x46f/0x4d9
[   26.148646]  trace_preempt_on+0x8f/0x240
[   26.149744]  ? trace_preempt_on+0x4d/0x240
[   26.150862]  ? __do_softirq+0x46f/0x4d9
[   26.151930]  preempt_count_sub+0x18a/0x1a0
[   26.152985]  __do_softirq+0x46f/0x4d9
[   26.153937]  irq_exit+0x68/0xe0
[   26.154755]  smp_apic_timer_interrupt+0x271/0x280
[   26.156056]  apic_timer_interrupt+0xf/0x20
[   26.157105]

The issue was this:

	preempt_count = 1 << SOFTIRQ_SHIFT

	__local_bh_enable(cnt = 1 << SOFTIRQ_SHIFT) {
		if (softirq_count() == (cnt & SOFTIRQ_MASK)) {
			trace_softirqs_on() {
				current->softirqs_enabled = 1;
			}
		}
		preempt_count_sub(cnt) {
			trace_preempt_on() {
				tracepoint() {
					rcu_read_lock_sched() {
						// jumps into lockdep

At this point preempt_count still shows softirqs as disabled, but
current->softirqs_enabled is already true, and we get the splat above.

Cc: Steven Rostedt
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Linus Torvalds
Cc: Mathieu Desnoyers
Cc: Tom Zanussi
Cc: Namhyung Kim
Cc: Thomas Gleixner
Cc: Boqun Feng
Cc: Paul McKenney
Cc: Masami Hiramatsu
Cc: Todd Kjos
Cc: Erick Reyes
Cc: Julia Cartwright
Cc: Byungchul Park
Cc: stable@vger.kernel.org
Reviewed-by: Steven Rostedt (VMware)
Fixes: d59158162e032 ("tracing: Add support for preempt and irq enable/disable events")
Signed-off-by: Joel Fernandes (Google)
---
 kernel/softirq.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 177de3640c78..8a040bcaa033 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -139,9 +139,13 @@ static void __local_bh_enable(unsigned int cnt)
 {
 	lockdep_assert_irqs_disabled();
 
+	if (preempt_count() == cnt)
+		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
+
 	if (softirq_count() == (cnt & SOFTIRQ_MASK))
 		trace_softirqs_on(_RET_IP_);
-	preempt_count_sub(cnt);
+
+	__preempt_count_sub(cnt);
 }
 
 /*
-- 
2.17.1.1185.g55be947832-goog
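
For reference, below is a sketch of how __local_bh_enable() reads once
the patch above is applied, reconstructed from the hunk itself (code
outside the hunk is assumed unchanged, and the comments are annotation
rather than part of the patch). The preempt-enable tracepoint now fires
before trace_softirqs_on(), while lockdep still sees softirqs as
disabled on both counts, and the preempt count is then dropped with
__preempt_count_sub(), which does not call back into the tracer:

	static void __local_bh_enable(unsigned int cnt)
	{
		lockdep_assert_irqs_disabled();

		/* Emit the preempt-enable trace event before lockdep is
		 * told that softirqs are on, so the tracepoint's
		 * rcu_read_lock_sched() sees a consistent state.
		 */
		if (preempt_count() == cnt)
			trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());

		if (softirq_count() == (cnt & SOFTIRQ_MASK))
			trace_softirqs_on(_RET_IP_);

		/* Raw decrement: unlike preempt_count_sub(), this does not
		 * invoke the preempt tracer again.
		 */
		__preempt_count_sub(cnt);
	}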