From: Douglas Anderson <dianders@chromium.org>
To: Petr Mladek, Andrew Morton
Cc: Sumit Garg, Mark Rutland, Matthias Kaehlcke, Stephane Eranian,
    Stephen Boyd, ricardo.neri@intel.com, Tzung-Bi Shih, Lecopzer Chen,
    kgdb-bugreport@lists.sourceforge.net, Masayoshi Mizuma, Guenter Roeck,
    Pingfan Liu, Andi Kleen, Ian Rogers, linux-arm-kernel@lists.infradead.org,
    linux-perf-users@vger.kernel.org, ito-yuichi@fujitsu.com, Randy Dunlap,
    Chen-Yu Tsai, christophe.leroy@csgroup.eu, davem@davemloft.net,
    sparclinux@vger.kernel.org, mpe@ellerman.id.au, Will Deacon,
    ravi.v.shankar@intel.com, npiggin@gmail.com, linuxppc-dev@lists.ozlabs.org,
    Marc Zyngier, Catalin Marinas, Daniel Thompson, Douglas Anderson
Subject: [PATCH v4 07/17] watchdog/hardlockup: Move perf hardlockup checking/panic to common watchdog.c
Date: Thu, 4 May 2023 15:13:39 -0700
Message-ID: <20230504151100.v4.7.Id4133d3183e798122dc3b6205e7852601f289071@changeid>
In-Reply-To: <20230504221349.1535669-1-dianders@chromium.org>
References: <20230504221349.1535669-1-dianders@chromium.org>

The perf hardlockup detector works by looking at interrupt counts and
seeing if they change from run to run. The interrupt counts are managed
by the common watchdog code via its watchdog_timer_fn().

Currently the API between the perf detector and the common code is a
function: is_hardlockup(). When the hard lockup detector sees that
function return true, it handles printing out debug info and inducing a
panic if necessary.

Let's change the API a little bit in preparation for the buddy
hardlockup detector. The buddy hardlockup detector wants to print
nearly the same debug info and have nearly the same panic behavior.
That means we want to move all that code to the common file. For now,
the code in the common file will only be there if the perf hardlockup
detector is enabled, but eventually it will be selected by a common
config.

Right now, this _just_ moves the code from the perf detector file to
the common file and changes the names. It doesn't make the changes that
the buddy hardlockup detector will need and doesn't do any style
cleanups. A future patch will do cleanup to make it more obvious what
changed.

With the above, we no longer have any callers of is_hardlockup()
outside of the "watchdog.c" file, so we can remove it from the header,
make it static, move it to the same "#ifdef" block as our new
watchdog_hardlockup_check(), and rename it to make it obvious it's just
for hardlockup detectors.

While doing this, note that the existing code counted/checked
"hrtimer_interrupts" even when no hardlockup detector was configured.
We don't need that, so move all the "hrtimer_interrupts" counting so it
is only compiled in when the perf hardlockup detector is configured.

This change is expected to be a no-op.

Signed-off-by: Douglas Anderson <dianders@chromium.org>
---
Changes in v4:
- ("Move perf hardlockup checking/panic ...") new for v4.
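
Note (illustration only, not part of the patch): the counter-comparison
idea described above can be sketched in a few lines of self-contained
userspace C. The names here (timer_tick(), check_hardlockup()) are
hypothetical stand-ins for the per-CPU counters and helpers in the
kernel; the real implementation is in the diff below.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the per-CPU counters used by the watchdog. */
static unsigned long hrtimer_interrupts;
static unsigned long hrtimer_interrupts_saved;

/* Mock timer path: proof that timer interrupts are still firing. */
static void timer_tick(void)
{
	hrtimer_interrupts++;
}

/* Mock NMI/perf path: no progress since the last check means lockup. */
static bool check_hardlockup(void)
{
	unsigned long hrint = hrtimer_interrupts;

	if (hrtimer_interrupts_saved == hrint)
		return true;

	hrtimer_interrupts_saved = hrint;
	return false;
}

int main(void)
{
	timer_tick();
	printf("first check:  %s\n", check_hardlockup() ? "LOCKUP" : "ok");
	/* No timer_tick() in between, so the counter did not move. */
	printf("second check: %s\n", check_hardlockup() ? "LOCKUP" : "ok");
	return 0;
}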
 include/linux/nmi.h    |  5 ++-
 kernel/watchdog.c      | 92 +++++++++++++++++++++++++++++++++---------
 kernel/watchdog_perf.c | 42 +------------------
 3 files changed, 78 insertions(+), 61 deletions(-)

diff --git a/include/linux/nmi.h b/include/linux/nmi.h
index 35d09d70f394..c6cb9bc5dc80 100644
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -15,7 +15,6 @@ void lockup_detector_init(void);
 void lockup_detector_soft_poweroff(void);
 void lockup_detector_cleanup(void);
 
-bool is_hardlockup(void);
 
 extern int watchdog_user_enabled;
 extern int nmi_watchdog_user_enabled;
@@ -88,6 +87,10 @@ extern unsigned int hardlockup_panic;
 static inline void hardlockup_detector_disable(void) {}
 #endif
 
+#if defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
+void watchdog_hardlockup_check(struct pt_regs *regs);
+#endif
+
 #if defined(CONFIG_HAVE_NMI_WATCHDOG) || defined(CONFIG_HARDLOCKUP_DETECTOR)
 # define NMI_WATCHDOG_SYSCTL_PERM	0644
 #else
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index c705a18b26bf..2d319cdf64b9 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -85,6 +85,78 @@ __setup("nmi_watchdog=", hardlockup_panic_setup);
 
 #endif /* CONFIG_HARDLOCKUP_DETECTOR */
 
+#if defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
+
+static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts);
+static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
+static DEFINE_PER_CPU(bool, hard_watchdog_warn);
+static unsigned long hardlockup_allcpu_dumped;
+
+static bool watchdog_hardlockup_is_lockedup(void)
+{
+	unsigned long hrint = __this_cpu_read(hrtimer_interrupts);
+
+	if (__this_cpu_read(hrtimer_interrupts_saved) == hrint)
+		return true;
+
+	__this_cpu_write(hrtimer_interrupts_saved, hrint);
+	return false;
+}
+
+static void watchdog_hardlockup_interrupt_count(void)
+{
+	__this_cpu_inc(hrtimer_interrupts);
+}
+
+void watchdog_hardlockup_check(struct pt_regs *regs)
+{
+	/* check for a hardlockup
+	 * This is done by making sure our timer interrupt
+	 * is incrementing.  The timer interrupt should have
+	 * fired multiple times before we overflow'd.  If it hasn't
+	 * then this is a good indication the cpu is stuck
+	 */
+	if (watchdog_hardlockup_is_lockedup()) {
+		int this_cpu = smp_processor_id();
+
+		/* only print hardlockups once */
+		if (__this_cpu_read(hard_watchdog_warn) == true)
+			return;
+
+		pr_emerg("Watchdog detected hard LOCKUP on cpu %d\n",
+			 this_cpu);
+		print_modules();
+		print_irqtrace_events(current);
+		if (regs)
+			show_regs(regs);
+		else
+			dump_stack();
+
+		/*
+		 * Perform all-CPU dump only once to avoid multiple hardlockups
+		 * generating interleaving traces
+		 */
+		if (sysctl_hardlockup_all_cpu_backtrace &&
+		    !test_and_set_bit(0, &hardlockup_allcpu_dumped))
+			trigger_allbutself_cpu_backtrace();
+
+		if (hardlockup_panic)
+			nmi_panic(regs, "Hard LOCKUP");
+
+		__this_cpu_write(hard_watchdog_warn, true);
+		return;
+	}
+
+	__this_cpu_write(hard_watchdog_warn, false);
+	return;
+}
+
+#else /* CONFIG_HARDLOCKUP_DETECTOR_PERF */
+
+static inline void watchdog_hardlockup_interrupt_count(void) { }
+
+#endif /* !CONFIG_HARDLOCKUP_DETECTOR_PERF */
+
 /*
  * These functions can be overridden if an architecture implements its
  * own hardlockup detector.
@@ -176,8 +248,6 @@ static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);
 static DEFINE_PER_CPU(unsigned long, watchdog_report_ts);
 static DEFINE_PER_CPU(struct hrtimer, watchdog_hrtimer);
 static DEFINE_PER_CPU(bool, softlockup_touch_sync);
-static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts);
-static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
 static unsigned long soft_lockup_nmi_warn;
 
 static int __init nowatchdog_setup(char *str)
@@ -312,22 +382,6 @@ static int is_softlockup(unsigned long touch_ts,
 }
 
 /* watchdog detector functions */
-bool is_hardlockup(void)
-{
-	unsigned long hrint = __this_cpu_read(hrtimer_interrupts);
-
-	if (__this_cpu_read(hrtimer_interrupts_saved) == hrint)
-		return true;
-
-	__this_cpu_write(hrtimer_interrupts_saved, hrint);
-	return false;
-}
-
-static void watchdog_interrupt_count(void)
-{
-	__this_cpu_inc(hrtimer_interrupts);
-}
-
 static DEFINE_PER_CPU(struct completion, softlockup_completion);
 static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
 
@@ -359,7 +413,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
 		return HRTIMER_NORESTART;
 
 	/* kick the hardlockup detector */
-	watchdog_interrupt_count();
+	watchdog_hardlockup_interrupt_count();
 
 	/* kick the softlockup detector */
 	if (completion_done(this_cpu_ptr(&softlockup_completion))) {
diff --git a/kernel/watchdog_perf.c b/kernel/watchdog_perf.c
index c3d8ceb149da..5f3651b87ee7 100644
--- a/kernel/watchdog_perf.c
+++ b/kernel/watchdog_perf.c
@@ -20,13 +20,11 @@
 #include
 #include
 
-static DEFINE_PER_CPU(bool, hard_watchdog_warn);
 static DEFINE_PER_CPU(bool, watchdog_nmi_touch);
 static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
 static DEFINE_PER_CPU(struct perf_event *, dead_event);
 static struct cpumask dead_events_mask;
 
-static unsigned long hardlockup_allcpu_dumped;
 static atomic_t watchdog_cpus = ATOMIC_INIT(0);
 
 notrace void arch_touch_nmi_watchdog(void)
@@ -122,45 +120,7 @@ static void watchdog_overflow_callback(struct perf_event *event,
 	if (!watchdog_check_timestamp())
 		return;
 
-	/* check for a hardlockup
-	 * This is done by making sure our timer interrupt
-	 * is incrementing.  The timer interrupt should have
-	 * fired multiple times before we overflow'd.  If it hasn't
-	 * then this is a good indication the cpu is stuck
-	 */
-	if (is_hardlockup()) {
-		int this_cpu = smp_processor_id();
-
-		/* only print hardlockups once */
-		if (__this_cpu_read(hard_watchdog_warn) == true)
-			return;
-
-		pr_emerg("Watchdog detected hard LOCKUP on cpu %d\n",
-			 this_cpu);
-		print_modules();
-		print_irqtrace_events(current);
-		if (regs)
-			show_regs(regs);
-		else
-			dump_stack();
-
-		/*
-		 * Perform all-CPU dump only once to avoid multiple hardlockups
-		 * generating interleaving traces
-		 */
-		if (sysctl_hardlockup_all_cpu_backtrace &&
-		    !test_and_set_bit(0, &hardlockup_allcpu_dumped))
-			trigger_allbutself_cpu_backtrace();
-
-		if (hardlockup_panic)
-			nmi_panic(regs, "Hard LOCKUP");
-
-		__this_cpu_write(hard_watchdog_warn, true);
-		return;
-	}
-
-	__this_cpu_write(hard_watchdog_warn, false);
-	return;
+	watchdog_hardlockup_check(regs);
 }
 
 static int hardlockup_detector_event_create(void)
-- 
2.40.1.521.gf1e218fcd8-goog
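
Editor's sketch (hedged, not part of the patch): with the check moved to
common code and watchdog_hardlockup_check() now declared in <linux/nmi.h>,
a hardlockup detector backend only needs a call like the one below from its
own detection context. The perf backend's watchdog_overflow_callback() in
the diff above does exactly this; the buddy detector mentioned in the commit
message is expected to do the same in a later patch. The hook name here is
hypothetical.

#include <linux/nmi.h>

/*
 * Hypothetical backend hook; watchdog_hardlockup_check() is the only
 * symbol taken from this patch. Called from the backend's own detection
 * context (e.g. an NMI) with the interrupted registers, or NULL.
 */
static void example_backend_check(struct pt_regs *regs)
{
	/* Backend-specific filtering (e.g. timestamp checks) would go here. */
	watchdog_hardlockup_check(regs);
}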