From: Ricardo Neri
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov
Cc: "H. Peter Anvin", Ashok Raj, Andi Kleen, Tony Luck, Nicholas Piggin,
	"Peter Zijlstra (Intel)", Andrew Morton, Stephane Eranian,
	Suravee Suthikulpanit, "Ravi V. Shankar", Ricardo Neri,
	x86@kernel.org, linux-kernel@vger.kernel.org, Ricardo Neri,
	Andi Kleen
Subject: [RFC PATCH v5 06/16] x86/nmi: Add an NMI_WATCHDOG NMI handler category
Date: Tue, 4 May 2021 12:05:16 -0700
Message-Id: <20210504190526.22347-7-ricardo.neri-calderon@linux.intel.com>
In-Reply-To: <20210504190526.22347-1-ricardo.neri-calderon@linux.intel.com>
References: <20210504190526.22347-1-ricardo.neri-calderon@linux.intel.com>
X-Mailer: git-send-email 2.17.1

Add NMI_WATCHDOG as a new category of NMI handler. This new category is
to be used with the HPET-based hardlockup detector. The detector does
not have a direct way of checking whether the HPET timer is the source
of the NMI. Instead, it estimates this indirectly using the time-stamp
counter. Therefore, false positives are possible if another NMI occurs
within the estimated time window. For this reason, the handler of the
detector should be called after all the NMI_LOCAL handlers. A simple
way of achieving this is with a new NMI handler category.

Cc: "H. Peter Anvin"
Cc: Ashok Raj
Cc: Andi Kleen
Cc: Tony Luck
Cc: Peter Zijlstra
Cc: Andrew Morton
Cc: "Ravi V. Shankar"
Cc: x86@kernel.org
Signed-off-by: Ricardo Neri
---
Changes since v4:
 * None

Changes since v3:
 * None

Changes since v2:
 * Introduced this patch.
Changes since v1:
 * N/A
---
 arch/x86/include/asm/nmi.h |  1 +
 arch/x86/kernel/nmi.c      | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/arch/x86/include/asm/nmi.h b/arch/x86/include/asm/nmi.h
index 1cb9c17a4cb4..4a0d5b562c91 100644
--- a/arch/x86/include/asm/nmi.h
+++ b/arch/x86/include/asm/nmi.h
@@ -28,6 +28,7 @@ enum {
 	NMI_UNKNOWN,
 	NMI_SERR,
 	NMI_IO_CHECK,
+	NMI_WATCHDOG,
 	NMI_MAX
 };

diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index bf250a339655..5016bc45e16c 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -61,6 +61,10 @@ static struct nmi_desc nmi_desc[NMI_MAX] =
 		.lock = __RAW_SPIN_LOCK_UNLOCKED(&nmi_desc[3].lock),
 		.head = LIST_HEAD_INIT(nmi_desc[3].head),
 	},
+	{
+		.lock = __RAW_SPIN_LOCK_UNLOCKED(&nmi_desc[4].lock),
+		.head = LIST_HEAD_INIT(nmi_desc[4].head),
+	},
 };

@@ -168,6 +172,8 @@ int __register_nmi_handler(unsigned int type, struct nmiaction *action)
 	 */
 	WARN_ON_ONCE(type == NMI_SERR && !list_empty(&desc->head));
 	WARN_ON_ONCE(type == NMI_IO_CHECK && !list_empty(&desc->head));
+	WARN_ON_ONCE(type == NMI_WATCHDOG && !list_empty(&desc->head));
+
 	/*
 	 * some handlers need to be executed first otherwise a fake
@@ -380,6 +386,10 @@ static noinstr void default_do_nmi(struct pt_regs *regs)
 	}
 	raw_spin_unlock(&nmi_reason_lock);

+	handled = nmi_handle(NMI_WATCHDOG, regs);
+	if (handled == NMI_HANDLED)
+		return;
+
 	/*
 	 * Only one NMI can be latched at a time. To handle
 	 * this we may process multiple nmi handlers at once to
-- 
2.17.1
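
For reference, below is a minimal sketch of how a detector could plug
into the new category; it is not part of this patch. The handler
signature, register_nmi_handler(), NMI_DONE and NMI_HANDLED are the
existing x86 NMI API; is_hpet_hardlockup_nmi() and
inspect_for_hardlockup() are assumed, illustrative helper names.

#include <linux/init.h>
#include <asm/nmi.h>

/*
 * Sketch only: register a handler in the new NMI_WATCHDOG category.
 * is_hpet_hardlockup_nmi() and inspect_for_hardlockup() are assumed
 * helpers, not real kernel symbols.
 */
static int hld_nmi_handler(unsigned int type, struct pt_regs *regs)
{
	/*
	 * The HPET cannot be queried as the NMI source; a TSC-based
	 * estimate may blame an unrelated NMI. Running in the
	 * NMI_WATCHDOG category means every NMI_LOCAL handler has
	 * already had a chance to claim the NMI.
	 */
	if (!is_hpet_hardlockup_nmi())	/* assumed TSC-window check */
		return NMI_DONE;	/* not ours, fall through */

	inspect_for_hardlockup(regs);	/* assumed detector core */
	return NMI_HANDLED;
}

static int __init hld_register(void)
{
	/*
	 * A second registration in this category would trip the
	 * WARN_ON_ONCE() added to __register_nmi_handler() above.
	 */
	return register_nmi_handler(NMI_WATCHDOG, hld_nmi_handler, 0,
				    "hpet_hld");
}

Teardown would go through unregister_nmi_handler(NMI_WATCHDOG,
"hpet_hld"). Because default_do_nmi() consults NMI_WATCHDOG only after
NMI_LOCAL and the reason-port handlers, an NMI claimed by any other
handler never reaches this one, which narrows the false-positive
window the commit message describes.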