From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Rafael J. Wysocki"
Date: Tue, 21 Mar 2023 14:47:46 +0100
Subject: Re: [PATCH v4] acpi/processor: fix evaluating _PDC method when running as Xen dom0
To: Roger Pau Monne
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, josef@oderland.se, Juergen Gross, Boris Ostrovsky, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", "Rafael J. Wysocki", Len Brown, Stefano Stabellini, Oleksandr Tyshchenko, Venkatesh Pallipadi, Alex Chiang, linux-acpi@vger.kernel.org
In-Reply-To: <20230316164257.42590-1-roger.pau@citrix.com>
References: <20230316164257.42590-1-roger.pau@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Mar 16, 2023 at 5:43 PM Roger Pau Monne wrote:
>
> In ACPI systems, the OS can direct power management, as opposed to the
> firmware. This OS-directed Power Management is called OSPM. Part of
> telling the firmware that the OS is going to direct power management is
> making ACPI "_PDC" (Processor Driver Capabilities) calls. These _PDC
> methods must be evaluated for every processor object. If these _PDC
> calls are not completed for every processor, it can lead to
> inconsistency and later failures in things like the CPU frequency
> driver.
>
> In a Xen system, the dom0 kernel is responsible for system-wide power
> management. The dom0 kernel is in charge of OSPM. However, the
> number of CPUs available to dom0 can be different from the number of
> CPUs physically present on the system.
>
> This leads to a problem: the dom0 kernel needs to evaluate _PDC for
> all the processors, but it can't always see them.
>
> In dom0 kernels, ignore the existing ACPI method for determining if a
> processor is physically present, because it might not be accurate.
> Instead, ask the hypervisor for this information.
>
> Fix this by introducing a custom function to use when running as Xen
> dom0 in order to check whether a processor object matches a CPU that's
> online. Such checking is done using the existing information fetched
> by the Xen pCPU subsystem, extending it to also store the ACPI ID.
>
> This ensures that the _PDC method gets evaluated for all physically
> online CPUs, regardless of the number of CPUs made available to dom0.
>
> Fixes: 5d554a7bb064 ('ACPI: processor: add internal processor_physically_present()')
> Signed-off-by: Roger Pau Monné
> ---
> Changes since v3:
>  - Protect xen_processor_present() definition with CONFIG_ACPI.
>
> Changes since v2:
>  - Extend and use the existing pcpu functionality.
>
> Changes since v1:
>  - Reword commit message.
> ---
>  arch/x86/include/asm/xen/hypervisor.h | 10 ++++++++++
>  drivers/acpi/processor_pdc.c          | 11 +++++++++++
>  drivers/xen/pcpu.c                    | 21 +++++++++++++++++++++
>  3 files changed, 42 insertions(+)
>
> diff --git a/arch/x86/include/asm/xen/hypervisor.h b/arch/x86/include/asm/xen/hypervisor.h
> index 5fc35f889cd1..990a1609677e 100644
> --- a/arch/x86/include/asm/xen/hypervisor.h
> +++ b/arch/x86/include/asm/xen/hypervisor.h
> @@ -63,4 +63,14 @@ void __init xen_pvh_init(struct boot_params *boot_params);
>  void __init mem_map_via_hcall(struct boot_params *boot_params_p);
>  #endif
>
> +#if defined(CONFIG_XEN_DOM0) && defined(CONFIG_ACPI)
> +bool __init xen_processor_present(uint32_t acpi_id);
> +#else
> +static inline bool xen_processor_present(uint32_t acpi_id)
> +{
> +	BUG();
> +	return false;
> +}
> +#endif
> +
>  #endif /* _ASM_X86_XEN_HYPERVISOR_H */
> diff --git a/drivers/acpi/processor_pdc.c b/drivers/acpi/processor_pdc.c
> index 8c3f82c9fff3..18fb04523f93 100644
> --- a/drivers/acpi/processor_pdc.c
> +++ b/drivers/acpi/processor_pdc.c
> @@ -14,6 +14,8 @@
>  #include
>  #include
>
> +#include

This along with the definition above is evidently insufficient for
xen_processor_present() to always be defined. See

https://lore.kernel.org/linux-acpi/64198b60.bO+m9o5w+Hd8hcF3%25lkp@intel.com/T/#u

for example.

I'm dropping the patch now, please fix and resend.

> +
>  #include "internal.h"
>
>  static bool __init processor_physically_present(acpi_handle handle)
> @@ -47,6 +49,15 @@ static bool __init processor_physically_present(acpi_handle handle)
>  		return false;
>  	}
>
> +	if (xen_initial_domain())
> +		/*
> +		 * When running as a Xen dom0 the number of processors Linux
> +		 * sees can be different from the real number of processors on
> +		 * the system, and we still need to execute _PDC for all of
> +		 * them.
> +		 */
> +		return xen_processor_present(acpi_id);
> +
>  	type = (acpi_type == ACPI_TYPE_DEVICE) ? 1 : 0;
>  	cpuid = acpi_get_cpuid(handle, type, acpi_id);
>
> diff --git a/drivers/xen/pcpu.c b/drivers/xen/pcpu.c
> index fd3a644b0855..034d05e56507 100644
> --- a/drivers/xen/pcpu.c
> +++ b/drivers/xen/pcpu.c
> @@ -58,6 +58,7 @@ struct pcpu {
>  	struct list_head list;
>  	struct device dev;
>  	uint32_t cpu_id;
> +	uint32_t acpi_id;
>  	uint32_t flags;
>  };
>
> @@ -249,6 +250,7 @@ static struct pcpu *create_and_register_pcpu(struct xenpf_pcpuinfo *info)
>
>  	INIT_LIST_HEAD(&pcpu->list);
>  	pcpu->cpu_id = info->xen_cpuid;
> +	pcpu->acpi_id = info->acpi_id;
>  	pcpu->flags = info->flags;
>
>  	/* Need hold on xen_pcpu_lock before pcpu list manipulations */
> @@ -381,3 +383,22 @@ static int __init xen_pcpu_init(void)
>  	return ret;
>  }
>  arch_initcall(xen_pcpu_init);
> +
> +#ifdef CONFIG_ACPI
> +bool __init xen_processor_present(uint32_t acpi_id)
> +{
> +	struct pcpu *pcpu;
> +	bool online = false;
> +
> +	mutex_lock(&xen_pcpu_lock);
> +	list_for_each_entry(pcpu, &xen_pcpus, list)
> +		if (pcpu->acpi_id == acpi_id) {
> +			online = pcpu->flags & XEN_PCPU_FLAGS_ONLINE;
> +			break;
> +		}
> +
> +	mutex_unlock(&xen_pcpu_lock);
> +
> +	return online;
> +}
> +#endif
> --
> 2.39.0
>