* [XEN PATCH v1 0/7] x86: make Intel/AMD vPMU & MCE support configurable
@ 2024-04-23 8:45 Sergiy Kibrik
2024-04-23 8:48 ` [XEN PATCH v1 1/7] x86/vpmu: separate amd/intel vPMU code Sergiy Kibrik
` (6 more replies)
0 siblings, 7 replies; 26+ messages in thread
From: Sergiy Kibrik @ 2024-04-23 8:45 UTC (permalink / raw)
To: xen-devel
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
Here's an attempt to further separate support for Intel & AMD CPUs in the Xen
build. The code to drive both platforms used to be built unconditionally, until
the recent introduction of the CONFIG_{AMD,INTEL} Kconfig options.
This series extends coverage of these options to the vpmu and mcheck subsystems,
i.e. Intel or AMD vpmu/mcheck support is no longer built if support for that
CPU vendor has been explicitly disabled.
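For context, the vendor toggles this series builds upon are roughly of this shape (paraphrased from Xen's x86 Kconfig; the exact prompts and dependencies may differ):

```
config AMD
	bool "Support AMD CPUs" if EXPERT
	default y

config INTEL
	bool "Support Intel CPUs" if EXPERT
	default y
```

With one of these set to n, obj-$(CONFIG_AMD)/obj-$(CONFIG_INTEL) Makefile rules drop the corresponding objects, and IS_ENABLED() guards let the compiler eliminate the remaining references as dead code.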
Sergiy Kibrik (7):
x86/vpmu: separate amd/intel vPMU code
x86/intel: guard vmce_has_lmce() with INTEL option
x86/MCE: guard access to Intel/AMD-specific MCA MSRs
x86/MCE: guard lmce_support/cmci_support
x86/MCE: guard {intel/amd}_mcheck_init() calls
x86/MCE: guard call to Intel-specific intel_get_extended_msrs()
x86/MCE: optional build of AMD/Intel MCE code
xen/arch/x86/cpu/Makefile | 4 +++-
xen/arch/x86/cpu/mcheck/Makefile | 6 ++----
xen/arch/x86/cpu/mcheck/mce.c | 13 ++++++++-----
xen/arch/x86/cpu/mcheck/non-fatal.c | 6 ++++++
xen/arch/x86/cpu/mcheck/vmce.c | 18 +++++++++++-------
xen/arch/x86/cpu/mcheck/vmce.h | 1 +
xen/arch/x86/include/asm/vpmu.h | 19 +++++++++++++++++++
xen/arch/x86/msr.c | 2 +-
8 files changed, 51 insertions(+), 18 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 26+ messages in thread
* [XEN PATCH v1 1/7] x86/vpmu: separate amd/intel vPMU code
2024-04-23 8:45 [XEN PATCH v1 0/7] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
@ 2024-04-23 8:48 ` Sergiy Kibrik
2024-04-26 23:00 ` Stefano Stabellini
2024-04-29 15:28 ` Jan Beulich
2024-04-23 8:50 ` [XEN PATCH v1 2/7] x86/intel: guard vmce_has_lmce() with INTEL option Sergiy Kibrik
` (5 subsequent siblings)
6 siblings, 2 replies; 26+ messages in thread
From: Sergiy Kibrik @ 2024-04-23 8:48 UTC (permalink / raw)
To: xen-devel
Cc: Sergiy Kibrik, Roger Pau Monné,
Jan Beulich, Andrew Cooper, Stefano Stabellini
Build the AMD vPMU when CONFIG_AMD is on, and the Intel vPMU when CONFIG_INTEL
is on, allowing for a platform-specific build. Also separate the
arch_vpmu_ops initializers using these options and static inline stubs.
No functional change intended.
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
---
changes in v1:
- switch to CONFIG_{AMD,INTEL} instead of CONFIG_{SVM,VMX}
xen/arch/x86/cpu/Makefile | 4 +++-
xen/arch/x86/include/asm/vpmu.h | 19 +++++++++++++++++++
2 files changed, 22 insertions(+), 1 deletion(-)
diff --git a/xen/arch/x86/cpu/Makefile b/xen/arch/x86/cpu/Makefile
index 35561fe51d..eafce5f204 100644
--- a/xen/arch/x86/cpu/Makefile
+++ b/xen/arch/x86/cpu/Makefile
@@ -10,4 +10,6 @@ obj-y += intel.o
obj-y += intel_cacheinfo.o
obj-y += mwait-idle.o
obj-y += shanghai.o
-obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
+obj-y += vpmu.o
+obj-$(CONFIG_AMD) += vpmu_amd.o
+obj-$(CONFIG_INTEL) += vpmu_intel.o
diff --git a/xen/arch/x86/include/asm/vpmu.h b/xen/arch/x86/include/asm/vpmu.h
index dae9b43dac..e7a8f211f8 100644
--- a/xen/arch/x86/include/asm/vpmu.h
+++ b/xen/arch/x86/include/asm/vpmu.h
@@ -11,6 +11,7 @@
#define __ASM_X86_HVM_VPMU_H_
#include <public/pmu.h>
+#include <xen/err.h>
#define vcpu_vpmu(vcpu) (&(vcpu)->arch.vpmu)
#define vpmu_vcpu(vpmu) container_of((vpmu), struct vcpu, arch.vpmu)
@@ -42,9 +43,27 @@ struct arch_vpmu_ops {
#endif
};
+#ifdef CONFIG_INTEL
const struct arch_vpmu_ops *core2_vpmu_init(void);
+#else
+static inline const struct arch_vpmu_ops *core2_vpmu_init(void)
+{
+ return ERR_PTR(-ENODEV);
+}
+#endif
+#ifdef CONFIG_AMD
const struct arch_vpmu_ops *amd_vpmu_init(void);
const struct arch_vpmu_ops *hygon_vpmu_init(void);
+#else
+static inline const struct arch_vpmu_ops *amd_vpmu_init(void)
+{
+ return ERR_PTR(-ENODEV);
+}
+static inline const struct arch_vpmu_ops *hygon_vpmu_init(void)
+{
+ return ERR_PTR(-ENODEV);
+}
+#endif
struct vpmu_struct {
u32 flags;
--
2.25.1
* [XEN PATCH v1 2/7] x86/intel: guard vmce_has_lmce() with INTEL option
2024-04-23 8:45 [XEN PATCH v1 0/7] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
2024-04-23 8:48 ` [XEN PATCH v1 1/7] x86/vpmu: separate amd/intel vPMU code Sergiy Kibrik
@ 2024-04-23 8:50 ` Sergiy Kibrik
2024-04-26 23:04 ` Stefano Stabellini
2024-04-29 15:34 ` Jan Beulich
2024-04-23 8:52 ` [XEN PATCH v1 3/7] x86/MCE: guard access to Intel/AMD-specific MCA MSRs Sergiy Kibrik
` (4 subsequent siblings)
6 siblings, 2 replies; 26+ messages in thread
From: Sergiy Kibrik @ 2024-04-23 8:50 UTC (permalink / raw)
To: xen-devel
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
Since the MCG_LMCE_P bit is specific to Intel CPUs, the code that checks it
can be excluded from the build if !CONFIG_INTEL. With these guards in place,
calls to vmce_has_lmce() are eliminated and mce_intel.c can end up
not being built.
Also replace the boilerplate code that checks the MCG_LMCE_P flag with
vmce_has_lmce(), which should improve readability a bit.
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
xen/arch/x86/cpu/mcheck/vmce.c | 4 ++--
xen/arch/x86/msr.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 353d4f19b2..c437f62c0a 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -199,7 +199,7 @@ int vmce_rdmsr(uint32_t msr, uint64_t *val)
* bits are always set in guest MSR_IA32_FEATURE_CONTROL by Xen, so it
* does not need to check them here.
*/
- if ( cur->arch.vmce.mcg_cap & MCG_LMCE_P )
+ if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) )
{
*val = cur->arch.vmce.mcg_ext_ctl;
mce_printk(MCE_VERBOSE, "MCE: %pv: rd MCG_EXT_CTL %#"PRIx64"\n",
@@ -324,7 +324,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
break;
case MSR_IA32_MCG_EXT_CTL:
- if ( (cur->arch.vmce.mcg_cap & MCG_LMCE_P) &&
+ if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) &&
!(val & ~MCG_EXT_CTL_LMCE_EN) )
cur->arch.vmce.mcg_ext_ctl = val;
else
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 9babd441f9..4010c87d93 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -86,7 +86,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
goto gp_fault;
*val = IA32_FEATURE_CONTROL_LOCK;
- if ( vmce_has_lmce(v) )
+ if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(v) )
*val |= IA32_FEATURE_CONTROL_LMCE_ON;
if ( cp->basic.vmx )
*val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
--
2.25.1
* [XEN PATCH v1 3/7] x86/MCE: guard access to Intel/AMD-specific MCA MSRs
2024-04-23 8:45 [XEN PATCH v1 0/7] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
2024-04-23 8:48 ` [XEN PATCH v1 1/7] x86/vpmu: separate amd/intel vPMU code Sergiy Kibrik
2024-04-23 8:50 ` [XEN PATCH v1 2/7] x86/intel: guard vmce_has_lmce() with INTEL option Sergiy Kibrik
@ 2024-04-23 8:52 ` Sergiy Kibrik
2024-04-26 23:08 ` Stefano Stabellini
2024-04-29 15:41 ` Jan Beulich
2024-04-23 8:54 ` [XEN PATCH v1 4/7] x86/MCE: guard lmce_support/cmci_support Sergiy Kibrik
` (3 subsequent siblings)
6 siblings, 2 replies; 26+ messages in thread
From: Sergiy Kibrik @ 2024-04-23 8:52 UTC (permalink / raw)
To: xen-devel
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
Add build-time checks for the newly introduced INTEL/AMD config options when
calling the vmce_{intel,amd}_{rdmsr,wrmsr}() routines.
This way platform-specific code can be omitted from the vmce code when that
platform is disabled in the config.
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
xen/arch/x86/cpu/mcheck/vmce.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index c437f62c0a..be229684a4 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -141,12 +141,14 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
case X86_VENDOR_CENTAUR:
case X86_VENDOR_SHANGHAI:
case X86_VENDOR_INTEL:
- ret = vmce_intel_rdmsr(v, msr, val);
+ ret = IS_ENABLED(CONFIG_INTEL) ?
+ vmce_intel_rdmsr(v, msr, val) : -ENODEV;
break;
case X86_VENDOR_AMD:
case X86_VENDOR_HYGON:
- ret = vmce_amd_rdmsr(v, msr, val);
+ ret = IS_ENABLED(CONFIG_AMD) ?
+ vmce_amd_rdmsr(v, msr, val) : -ENODEV;
break;
default:
@@ -272,12 +274,14 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
switch ( boot_cpu_data.x86_vendor )
{
case X86_VENDOR_INTEL:
- ret = vmce_intel_wrmsr(v, msr, val);
+ ret = IS_ENABLED(CONFIG_INTEL) ?
+ vmce_intel_wrmsr(v, msr, val) : -ENODEV;
break;
case X86_VENDOR_AMD:
case X86_VENDOR_HYGON:
- ret = vmce_amd_wrmsr(v, msr, val);
+ ret = IS_ENABLED(CONFIG_AMD) ?
+ vmce_amd_wrmsr(v, msr, val) : -ENODEV;
break;
default:
--
2.25.1
* [XEN PATCH v1 4/7] x86/MCE: guard lmce_support/cmci_support
2024-04-23 8:45 [XEN PATCH v1 0/7] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
` (2 preceding siblings ...)
2024-04-23 8:52 ` [XEN PATCH v1 3/7] x86/MCE: guard access to Intel/AMD-specific MCA MSRs Sergiy Kibrik
@ 2024-04-23 8:54 ` Sergiy Kibrik
2024-04-26 23:10 ` Stefano Stabellini
2024-04-29 15:42 ` Jan Beulich
2024-04-23 8:56 ` [XEN PATCH v1 5/7] x86/MCE: guard {intel/amd}_mcheck_init() calls Sergiy Kibrik
` (2 subsequent siblings)
6 siblings, 2 replies; 26+ messages in thread
From: Sergiy Kibrik @ 2024-04-23 8:54 UTC (permalink / raw)
To: xen-devel
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
Guard access to the Intel-specific lmce_support & cmci_support variables in
common MCE/VMCE code. These are set in Intel-specific parts of the mcheck
code, which can be omitted from the build for non-Intel platforms by
disabling the CONFIG_INTEL option.
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
xen/arch/x86/cpu/mcheck/mce.c | 4 ++--
xen/arch/x86/cpu/mcheck/vmce.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 2844685983..72dfaf28cb 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -611,7 +611,7 @@ static void set_poll_bankmask(struct cpuinfo_x86 *c)
mb = per_cpu(poll_bankmask, cpu);
BUG_ON(!mb);
- if ( cmci_support && opt_mce )
+ if ( IS_ENABLED(CONFIG_INTEL) && cmci_support && opt_mce )
{
const struct mca_banks *cmci = per_cpu(no_cmci_banks, cpu);
@@ -1607,7 +1607,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
break;
case XEN_MC_INJECT_TYPE_LMCE:
- if ( !lmce_support )
+ if ( IS_ENABLED(CONFIG_INTEL) && !lmce_support )
{
ret = x86_mcerr("No LMCE support", -EINVAL);
break;
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index be229684a4..6051ab2b2e 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -546,7 +546,7 @@ int vmce_enable_mca_cap(struct domain *d, uint64_t cap)
if ( cap & XEN_HVM_MCA_CAP_LMCE )
{
- if ( !lmce_support )
+ if ( IS_ENABLED(CONFIG_INTEL) && !lmce_support )
return -EINVAL;
for_each_vcpu(d, v)
v->arch.vmce.mcg_cap |= MCG_LMCE_P;
--
2.25.1
* [XEN PATCH v1 5/7] x86/MCE: guard {intel/amd}_mcheck_init() calls
2024-04-23 8:45 [XEN PATCH v1 0/7] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
` (3 preceding siblings ...)
2024-04-23 8:54 ` [XEN PATCH v1 4/7] x86/MCE: guard lmce_support/cmci_support Sergiy Kibrik
@ 2024-04-23 8:56 ` Sergiy Kibrik
2024-04-26 23:12 ` Stefano Stabellini
2024-04-29 15:45 ` Jan Beulich
2024-04-23 8:58 ` [XEN PATCH v1 6/7] x86/MCE: guard call to Intel-specific intel_get_extended_msrs() Sergiy Kibrik
2024-04-23 9:00 ` [XEN PATCH v1 7/7] x86/MCE: optional build of AMD/Intel MCE code Sergiy Kibrik
6 siblings, 2 replies; 26+ messages in thread
From: Sergiy Kibrik @ 2024-04-23 8:56 UTC (permalink / raw)
To: xen-devel
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
Guard calls to the CPU-specific mcheck init routines in common MCE code
using the new INTEL/AMD config options.
The purpose is to avoid building platform-specific mcheck code, and the
calls into it, when that platform is disabled in the config.
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
xen/arch/x86/cpu/mcheck/mce.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 72dfaf28cb..42e84e76b7 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -761,7 +761,8 @@ void mcheck_init(struct cpuinfo_x86 *c, bool bsp)
{
case X86_VENDOR_AMD:
case X86_VENDOR_HYGON:
- inited = amd_mcheck_init(c, bsp);
+ inited = IS_ENABLED(CONFIG_AMD) ? amd_mcheck_init(c, bsp) :
+ mcheck_unset;
break;
case X86_VENDOR_INTEL:
@@ -769,7 +770,8 @@ void mcheck_init(struct cpuinfo_x86 *c, bool bsp)
{
case 6:
case 15:
- inited = intel_mcheck_init(c, bsp);
+ inited = IS_ENABLED(CONFIG_INTEL) ? intel_mcheck_init(c, bsp) :
+ mcheck_unset;
break;
}
break;
--
2.25.1
* [XEN PATCH v1 6/7] x86/MCE: guard call to Intel-specific intel_get_extended_msrs()
2024-04-23 8:45 [XEN PATCH v1 0/7] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
` (4 preceding siblings ...)
2024-04-23 8:56 ` [XEN PATCH v1 5/7] x86/MCE: guard {intel/amd}_mcheck_init() calls Sergiy Kibrik
@ 2024-04-23 8:58 ` Sergiy Kibrik
2024-04-26 23:14 ` Stefano Stabellini
2024-04-23 9:00 ` [XEN PATCH v1 7/7] x86/MCE: optional build of AMD/Intel MCE code Sergiy Kibrik
6 siblings, 1 reply; 26+ messages in thread
From: Sergiy Kibrik @ 2024-04-23 8:58 UTC (permalink / raw)
To: xen-devel
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
Make the call to this routine conditional on the CONFIG_INTEL build option,
so that the call is eliminated when Intel support is disabled.
No functional change intended.
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
xen/arch/x86/cpu/mcheck/mce.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 42e84e76b7..de2e16a6e2 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -328,7 +328,8 @@ mcheck_mca_logout(enum mca_source who, struct mca_banks *bankmask,
ASSERT(mig);
mca_init_global(mc_flags, mig);
/* A hook here to get global extended msrs */
- if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+ if ( IS_ENABLED(CONFIG_INTEL) &&
+ boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
intel_get_extended_msrs(mig, mci);
}
}
--
2.25.1
* [XEN PATCH v1 7/7] x86/MCE: optional build of AMD/Intel MCE code
2024-04-23 8:45 [XEN PATCH v1 0/7] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
` (5 preceding siblings ...)
2024-04-23 8:58 ` [XEN PATCH v1 6/7] x86/MCE: guard call to Intel-specific intel_get_extended_msrs() Sergiy Kibrik
@ 2024-04-23 9:00 ` Sergiy Kibrik
2024-04-26 23:16 ` Stefano Stabellini
6 siblings, 1 reply; 26+ messages in thread
From: Sergiy Kibrik @ 2024-04-23 9:00 UTC (permalink / raw)
To: xen-devel
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
Separate the Intel/AMD-specific MCE code using the CONFIG_{INTEL,AMD} config options.
This way the build of vendor-specific mcheck code can be avoided when support
for that platform is intentionally disabled in the configuration.
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
xen/arch/x86/cpu/mcheck/Makefile | 6 ++----
xen/arch/x86/cpu/mcheck/non-fatal.c | 6 ++++++
xen/arch/x86/cpu/mcheck/vmce.h | 1 +
3 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/Makefile b/xen/arch/x86/cpu/mcheck/Makefile
index f927f10b4d..5b3f6d875c 100644
--- a/xen/arch/x86/cpu/mcheck/Makefile
+++ b/xen/arch/x86/cpu/mcheck/Makefile
@@ -1,12 +1,10 @@
-obj-y += amd_nonfatal.o
-obj-y += mce_amd.o
obj-y += mcaction.o
obj-y += barrier.o
-obj-y += intel-nonfatal.o
obj-y += mctelem.o
obj-y += mce.o
obj-y += mce-apei.o
-obj-y += mce_intel.o
+obj-$(CONFIG_AMD) += mce_amd.o amd_nonfatal.o
+obj-$(CONFIG_INTEL) += mce_intel.o intel-nonfatal.o
obj-y += non-fatal.o
obj-y += util.o
obj-y += vmce.o
diff --git a/xen/arch/x86/cpu/mcheck/non-fatal.c b/xen/arch/x86/cpu/mcheck/non-fatal.c
index 33cacd15c2..2d91a3b1e0 100644
--- a/xen/arch/x86/cpu/mcheck/non-fatal.c
+++ b/xen/arch/x86/cpu/mcheck/non-fatal.c
@@ -24,14 +24,20 @@ static int __init cf_check init_nonfatal_mce_checker(void)
* Check for non-fatal errors every MCE_RATE s
*/
switch (c->x86_vendor) {
+#ifdef CONFIG_AMD
case X86_VENDOR_AMD:
case X86_VENDOR_HYGON:
/* Assume we are on K8 or newer AMD or Hygon CPU here */
amd_nonfatal_mcheck_init(c);
break;
+#endif
+#ifdef CONFIG_INTEL
case X86_VENDOR_INTEL:
intel_nonfatal_mcheck_init(c);
break;
+#endif
+ default:
+ return -ENODEV;
}
printk(KERN_INFO "mcheck_poll: Machine check polling timer started.\n");
return 0;
--
2.25.1
* Re: [XEN PATCH v1 1/7] x86/vpmu: separate amd/intel vPMU code
2024-04-23 8:48 ` [XEN PATCH v1 1/7] x86/vpmu: separate amd/intel vPMU code Sergiy Kibrik
@ 2024-04-26 23:00 ` Stefano Stabellini
2024-04-29 15:28 ` Jan Beulich
1 sibling, 0 replies; 26+ messages in thread
From: Stefano Stabellini @ 2024-04-26 23:00 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: xen-devel, Roger Pau Monné,
Jan Beulich, Andrew Cooper, Stefano Stabellini
On Tue, 23 Apr 2024, Sergiy Kibrik wrote:
> Build the AMD vPMU when CONFIG_AMD is on, and the Intel vPMU when CONFIG_INTEL
> is on, allowing for a platform-specific build. Also separate the
> arch_vpmu_ops initializers using these options and static inline stubs.
>
> No functional change intended.
>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
>
> ---
> changes in v1:
> - switch to CONFIG_{AMD,INTEL} instead of CONFIG_{SVM,VMX}
>
>
> xen/arch/x86/cpu/Makefile | 4 +++-
> xen/arch/x86/include/asm/vpmu.h | 19 +++++++++++++++++++
> 2 files changed, 22 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/cpu/Makefile b/xen/arch/x86/cpu/Makefile
> index 35561fe51d..eafce5f204 100644
> --- a/xen/arch/x86/cpu/Makefile
> +++ b/xen/arch/x86/cpu/Makefile
> @@ -10,4 +10,6 @@ obj-y += intel.o
> obj-y += intel_cacheinfo.o
> obj-y += mwait-idle.o
> obj-y += shanghai.o
> -obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
> +obj-y += vpmu.o
> +obj-$(CONFIG_AMD) += vpmu_amd.o
> +obj-$(CONFIG_INTEL) += vpmu_intel.o
> diff --git a/xen/arch/x86/include/asm/vpmu.h b/xen/arch/x86/include/asm/vpmu.h
> index dae9b43dac..e7a8f211f8 100644
> --- a/xen/arch/x86/include/asm/vpmu.h
> +++ b/xen/arch/x86/include/asm/vpmu.h
> @@ -11,6 +11,7 @@
> #define __ASM_X86_HVM_VPMU_H_
>
> #include <public/pmu.h>
> +#include <xen/err.h>
>
> #define vcpu_vpmu(vcpu) (&(vcpu)->arch.vpmu)
> #define vpmu_vcpu(vpmu) container_of((vpmu), struct vcpu, arch.vpmu)
> @@ -42,9 +43,27 @@ struct arch_vpmu_ops {
> #endif
> };
>
> +#ifdef CONFIG_INTEL
> const struct arch_vpmu_ops *core2_vpmu_init(void);
> +#else
> +static inline const struct arch_vpmu_ops *core2_vpmu_init(void)
> +{
> + return ERR_PTR(-ENODEV);
> +}
> +#endif
> +#ifdef CONFIG_AMD
> const struct arch_vpmu_ops *amd_vpmu_init(void);
> const struct arch_vpmu_ops *hygon_vpmu_init(void);
> +#else
> +static inline const struct arch_vpmu_ops *amd_vpmu_init(void)
> +{
> + return ERR_PTR(-ENODEV);
> +}
> +static inline const struct arch_vpmu_ops *hygon_vpmu_init(void)
> +{
> + return ERR_PTR(-ENODEV);
> +}
> +#endif
At some point we'll need to discuss how to separate out Hygon as well.
But for now:
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
* Re: [XEN PATCH v1 2/7] x86/intel: guard vmce_has_lmce() with INTEL option
2024-04-23 8:50 ` [XEN PATCH v1 2/7] x86/intel: guard vmce_has_lmce() with INTEL option Sergiy Kibrik
@ 2024-04-26 23:04 ` Stefano Stabellini
2024-04-29 15:34 ` Jan Beulich
1 sibling, 0 replies; 26+ messages in thread
From: Stefano Stabellini @ 2024-04-26 23:04 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: xen-devel, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
On Tue, 23 Apr 2024, Sergiy Kibrik wrote:
> Since the MCG_LMCE_P bit is specific to Intel CPUs, the code that checks it
> can be excluded from the build if !CONFIG_INTEL. With these guards in place,
> calls to vmce_has_lmce() are eliminated and mce_intel.c can end up
> not being built.
>
> Also replace the boilerplate code that checks the MCG_LMCE_P flag with
> vmce_has_lmce(), which should improve readability a bit.
>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
* Re: [XEN PATCH v1 3/7] x86/MCE: guard access to Intel/AMD-specific MCA MSRs
2024-04-23 8:52 ` [XEN PATCH v1 3/7] x86/MCE: guard access to Intel/AMD-specific MCA MSRs Sergiy Kibrik
@ 2024-04-26 23:08 ` Stefano Stabellini
2024-04-29 15:41 ` Jan Beulich
1 sibling, 0 replies; 26+ messages in thread
From: Stefano Stabellini @ 2024-04-26 23:08 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: xen-devel, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
On Tue, 23 Apr 2024, Sergiy Kibrik wrote:
> Add build-time checks for the newly introduced INTEL/AMD config options when
> calling the vmce_{intel,amd}_{rdmsr,wrmsr}() routines.
> This way platform-specific code can be omitted from the vmce code when that
> platform is disabled in the config.
>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
* Re: [XEN PATCH v1 4/7] x86/MCE: guard lmce_support/cmci_support
2024-04-23 8:54 ` [XEN PATCH v1 4/7] x86/MCE: guard lmce_support/cmci_support Sergiy Kibrik
@ 2024-04-26 23:10 ` Stefano Stabellini
2024-04-29 15:42 ` Jan Beulich
1 sibling, 0 replies; 26+ messages in thread
From: Stefano Stabellini @ 2024-04-26 23:10 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: xen-devel, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
On Tue, 23 Apr 2024, Sergiy Kibrik wrote:
> Guard access to the Intel-specific lmce_support & cmci_support variables in
> common MCE/VMCE code. These are set in Intel-specific parts of the mcheck
> code, which can be omitted from the build for non-Intel platforms by
> disabling the CONFIG_INTEL option.
>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
* Re: [XEN PATCH v1 5/7] x86/MCE: guard {intel/amd}_mcheck_init() calls
2024-04-23 8:56 ` [XEN PATCH v1 5/7] x86/MCE: guard {intel/amd}_mcheck_init() calls Sergiy Kibrik
@ 2024-04-26 23:12 ` Stefano Stabellini
2024-04-29 15:45 ` Jan Beulich
1 sibling, 0 replies; 26+ messages in thread
From: Stefano Stabellini @ 2024-04-26 23:12 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: xen-devel, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
On Tue, 23 Apr 2024, Sergiy Kibrik wrote:
> Guard calls to the CPU-specific mcheck init routines in common MCE code
> using the new INTEL/AMD config options.
>
> The purpose is to avoid building platform-specific mcheck code, and the
> calls into it, when that platform is disabled in the config.
>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
* Re: [XEN PATCH v1 6/7] x86/MCE: guard call to Intel-specific intel_get_extended_msrs()
2024-04-23 8:58 ` [XEN PATCH v1 6/7] x86/MCE: guard call to Intel-specific intel_get_extended_msrs() Sergiy Kibrik
@ 2024-04-26 23:14 ` Stefano Stabellini
2024-04-29 15:47 ` Jan Beulich
0 siblings, 1 reply; 26+ messages in thread
From: Stefano Stabellini @ 2024-04-26 23:14 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: xen-devel, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
On Tue, 23 Apr 2024, Sergiy Kibrik wrote:
> Make the call to this routine conditional on the CONFIG_INTEL build option,
> so that the call is eliminated when Intel support is disabled.
>
> No functional change intended.
>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
* Re: [XEN PATCH v1 7/7] x86/MCE: optional build of AMD/Intel MCE code
2024-04-23 9:00 ` [XEN PATCH v1 7/7] x86/MCE: optional build of AMD/Intel MCE code Sergiy Kibrik
@ 2024-04-26 23:16 ` Stefano Stabellini
2024-04-29 15:54 ` Jan Beulich
0 siblings, 1 reply; 26+ messages in thread
From: Stefano Stabellini @ 2024-04-26 23:16 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: xen-devel, Jan Beulich, Andrew Cooper, Roger Pau Monné,
Stefano Stabellini
On Tue, 23 Apr 2024, Sergiy Kibrik wrote:
> Separate the Intel/AMD-specific MCE code using the CONFIG_{INTEL,AMD} config options.
> This way the build of vendor-specific mcheck code can be avoided when support
> for that platform is intentionally disabled in the configuration.
>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> ---
> xen/arch/x86/cpu/mcheck/Makefile | 6 ++----
> xen/arch/x86/cpu/mcheck/non-fatal.c | 6 ++++++
> xen/arch/x86/cpu/mcheck/vmce.h | 1 +
> 3 files changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/x86/cpu/mcheck/Makefile b/xen/arch/x86/cpu/mcheck/Makefile
> index f927f10b4d..5b3f6d875c 100644
> --- a/xen/arch/x86/cpu/mcheck/Makefile
> +++ b/xen/arch/x86/cpu/mcheck/Makefile
> @@ -1,12 +1,10 @@
> -obj-y += amd_nonfatal.o
> -obj-y += mce_amd.o
> obj-y += mcaction.o
> obj-y += barrier.o
> -obj-y += intel-nonfatal.o
> obj-y += mctelem.o
> obj-y += mce.o
> obj-y += mce-apei.o
> -obj-y += mce_intel.o
> +obj-$(CONFIG_AMD) += mce_amd.o amd_nonfatal.o
> +obj-$(CONFIG_INTEL) += mce_intel.o intel-nonfatal.o
> obj-y += non-fatal.o
> obj-y += util.o
> obj-y += vmce.o
Awesome!
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> diff --git a/xen/arch/x86/cpu/mcheck/non-fatal.c b/xen/arch/x86/cpu/mcheck/non-fatal.c
> index 33cacd15c2..2d91a3b1e0 100644
> --- a/xen/arch/x86/cpu/mcheck/non-fatal.c
> +++ b/xen/arch/x86/cpu/mcheck/non-fatal.c
> @@ -24,14 +24,20 @@ static int __init cf_check init_nonfatal_mce_checker(void)
> * Check for non-fatal errors every MCE_RATE s
> */
> switch (c->x86_vendor) {
> +#ifdef CONFIG_AMD
> case X86_VENDOR_AMD:
> case X86_VENDOR_HYGON:
> /* Assume we are on K8 or newer AMD or Hygon CPU here */
> amd_nonfatal_mcheck_init(c);
> break;
> +#endif
> +#ifdef CONFIG_INTEL
> case X86_VENDOR_INTEL:
> intel_nonfatal_mcheck_init(c);
> break;
> +#endif
> + default:
> + return -ENODEV;
> }
> printk(KERN_INFO "mcheck_poll: Machine check polling timer started.\n");
> return 0;
For consistency in all other cases this patch series uses IS_ENABLED
checks. They could be used here as well.
* Re: [XEN PATCH v1 1/7] x86/vpmu: separate amd/intel vPMU code
2024-04-23 8:48 ` [XEN PATCH v1 1/7] x86/vpmu: separate amd/intel vPMU code Sergiy Kibrik
2024-04-26 23:00 ` Stefano Stabellini
@ 2024-04-29 15:28 ` Jan Beulich
2024-04-30 9:07 ` Sergiy Kibrik
1 sibling, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2024-04-29 15:28 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: Roger Pau Monné, Andrew Cooper, Stefano Stabellini, xen-devel
On 23.04.2024 10:48, Sergiy Kibrik wrote:
> --- a/xen/arch/x86/include/asm/vpmu.h
> +++ b/xen/arch/x86/include/asm/vpmu.h
> @@ -11,6 +11,7 @@
> #define __ASM_X86_HVM_VPMU_H_
>
> #include <public/pmu.h>
> +#include <xen/err.h>
>
> #define vcpu_vpmu(vcpu) (&(vcpu)->arch.vpmu)
> #define vpmu_vcpu(vpmu) container_of((vpmu), struct vcpu, arch.vpmu)
> @@ -42,9 +43,27 @@ struct arch_vpmu_ops {
> #endif
> };
>
> +#ifdef CONFIG_INTEL
> const struct arch_vpmu_ops *core2_vpmu_init(void);
> +#else
> +static inline const struct arch_vpmu_ops *core2_vpmu_init(void)
> +{
> + return ERR_PTR(-ENODEV);
> +}
> +#endif
> +#ifdef CONFIG_AMD
> const struct arch_vpmu_ops *amd_vpmu_init(void);
> const struct arch_vpmu_ops *hygon_vpmu_init(void);
> +#else
> +static inline const struct arch_vpmu_ops *amd_vpmu_init(void)
> +{
> + return ERR_PTR(-ENODEV);
> +}
> +static inline const struct arch_vpmu_ops *hygon_vpmu_init(void)
> +{
> + return ERR_PTR(-ENODEV);
> +}
> +#endif
Any reason you don't follow the approach used in patch 7, putting simple
#ifdef in the switch() in vpmu_init()? That would avoid the need for the
three almost identical stubs.
Jan
* Re: [XEN PATCH v1 2/7] x86/intel: guard vmce_has_lmce() with INTEL option
2024-04-23 8:50 ` [XEN PATCH v1 2/7] x86/intel: guard vmce_has_lmce() with INTEL option Sergiy Kibrik
2024-04-26 23:04 ` Stefano Stabellini
@ 2024-04-29 15:34 ` Jan Beulich
2024-04-30 9:42 ` Sergiy Kibrik
1 sibling, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2024-04-29 15:34 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini, xen-devel
On 23.04.2024 10:50, Sergiy Kibrik wrote:
> Since the MCG_LMCE_P bit is specific to Intel CPUs,
That's the case now. It could change going forward, and an underlying hypervisor
might also have (obscure?) reasons to surface it elsewhere.
> the code that checks it
> can be excluded from the build if !CONFIG_INTEL. With these guards in place,
> calls to vmce_has_lmce() are eliminated and mce_intel.c can end up
> not being built.
>
> Also replace the boilerplate code that checks the MCG_LMCE_P flag with
> vmce_has_lmce(), which should improve readability a bit.
Alternatively, have you considered making that function an inline one in a
suitable header? Besides addressing your build issue (I think), ...
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -199,7 +199,7 @@ int vmce_rdmsr(uint32_t msr, uint64_t *val)
> * bits are always set in guest MSR_IA32_FEATURE_CONTROL by Xen, so it
> * does not need to check them here.
> */
> - if ( cur->arch.vmce.mcg_cap & MCG_LMCE_P )
> + if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) )
... doing so would alternatively also permit integrating the IS_ENABLED()
into the function, rather than repeating the same ...
> @@ -324,7 +324,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
> break;
>
> case MSR_IA32_MCG_EXT_CTL:
> - if ( (cur->arch.vmce.mcg_cap & MCG_LMCE_P) &&
> + if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) &&
> !(val & ~MCG_EXT_CTL_LMCE_EN) )
> cur->arch.vmce.mcg_ext_ctl = val;
> else
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -86,7 +86,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
> goto gp_fault;
>
> *val = IA32_FEATURE_CONTROL_LOCK;
> - if ( vmce_has_lmce(v) )
> + if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(v) )
> *val |= IA32_FEATURE_CONTROL_LMCE_ON;
> if ( cp->basic.vmx )
> *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
... three times.
Jan
* Re: [XEN PATCH v1 3/7] x86/MCE: guard access to Intel/AMD-specific MCA MSRs
2024-04-23 8:52 ` [XEN PATCH v1 3/7] x86/MCE: guard access to Intel/AMD-specific MCA MSRs Sergiy Kibrik
2024-04-26 23:08 ` Stefano Stabellini
@ 2024-04-29 15:41 ` Jan Beulich
1 sibling, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2024-04-29 15:41 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini, xen-devel
On 23.04.2024 10:52, Sergiy Kibrik wrote:
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -141,12 +141,14 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
> case X86_VENDOR_CENTAUR:
> case X86_VENDOR_SHANGHAI:
> case X86_VENDOR_INTEL:
> - ret = vmce_intel_rdmsr(v, msr, val);
> + ret = IS_ENABLED(CONFIG_INTEL) ?
> + vmce_intel_rdmsr(v, msr, val) : -ENODEV;
> break;
>
> case X86_VENDOR_AMD:
> case X86_VENDOR_HYGON:
> - ret = vmce_amd_rdmsr(v, msr, val);
> + ret = IS_ENABLED(CONFIG_AMD) ?
> + vmce_amd_rdmsr(v, msr, val) : -ENODEV;
> break;
Why -ENODEV when ...
> default:
... below here 0 is put into "ret"? And why not have the default case take
care of unsupported/unrecognized vendors uniformly?
Jan
* Re: [XEN PATCH v1 4/7] x86/MCE: guard lmce_support/cmci_support
2024-04-23 8:54 ` [XEN PATCH v1 4/7] x86/MCE: guard lmce_support/cmci_support Sergiy Kibrik
2024-04-26 23:10 ` Stefano Stabellini
@ 2024-04-29 15:42 ` Jan Beulich
2024-05-02 8:56 ` Sergiy Kibrik
1 sibling, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2024-04-29 15:42 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini, xen-devel
On 23.04.2024 10:54, Sergiy Kibrik wrote:
> Guard access to Intel-specific lmce_support & cmci_support variables in
> common MCE/VMCE code. These are set in Intel-specific parts of mcheck code
> and can potentially be skipped if building for a non-Intel platform by
> disabling CONFIG_INTEL option.
>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
See comments given for patch 2.
Jan
* Re: [XEN PATCH v1 5/7] x86/MCE: guard {intel/amd}_mcheck_init() calls
2024-04-23 8:56 ` [XEN PATCH v1 5/7] x86/MCE: guard {intel/amd}_mcheck_init() calls Sergiy Kibrik
2024-04-26 23:12 ` Stefano Stabellini
@ 2024-04-29 15:45 ` Jan Beulich
1 sibling, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2024-04-29 15:45 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini, xen-devel
On 23.04.2024 10:56, Sergiy Kibrik wrote:
> --- a/xen/arch/x86/cpu/mcheck/mce.c
> +++ b/xen/arch/x86/cpu/mcheck/mce.c
> @@ -761,7 +761,8 @@ void mcheck_init(struct cpuinfo_x86 *c, bool bsp)
> {
> case X86_VENDOR_AMD:
> case X86_VENDOR_HYGON:
> - inited = amd_mcheck_init(c, bsp);
> + inited = IS_ENABLED(CONFIG_AMD) ? amd_mcheck_init(c, bsp) :
> + mcheck_unset;
> break;
>
> case X86_VENDOR_INTEL:
> @@ -769,7 +770,8 @@ void mcheck_init(struct cpuinfo_x86 *c, bool bsp)
> {
> case 6:
> case 15:
> - inited = intel_mcheck_init(c, bsp);
> + inited = IS_ENABLED(CONFIG_INTEL) ? intel_mcheck_init(c, bsp) :
> + mcheck_unset;
> break;
> }
> break;
Same question as on an earlier patch: Why set a value different from
what "default:" below here does (really: keeps)? And why not arrange to
have that "default:" take care of what's build-disabled?
Jan
* Re: [XEN PATCH v1 6/7] x86/MCE: guard call to Intel-specific intel_get_extended_msrs()
2024-04-26 23:14 ` Stefano Stabellini
@ 2024-04-29 15:47 ` Jan Beulich
0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2024-04-29 15:47 UTC (permalink / raw)
To: Sergiy Kibrik
Cc: xen-devel, Andrew Cooper, Roger Pau Monné, Stefano Stabellini
On 27.04.2024 01:14, Stefano Stabellini wrote:
> On Tue, 23 Apr 2024, Sergiy Kibrik wrote:
>> Add a check for the CONFIG_INTEL build option to the conditional call of
>> this routine, so that if Intel support is disabled the call is eliminated.
>>
>> No functional change intended.
>>
>> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
Aiui this doesn't depend on earlier patches and hence could go in right
away.
Jan
* Re: [XEN PATCH v1 7/7] x86/MCE: optional build of AMD/Intel MCE code
2024-04-26 23:16 ` Stefano Stabellini
@ 2024-04-29 15:54 ` Jan Beulich
2024-05-02 8:59 ` Sergiy Kibrik
0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2024-04-29 15:54 UTC (permalink / raw)
To: Sergiy Kibrik, Stefano Stabellini
Cc: xen-devel, Andrew Cooper, Roger Pau Monné
On 27.04.2024 01:16, Stefano Stabellini wrote:
> On Tue, 23 Apr 2024, Sergiy Kibrik wrote:
>> --- a/xen/arch/x86/cpu/mcheck/Makefile
>> +++ b/xen/arch/x86/cpu/mcheck/Makefile
>> @@ -1,12 +1,10 @@
>> -obj-y += amd_nonfatal.o
>> -obj-y += mce_amd.o
>> obj-y += mcaction.o
>> obj-y += barrier.o
>> -obj-y += intel-nonfatal.o
>> obj-y += mctelem.o
>> obj-y += mce.o
>> obj-y += mce-apei.o
>> -obj-y += mce_intel.o
>> +obj-$(CONFIG_AMD) += mce_amd.o amd_nonfatal.o
>> +obj-$(CONFIG_INTEL) += mce_intel.o intel-nonfatal.o
>> obj-y += non-fatal.o
>> obj-y += util.o
>> obj-y += vmce.o
>
> Awesome!
Almost. I'd appreciate if the ordering of files would be retained. It's
not quite alphabetic, but still. Moving mce_amd.o and mcaction.o to their
designated slots may or may not be done right here.
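[Editor's note: a guarded Makefile that keeps each file in its original slot might look like the following -- a sketch of one way to satisfy that request, not a proposed patch.]

```make
obj-$(CONFIG_AMD)   += amd_nonfatal.o
obj-$(CONFIG_AMD)   += mce_amd.o
obj-y               += mcaction.o
obj-y               += barrier.o
obj-$(CONFIG_INTEL) += intel-nonfatal.o
obj-y               += mctelem.o
obj-y               += mce.o
obj-y               += mce-apei.o
obj-$(CONFIG_INTEL) += mce_intel.o
obj-y               += non-fatal.o
obj-y               += util.o
obj-y               += vmce.o
```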
>> --- a/xen/arch/x86/cpu/mcheck/non-fatal.c
>> +++ b/xen/arch/x86/cpu/mcheck/non-fatal.c
>> @@ -24,14 +24,20 @@ static int __init cf_check init_nonfatal_mce_checker(void)
>> * Check for non-fatal errors every MCE_RATE s
>> */
>> switch (c->x86_vendor) {
>> +#ifdef CONFIG_AMD
>> case X86_VENDOR_AMD:
>> case X86_VENDOR_HYGON:
>> /* Assume we are on K8 or newer AMD or Hygon CPU here */
>> amd_nonfatal_mcheck_init(c);
>> break;
>> +#endif
>> +#ifdef CONFIG_INTEL
>> case X86_VENDOR_INTEL:
>> intel_nonfatal_mcheck_init(c);
>> break;
>> +#endif
>> + default:
>> + return -ENODEV;
This, while perhaps desirable, doesn't fit ...
>> }
>> printk(KERN_INFO "mcheck_poll: Machine check polling timer started.\n");
>> return 0;
... earlier behavior, and hence is somewhat unexpected in a change which, by
its description, looks like a "no functional change" one.
> For consistency in all other cases this patch series uses IS_ENABLED
> checks. They could be used here as well.
Hmm, I think for switch() statements like this (see also comments elsewhere
on this series) using #ifdef is overall better.
Jan
* Re: [XEN PATCH v1 1/7] x86/vpmu: separate amd/intel vPMU code
2024-04-29 15:28 ` Jan Beulich
@ 2024-04-30 9:07 ` Sergiy Kibrik
0 siblings, 0 replies; 26+ messages in thread
From: Sergiy Kibrik @ 2024-04-30 9:07 UTC (permalink / raw)
To: Jan Beulich
Cc: Roger Pau Monné, Andrew Cooper, Stefano Stabellini, xen-devel
29.04.24 18:28, Jan Beulich:
> Any reason you don't follow the approach used in patch 7, putting simple
> #ifdef in the switch() in vpmu_init()? That would avoid the need for the
> three almost identical stubs.
I didn't want to put that many preprocessor statements into this small
switch() block (4 #ifdef/#endif-s per 15 LOC), but if it's OK I'll do it.
-Sergiy
* Re: [XEN PATCH v1 2/7] x86/intel: guard vmce_has_lmce() with INTEL option
2024-04-29 15:34 ` Jan Beulich
@ 2024-04-30 9:42 ` Sergiy Kibrik
0 siblings, 0 replies; 26+ messages in thread
From: Sergiy Kibrik @ 2024-04-30 9:42 UTC (permalink / raw)
To: Jan Beulich
Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini, xen-devel
29.04.24 18:34, Jan Beulich:
> On 23.04.2024 10:50, Sergiy Kibrik wrote:
>> Since MCG_LMCE_P bit is specific to Intel CPUs
>
> That's the case now. It could change going forward, and an underlying hypervisor
> might also have (obscure?) reasons to surface it elsewhere.
>
>> the code to check it can
>> possibly be excluded from build if !CONFIG_INTEL. With these guards
>> calls to vmce_has_lmce() are eliminated and mce_intel.c can end up
>> not being built.
>>
>> Also replace boilerplate code that checks for MCG_LMCE_P flag with
>> vmce_has_lmce(), which might contribute to readability a bit.
>
> Alternatively, have you considered making that function an inline one in a
> suitable header? Besides addressing your build issue (I think), ...
>
>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>> @@ -199,7 +199,7 @@ int vmce_rdmsr(uint32_t msr, uint64_t *val)
>> * bits are always set in guest MSR_IA32_FEATURE_CONTROL by Xen, so it
>> * does not need to check them here.
>> */
>> - if ( cur->arch.vmce.mcg_cap & MCG_LMCE_P )
>> + if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) )
>
> ... doing so would alternatively also permit integrating the IS_ENABLED()
> into the function, rather than repeating the same ...
>
>> @@ -324,7 +324,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
>> break;
>>
>> case MSR_IA32_MCG_EXT_CTL:
>> - if ( (cur->arch.vmce.mcg_cap & MCG_LMCE_P) &&
>> + if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) &&
>> !(val & ~MCG_EXT_CTL_LMCE_EN) )
>> cur->arch.vmce.mcg_ext_ctl = val;
>> else
>> --- a/xen/arch/x86/msr.c
>> +++ b/xen/arch/x86/msr.c
>> @@ -86,7 +86,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>> goto gp_fault;
>>
>> *val = IA32_FEATURE_CONTROL_LOCK;
>> - if ( vmce_has_lmce(v) )
>> + if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(v) )
>> *val |= IA32_FEATURE_CONTROL_LMCE_ON;
>> if ( cp->basic.vmx )
>> *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
>
> ... three times.
>
I think I'll move vmce_has_lmce() to arch/x86/cpu/mcheck/mce.h then, if
no objections.
-Sergiy
* Re: [XEN PATCH v1 4/7] x86/MCE: guard lmce_support/cmci_support
2024-04-29 15:42 ` Jan Beulich
@ 2024-05-02 8:56 ` Sergiy Kibrik
0 siblings, 0 replies; 26+ messages in thread
From: Sergiy Kibrik @ 2024-05-02 8:56 UTC (permalink / raw)
To: Jan Beulich
Cc: Andrew Cooper, Roger Pau Monné, Stefano Stabellini, xen-devel
29.04.24 18:42, Jan Beulich:
> On 23.04.2024 10:54, Sergiy Kibrik wrote:
>> Guard access to Intel-specific lmce_support & cmci_support variables in
>> common MCE/VMCE code. These are set in Intel-specific parts of mcheck code
>> and can potentially be skipped if building for a non-Intel platform by
>> disabling CONFIG_INTEL option.
>>
>> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
>
> See comments given for patch 2.
>
I'll squash this patch into patch 7 then, as they depend on each other.
-Sergiy
* Re: [XEN PATCH v1 7/7] x86/MCE: optional build of AMD/Intel MCE code
2024-04-29 15:54 ` Jan Beulich
@ 2024-05-02 8:59 ` Sergiy Kibrik
0 siblings, 0 replies; 26+ messages in thread
From: Sergiy Kibrik @ 2024-05-02 8:59 UTC (permalink / raw)
To: Jan Beulich, Stefano Stabellini
Cc: xen-devel, Andrew Cooper, Roger Pau Monné
29.04.24 18:54, Jan Beulich:
> On 27.04.2024 01:16, Stefano Stabellini wrote:
>> On Tue, 23 Apr 2024, Sergiy Kibrik wrote:
>>> --- a/xen/arch/x86/cpu/mcheck/Makefile
>>> +++ b/xen/arch/x86/cpu/mcheck/Makefile
>>> @@ -1,12 +1,10 @@
>>> -obj-y += amd_nonfatal.o
>>> -obj-y += mce_amd.o
>>> obj-y += mcaction.o
>>> obj-y += barrier.o
>>> -obj-y += intel-nonfatal.o
>>> obj-y += mctelem.o
>>> obj-y += mce.o
>>> obj-y += mce-apei.o
>>> -obj-y += mce_intel.o
>>> +obj-$(CONFIG_AMD) += mce_amd.o amd_nonfatal.o
>>> +obj-$(CONFIG_INTEL) += mce_intel.o intel-nonfatal.o
>>> obj-y += non-fatal.o
>>> obj-y += util.o
>>> obj-y += vmce.o
>>
>> Awesome!
>
> Almost. I'd appreciate if the ordering of files would be retained. It's
> not quite alphabetic, but still. Moving mce_amd.o and mcaction.o to their
> designated slots may or may not be done right here.
Sure, I'll leave the ordering as before.
>
>>> --- a/xen/arch/x86/cpu/mcheck/non-fatal.c
>>> +++ b/xen/arch/x86/cpu/mcheck/non-fatal.c
>>> @@ -24,14 +24,20 @@ static int __init cf_check init_nonfatal_mce_checker(void)
>>> * Check for non-fatal errors every MCE_RATE s
>>> */
>>> switch (c->x86_vendor) {
>>> +#ifdef CONFIG_AMD
>>> case X86_VENDOR_AMD:
>>> case X86_VENDOR_HYGON:
>>> /* Assume we are on K8 or newer AMD or Hygon CPU here */
>>> amd_nonfatal_mcheck_init(c);
>>> break;
>>> +#endif
>>> +#ifdef CONFIG_INTEL
>>> case X86_VENDOR_INTEL:
>>> intel_nonfatal_mcheck_init(c);
>>> break;
>>> +#endif
>>> + default:
>>> + return -ENODEV;
>
> This, while perhaps desirable, doesn't fit ...
>
>>> }
>>> printk(KERN_INFO "mcheck_poll: Machine check polling timer started.\n");
>>> return 0;
>
> ... earlier behavior, and hence is somewhat unexpected in a change which, by
> its description, looks like a "no functional change" one.
>
I see, I'll try to describe it a bit better then.
-Sergiy
end of thread, other threads:[~2024-05-02 8:59 UTC | newest]
Thread overview: 26+ messages
2024-04-23 8:45 [XEN PATCH v1 0/7] x86: make Intel/AMD vPMU & MCE support configurable Sergiy Kibrik
2024-04-23 8:48 ` [XEN PATCH v1 1/7] x86/vpmu: separate amd/intel vPMU code Sergiy Kibrik
2024-04-26 23:00 ` Stefano Stabellini
2024-04-29 15:28 ` Jan Beulich
2024-04-30 9:07 ` Sergiy Kibrik
2024-04-23 8:50 ` [XEN PATCH v1 2/7] x86/intel: guard vmce_has_lmce() with INTEL option Sergiy Kibrik
2024-04-26 23:04 ` Stefano Stabellini
2024-04-29 15:34 ` Jan Beulich
2024-04-30 9:42 ` Sergiy Kibrik
2024-04-23 8:52 ` [XEN PATCH v1 3/7] x86/MCE: guard access to Intel/AMD-specific MCA MSRs Sergiy Kibrik
2024-04-26 23:08 ` Stefano Stabellini
2024-04-29 15:41 ` Jan Beulich
2024-04-23 8:54 ` [XEN PATCH v1 4/7] x86/MCE: guard lmce_support/cmci_support Sergiy Kibrik
2024-04-26 23:10 ` Stefano Stabellini
2024-04-29 15:42 ` Jan Beulich
2024-05-02 8:56 ` Sergiy Kibrik
2024-04-23 8:56 ` [XEN PATCH v1 5/7] x86/MCE: guard {intel/amd}_mcheck_init() calls Sergiy Kibrik
2024-04-26 23:12 ` Stefano Stabellini
2024-04-29 15:45 ` Jan Beulich
2024-04-23 8:58 ` [XEN PATCH v1 6/7] x86/MCE: guard call to Intel-specific intel_get_extended_msrs() Sergiy Kibrik
2024-04-26 23:14 ` Stefano Stabellini
2024-04-29 15:47 ` Jan Beulich
2024-04-23 9:00 ` [XEN PATCH v1 7/7] x86/MCE: optional build of AMD/Intel MCE code Sergiy Kibrik
2024-04-26 23:16 ` Stefano Stabellini
2024-04-29 15:54 ` Jan Beulich
2024-05-02 8:59 ` Sergiy Kibrik