* [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests
@ 2022-03-31  9:27 Roger Pau Monne
  2022-03-31  9:27 ` [PATCH v3 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Roger Pau Monne @ 2022-03-31  9:27 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Hello,

The following series implements support for MSR_VIRT_SPEC_CTRL
(VIRT_SSBD) on different AMD CPU families.

Note that the support is added backwards, starting with the newer CPUs
that support MSR_SPEC_CTRL and moving to the older ones either using
MSR_VIRT_SPEC_CTRL or the SSBD bit in LS_CFG.

Xen is still free to use its own SSBD setting, as the selection is
context switched on vm{entry,exit}.

On Zen2 and later, SPEC_CTRL.SSBD exists and should be used in
preference to VIRT_SPEC_CTRL.SSBD.  However, for migration
compatibility, Xen offers VIRT_SSBD to guests (in the max CPUID policy,
not default) implemented in terms of SPEC_CTRL.SSBD.

On Fam15h through Zen1, Xen exposes VIRT_SSBD to guests by default to
abstract away the model and/or hypervisor specific differences in
MSR_LS_CFG/MSR_VIRT_SPEC_CTRL.

Note that if the hardware itself does offer VIRT_SSBD (i.e. very likely
when running virtualized on pre-Zen2 hardware) and not AMD_SSBD, Xen
will allow untrapped access to MSR_VIRT_SPEC_CTRL for HVM guests.

So the implementation of VIRT_SSBD exposed to HVM guests will use one of
the following underlying mechanisms, in the preference order listed
below:

 * SPEC_CTRL.SSBD. (patch 1)
 * VIRT_SPEC_CTRL.SSBD (untrapped). (patch 2).
 * Non-architectural way using LS_CFG. (patch 3)

This has survived a XenRT basic set of tests on AMD machines.

Roger Pau Monne (3):
  amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD

 xen/arch/x86/cpu/amd.c                      | 116 +++++++++++++++++---
 xen/arch/x86/cpuid.c                        |  28 +++++
 xen/arch/x86/hvm/hvm.c                      |   1 +
 xen/arch/x86/hvm/svm/entry.S                |   6 +
 xen/arch/x86/hvm/svm/svm.c                  |  50 +++++++++
 xen/arch/x86/include/asm/amd.h              |   4 +
 xen/arch/x86/include/asm/cpufeatures.h      |   1 +
 xen/arch/x86/include/asm/msr.h              |  14 +++
 xen/arch/x86/msr.c                          |  26 +++++
 xen/arch/x86/spec_ctrl.c                    |  12 +-
 xen/include/public/arch-x86/cpufeatureset.h |   2 +-
 11 files changed, 241 insertions(+), 19 deletions(-)

-- 
2.35.1



^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v3 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  2022-03-31  9:27 [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
@ 2022-03-31  9:27 ` Roger Pau Monne
  2022-04-21  9:34   ` Jan Beulich
  2022-03-31  9:27 ` [PATCH v3 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Roger Pau Monne @ 2022-03-31  9:27 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Use the logic to set shadow SPEC_CTRL values in order to implement
support for VIRT_SPEC_CTRL (signaled by VIRT_SSBD CPUID flag) for HVM
guests. This includes using the spec_ctrl vCPU MSR variable to store
the guest set value of VIRT_SPEC_CTRL.SSBD, which will be OR'ed with
any SPEC_CTRL values being set by the guest.

On hardware having SPEC_CTRL, VIRT_SPEC_CTRL will not be offered by
default to guests. VIRT_SPEC_CTRL will only be part of the max CPUID
policy so it can be enabled for compatibility purposes.

Some reasoning regarding why '!s' is used to annotate the feature:
 * '!': the feature might be exposed to guests even when not present
   on the host hardware.
 * 's': the feature won't be exposed by default.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Reword reasoning for using '!s'.
 - Trim comment about only setting SSBD bit in spec_ctrl.raw.

Changes since v1:
 - Only expose VIRT_SSBD if AMD_SSBD is available on the host.
 - Revert change to msr-sc= command line option documentation.
 - Only set or clear the SSBD bit of spec_ctrl.
---
 xen/arch/x86/cpuid.c                        |  7 +++++++
 xen/arch/x86/hvm/hvm.c                      |  1 +
 xen/arch/x86/include/asm/msr.h              |  4 ++++
 xen/arch/x86/msr.c                          | 18 ++++++++++++++++++
 xen/arch/x86/spec_ctrl.c                    |  3 ++-
 xen/include/public/arch-x86/cpufeatureset.h |  2 +-
 6 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index bb554b06a7..4ca77ea870 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -543,6 +543,13 @@ static void __init calculate_hvm_max_policy(void)
         __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
         __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
     }
+    else if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
+        /*
+         * If SPEC_CTRL.SSBD is available VIRT_SPEC_CTRL.SSBD can be exposed
+         * and implemented using the former. Expose in the max policy only as
+         * the preference is for guests to use SPEC_CTRL.SSBD if available.
+         */
+        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     /*
      * With VT-x, some features are only supported by Xen if dedicated
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 709a4191ef..595858f2a7 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1334,6 +1334,7 @@ static const uint32_t msrs_to_send[] = {
     MSR_INTEL_MISC_FEATURES_ENABLES,
     MSR_IA32_BNDCFGS,
     MSR_IA32_XSS,
+    MSR_VIRT_SPEC_CTRL,
     MSR_AMD64_DR0_ADDRESS_MASK,
     MSR_AMD64_DR1_ADDRESS_MASK,
     MSR_AMD64_DR2_ADDRESS_MASK,
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index ce4fe51afe..ab6fbb5051 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -291,6 +291,7 @@ struct vcpu_msrs
 {
     /*
      * 0x00000048 - MSR_SPEC_CTRL
+     * 0xc001011f - MSR_VIRT_SPEC_CTRL (if X86_FEATURE_AMD_SSBD)
      *
      * For PV guests, this holds the guest kernel value.  It is accessed on
      * every entry/exit path.
@@ -306,6 +307,9 @@ struct vcpu_msrs
      * We must clear/restore Xen's value before/after VMRUN to avoid unduly
      * influencing the guest.  In order to support "behind the guest's back"
      * protections, we load this value (commonly 0) before VMRUN.
+     *
+     * One such "behind the guest's back" usage is setting SPEC_CTRL.SSBD
+     * if the guest sets VIRT_SPEC_CTRL.SSBD.
      */
     struct {
         uint32_t raw;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 01a15857b7..72c175fd8b 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -381,6 +381,13 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
                ? K8_HWCR_TSC_FREQ_SEL : 0;
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        if ( !cp->extd.virt_ssbd )
+            goto gp_fault;
+
+        *val = msrs->spec_ctrl.raw & SPEC_CTRL_SSBD;
+        break;
+
     case MSR_AMD64_DE_CFG:
         if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
             goto gp_fault;
@@ -666,6 +673,17 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             wrmsr_tsc_aux(val);
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        if ( !cp->extd.virt_ssbd )
+            goto gp_fault;
+
+        /* Only supports SSBD bit, the rest are ignored. */
+        if ( val & SPEC_CTRL_SSBD )
+            msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
+        else
+            msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
+        break;
+
     case MSR_AMD64_DE_CFG:
         /*
          * OpenBSD 6.7 will panic if writing to DE_CFG triggers a #GP:
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 1408e4c7ab..f338bfe292 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -402,12 +402,13 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
      * mitigation support for guests.
      */
 #ifdef CONFIG_HVM
-    printk("  Support for HVM VMs:%s%s%s%s%s\n",
+    printk("  Support for HVM VMs:%s%s%s%s%s%s\n",
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
             opt_eager_fpu)                           ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
+           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_VIRT_SPEC_CTRL" : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            boot_cpu_has(X86_FEATURE_MD_CLEAR)        ? " MD_CLEAR"      : "");
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 9cee4b439e..b797c6bea1 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -265,7 +265,7 @@ XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S  IBRS provides same-mode protection
 XEN_CPUFEATURE(NO_LMSL,       8*32+20) /*S  EFER.LMSLE no longer supported. */
 XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
 XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*S  MSR_SPEC_CTRL.SSBD available */
-XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*   MSR_VIRT_SPEC_CTRL.SSBD */
+XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!s MSR_VIRT_SPEC_CTRL.SSBD */
 XEN_CPUFEATURE(SSB_NO,        8*32+26) /*A  Hardware not vulnerable to SSB */
 XEN_CPUFEATURE(PSFD,          8*32+28) /*S  MSR_SPEC_CTRL.PSFD */
 
-- 
2.35.1




* [PATCH v3 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  2022-03-31  9:27 [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
  2022-03-31  9:27 ` [PATCH v3 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
@ 2022-03-31  9:27 ` Roger Pau Monne
  2022-04-21  9:39   ` Jan Beulich
  2022-03-31  9:27 ` [PATCH v3 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD Roger Pau Monne
  2022-04-22 18:49 ` [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Andrew Cooper
  3 siblings, 1 reply; 13+ messages in thread
From: Roger Pau Monne @ 2022-03-31  9:27 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
hardware has support for it. This requires adding logic in the
vm{entry,exit} paths for SVM in order to context switch between the
hypervisor value and the guest one. The added handlers for context
switch will also be used for the legacy SSBD support.

Introduce a new synthetic feature leaf (X86_FEATURE_VIRT_SC_MSR_HVM)
to signal whether VIRT_SPEC_CTRL needs to be handled on guest
vm{entry,exit}.

This patch changes the annotation 's' to 'S' because it introduces
support to expose VIRT_SSBD to guests by default when the host
(virtual) hardware also supports it.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Reword part of the commit message regarding annotation change.
 - Fix MSR intercept.
 - Add handling of VIRT_SPEC_CTRL to guest_{rd,wr}msr when using
   VIRT_SSBD also.

Changes since v1:
 - Introduce virt_spec_ctrl vCPU field.
 - Context switch VIRT_SPEC_CTRL on vmentry/vmexit separately from
   SPEC_CTRL.
---
 xen/arch/x86/cpuid.c                        | 11 ++++++
 xen/arch/x86/hvm/svm/entry.S                |  6 ++++
 xen/arch/x86/hvm/svm/svm.c                  | 40 +++++++++++++++++++++
 xen/arch/x86/include/asm/cpufeatures.h      |  1 +
 xen/arch/x86/include/asm/msr.h              | 10 ++++++
 xen/arch/x86/msr.c                          | 16 ++++++---
 xen/arch/x86/spec_ctrl.c                    |  9 ++++-
 xen/include/public/arch-x86/cpufeatureset.h |  2 +-
 8 files changed, 89 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 4ca77ea870..91e53284fc 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -534,6 +534,10 @@ static void __init calculate_hvm_max_policy(void)
          raw_cpuid_policy.basic.sep )
         __set_bit(X86_FEATURE_SEP, hvm_featureset);
 
+    if ( !boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+        /* Clear VIRT_SSBD if VIRT_SPEC_CTRL is not exposed to guests. */
+        __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+
     /*
      * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
      * availability, or admin choice), hide the feature.
@@ -590,6 +594,13 @@ static void __init calculate_hvm_def_policy(void)
     guest_common_feature_adjustments(hvm_featureset);
     guest_common_default_feature_adjustments(hvm_featureset);
 
+    /*
+     * AMD_SSBD is preferred over VIRT_SSBD, so don't expose the latter by
+     * default if the former is available.
+     */
+    if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
+        __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+
     sanitise_featureset(hvm_featureset);
     cpuid_featureset_to_policy(hvm_featureset, p);
     recalculate_xstate(p);
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index 4ae55a2ef6..e2c104bb1e 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -57,6 +57,9 @@ __UNLIKELY_END(nsvm_hap)
 
         clgi
 
+        ALTERNATIVE "", STR(call vmentry_virt_spec_ctrl), \
+                        X86_FEATURE_VIRT_SC_MSR_HVM
+
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
         /* SPEC_CTRL_EXIT_TO_SVM       Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         .macro svm_vmentry_spec_ctrl
@@ -114,6 +117,9 @@ __UNLIKELY_END(nsvm_hap)
         ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
+        ALTERNATIVE "", STR(call vmexit_virt_spec_ctrl), \
+                        X86_FEATURE_VIRT_SC_MSR_HVM
+
         stgi
 GLOBAL(svm_stgi_label)
         mov  %rsp,%rdi
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 64a45045da..40ff28ecf1 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -52,6 +52,7 @@
 #include <asm/hvm/svm/svmdebug.h>
 #include <asm/hvm/svm/nestedsvm.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/spec_ctrl.h>
 #include <asm/x86_emulate.h>
 #include <public/sched.h>
 #include <asm/hvm/vpt.h>
@@ -610,6 +611,15 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     svm_intercept_msr(v, MSR_SPEC_CTRL,
                       cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
 
+    /*
+     * Give access to MSR_VIRT_SPEC_CTRL if the guest has been told about it
+     * and the hardware implements it.
+     */
+    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
+                      cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
+                      !cpu_has_amd_ssbd ?
+                      MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
     svm_intercept_msr(v, MSR_PRED_CMD,
                       cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
@@ -3105,6 +3115,36 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
     vmcb_set_vintr(vmcb, intr);
 }
 
+/* Called with GIF=0. */
+void vmexit_virt_spec_ctrl(void)
+{
+    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
+
+    if ( cpu_has_virt_ssbd )
+    {
+        unsigned int lo, hi;
+
+        /*
+         * Need to read from the hardware because VIRT_SPEC_CTRL is not context
+         * switched by the hardware, and we allow the guest untrapped access to
+         * the register.
+         */
+        rdmsr(MSR_VIRT_SPEC_CTRL, lo, hi);
+        if ( val != lo )
+            wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+        current->arch.msrs->virt_spec_ctrl.raw = lo;
+    }
+}
+
+/* Called with GIF=0. */
+void vmentry_virt_spec_ctrl(void)
+{
+    unsigned int val = current->arch.msrs->virt_spec_ctrl.raw;
+
+    if ( val != (opt_ssbd ? SPEC_CTRL_SSBD : 0) )
+        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index 7413febd7a..2240547b64 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -40,6 +40,7 @@ XEN_CPUFEATURE(SC_VERW_HVM,       X86_SYNTH(24)) /* VERW used by Xen for HVM */
 XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
 XEN_CPUFEATURE(XEN_SHSTK,         X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
 XEN_CPUFEATURE(XEN_IBT,           X86_SYNTH(27)) /* Xen uses CET Indirect Branch Tracking */
+XEN_CPUFEATURE(VIRT_SC_MSR_HVM,   X86_SYNTH(28)) /* MSR_VIRT_SPEC_CTRL exposed to HVM */
 
 /* Bug words follow the synthetic words. */
 #define X86_NR_BUG 1
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index ab6fbb5051..460aabe84f 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -375,6 +375,16 @@ struct vcpu_msrs
      */
     uint32_t tsc_aux;
 
+    /*
+     * 0xc001011f - MSR_VIRT_SPEC_CTRL (if !X86_FEATURE_AMD_SSBD)
+     *
+     * AMD only. Guest selected value, saved and restored on guest VM
+     * entry/exit.
+     */
+    struct {
+        uint32_t raw;
+    } virt_spec_ctrl;
+
     /*
      * 0xc00110{27,19-1b} MSR_AMD64_DR{0-3}_ADDRESS_MASK
      *
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 72c175fd8b..a1e268eea9 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -385,7 +385,10 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         if ( !cp->extd.virt_ssbd )
             goto gp_fault;
 
-        *val = msrs->spec_ctrl.raw & SPEC_CTRL_SSBD;
+        if ( cpu_has_amd_ssbd )
+            *val = msrs->spec_ctrl.raw & SPEC_CTRL_SSBD;
+        else
+            *val = msrs->virt_spec_ctrl.raw;
         break;
 
     case MSR_AMD64_DE_CFG:
@@ -678,10 +681,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             goto gp_fault;
 
         /* Only supports SSBD bit, the rest are ignored. */
-        if ( val & SPEC_CTRL_SSBD )
-            msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
+        if ( cpu_has_amd_ssbd )
+        {
+            if ( val & SPEC_CTRL_SSBD )
+                msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
+            else
+                msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
+        }
         else
-            msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
+            msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
         break;
 
     case MSR_AMD64_DE_CFG:
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index f338bfe292..0d5ec877d1 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -406,9 +406,12 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
+            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) ||
             opt_eager_fpu)                           ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
-           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_VIRT_SPEC_CTRL" : "",
+           (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
+            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM)) ? " MSR_VIRT_SPEC_CTRL"
+                                                       : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            boot_cpu_has(X86_FEATURE_MD_CLEAR)        ? " MD_CLEAR"      : "");
@@ -1069,6 +1072,10 @@ void __init init_speculation_mitigations(void)
             setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
     }
 
+    /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
+    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd && cpu_has_virt_ssbd )
+        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
+
     /* If we have IBRS available, see whether we should use it. */
     if ( has_spec_ctrl && ibrs )
         default_xen_spec_ctrl |= SPEC_CTRL_IBRS;
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index b797c6bea1..0639b9faf2 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -265,7 +265,7 @@ XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S  IBRS provides same-mode protection
 XEN_CPUFEATURE(NO_LMSL,       8*32+20) /*S  EFER.LMSLE no longer supported. */
 XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
 XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*S  MSR_SPEC_CTRL.SSBD available */
-XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!s MSR_VIRT_SPEC_CTRL.SSBD */
+XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!S MSR_VIRT_SPEC_CTRL.SSBD */
 XEN_CPUFEATURE(SSB_NO,        8*32+26) /*A  Hardware not vulnerable to SSB */
 XEN_CPUFEATURE(PSFD,          8*32+28) /*S  MSR_SPEC_CTRL.PSFD */
 
-- 
2.35.1




* [PATCH v3 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-03-31  9:27 [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
  2022-03-31  9:27 ` [PATCH v3 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
  2022-03-31  9:27 ` [PATCH v3 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
@ 2022-03-31  9:27 ` Roger Pau Monne
  2022-04-21  9:50   ` Jan Beulich
  2022-04-22 18:49 ` [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Andrew Cooper
  3 siblings, 1 reply; 13+ messages in thread
From: Roger Pau Monne @ 2022-03-31  9:27 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Expose VIRT_SSBD to guests if the hardware supports setting SSBD in
the LS_CFG MSR (a.k.a. the non-architectural way). Different AMD CPU
families use different bits in LS_CFG, so exposing VIRT_SPEC_CTRL.SSBD
allows for a unified way of exposing SSBD support to guests on AMD
hardware that's migration compatible, regardless of which underlying
mechanism is used to set SSBD.

Note that on AMD Family 17h (Zen 1) the value of SSBD in LS_CFG is
shared between threads on the same core, so there's extra logic in
order to synchronize the value and have SSBD set as long as one of the
threads in the core requires it to be set. Such logic also requires
extra storage for each thread state, which is allocated at
initialization time.

Do the context switching of the SSBD selection in LS_CFG between
hypervisor and guest in the same handler that's already used to switch
the value of VIRT_SPEC_CTRL in the hardware when supported.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Fix coding style issues.
 - Use AMD_ZEN1_MAX_SOCKETS to define the max number of possible
   sockets in Zen1 systems.

Changes since v1:
 - Report legacy SSBD support using a global variable.
 - Use ro_after_init for ssbd_max_cores.
 - Handle boot_cpu_data.x86_num_siblings < 1.
 - Add comment regarding _irqsave usage in amd_set_legacy_ssbd.
---
 xen/arch/x86/cpu/amd.c         | 116 ++++++++++++++++++++++++++++-----
 xen/arch/x86/cpuid.c           |  10 +++
 xen/arch/x86/hvm/svm/svm.c     |  12 +++-
 xen/arch/x86/include/asm/amd.h |   4 ++
 xen/arch/x86/spec_ctrl.c       |   4 +-
 5 files changed, 127 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 4999f8be2b..a256a9d882 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -48,6 +48,7 @@ boolean_param("allow_unsafe", opt_allow_unsafe);
 
 /* Signal whether the ACPI C1E quirk is required. */
 bool __read_mostly amd_acpi_c1e_quirk;
+bool __ro_after_init amd_legacy_ssbd;
 
 static inline int rdmsr_amd_safe(unsigned int msr, unsigned int *lo,
 				 unsigned int *hi)
@@ -685,23 +686,10 @@ void amd_init_lfence(struct cpuinfo_x86 *c)
  * Refer to the AMD Speculative Store Bypass whitepaper:
  * https://developer.amd.com/wp-content/resources/124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
  */
-void amd_init_ssbd(const struct cpuinfo_x86 *c)
+static bool set_legacy_ssbd(const struct cpuinfo_x86 *c, bool enable)
 {
 	int bit = -1;
 
-	if (cpu_has_ssb_no)
-		return;
-
-	if (cpu_has_amd_ssbd) {
-		/* Handled by common MSR_SPEC_CTRL logic */
-		return;
-	}
-
-	if (cpu_has_virt_ssbd) {
-		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
-		return;
-	}
-
 	switch (c->x86) {
 	case 0x15: bit = 54; break;
 	case 0x16: bit = 33; break;
@@ -715,20 +703,114 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
 		if (rdmsr_safe(MSR_AMD64_LS_CFG, val) ||
 		    ({
 			    val &= ~mask;
-			    if (opt_ssbd)
+			    if (enable)
 				    val |= mask;
 			    false;
 		    }) ||
 		    wrmsr_safe(MSR_AMD64_LS_CFG, val) ||
 		    ({
 			    rdmsrl(MSR_AMD64_LS_CFG, val);
-			    (val & mask) != (opt_ssbd * mask);
+			    (val & mask) != (enable * mask);
 		    }))
 			bit = -1;
 	}
 
-	if (bit < 0)
+	return bit >= 0;
+}
+
+void amd_init_ssbd(const struct cpuinfo_x86 *c)
+{
+	if (cpu_has_ssb_no)
+		return;
+
+	if (cpu_has_amd_ssbd) {
+		/* Handled by common MSR_SPEC_CTRL logic */
+		return;
+	}
+
+	if (cpu_has_virt_ssbd) {
+		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
+		return;
+	}
+
+	if (!set_legacy_ssbd(c, opt_ssbd)) {
 		printk_once(XENLOG_ERR "No SSBD controls available\n");
+		if (amd_legacy_ssbd)
+			panic("CPU feature mismatch: no legacy SSBD\n");
+	} else if (c == &boot_cpu_data)
+		amd_legacy_ssbd = true;
+}
+
+static struct ssbd_core {
+    spinlock_t lock;
+    unsigned int count;
+} *ssbd_core;
+static unsigned int __ro_after_init ssbd_max_cores;
+#define AMD_ZEN1_MAX_SOCKETS 2
+
+bool __init amd_setup_legacy_ssbd(void)
+{
+	unsigned int i;
+
+	if (boot_cpu_data.x86 != 0x17 || boot_cpu_data.x86_num_siblings <= 1)
+		return true;
+
+	/*
+	 * One could be forgiven for thinking that c->x86_max_cores is the
+	 * correct value to use here.
+	 *
+	 * However, that value is derived from the current configuration, and
+	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
+	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
+	 */
+	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
+		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
+		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
+	}
+	if (!ssbd_max_cores)
+		return false;
+
+	/* Max is two sockets for Fam17h hardware. */
+	ssbd_core = xzalloc_array(struct ssbd_core,
+	                          ssbd_max_cores * AMD_ZEN1_MAX_SOCKETS);
+	if (!ssbd_core)
+		return false;
+
+	for (i = 0; i < ssbd_max_cores * AMD_ZEN1_MAX_SOCKETS; i++) {
+		spin_lock_init(&ssbd_core[i].lock);
+		/* Record initial state, also applies to any hotplug CPU. */
+		if (opt_ssbd)
+			ssbd_core[i].count = boot_cpu_data.x86_num_siblings;
+	}
+
+	return true;
+}
+
+void amd_set_legacy_ssbd(bool enable)
+{
+	const struct cpuinfo_x86 *c = &current_cpu_data;
+	struct ssbd_core *core;
+	unsigned long flags;
+
+	if (c->x86 != 0x17 || c->x86_num_siblings <= 1) {
+		BUG_ON(!set_legacy_ssbd(c, enable));
+		return;
+	}
+
+	BUG_ON(c->phys_proc_id >= AMD_ZEN1_MAX_SOCKETS);
+	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
+	core = &ssbd_core[c->phys_proc_id * ssbd_max_cores + c->cpu_core_id];
+	/*
+	 * Use irqsave variant to make check_lock() happy. When called from
+	 * vm{exit,entry}_virt_spec_ctrl GIF=0, but the state of IF could be 1,
+	 * thus fooling the spinlock check.
+	 */
+	spin_lock_irqsave(&core->lock, flags);
+	core->count += enable ? 1 : -1;
+	ASSERT(core->count <= c->x86_num_siblings);
+	if (enable ? core->count == 1 : !core->count)
+		BUG_ON(!set_legacy_ssbd(c, enable));
+	spin_unlock_irqrestore(&core->lock, flags);
 }
 
 void __init detect_zen2_null_seg_behaviour(void)
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 91e53284fc..e278fee689 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -537,6 +537,16 @@ static void __init calculate_hvm_max_policy(void)
     if ( !boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
         /* Clear VIRT_SSBD if VIRT_SPEC_CTRL is not exposed to guests. */
         __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+    else
+        /*
+         * Expose VIRT_SSBD if VIRT_SPEC_CTRL is supported, as that implies the
+         * underlying hardware is capable of setting SSBD using a
+         * non-architectural way, or that VIRT_SSBD is available.
+         *
+         * Note that if the hardware supports VIRT_SSBD natively this setting
+         * will just override an already set bit.
+         */
+        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     /*
      * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 40ff28ecf1..9b8f8d21bd 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -3133,7 +3133,12 @@ void vmexit_virt_spec_ctrl(void)
         if ( val != lo )
             wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
         current->arch.msrs->virt_spec_ctrl.raw = lo;
+
+        return;
     }
+
+    if ( val != current->arch.msrs->virt_spec_ctrl.raw )
+        amd_set_legacy_ssbd(val & SPEC_CTRL_SSBD);
 }
 
 /* Called with GIF=0. */
@@ -3142,7 +3147,12 @@ void vmentry_virt_spec_ctrl(void)
     unsigned int val = current->arch.msrs->virt_spec_ctrl.raw;
 
     if ( val != (opt_ssbd ? SPEC_CTRL_SSBD : 0) )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+    {
+        if ( cpu_has_virt_ssbd )
+            wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+        else
+            amd_set_legacy_ssbd(val & SPEC_CTRL_SSBD);
+    }
 }
 
 /*
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index a82382e6bf..6a42f68542 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -151,4 +151,8 @@ void check_enable_amd_mmconf_dmi(void);
 extern bool amd_acpi_c1e_quirk;
 void amd_check_disable_c1e(unsigned int port, u8 value);
 
+extern bool amd_legacy_ssbd;
+bool amd_setup_legacy_ssbd(void);
+void amd_set_legacy_ssbd(bool enable);
+
 #endif /* __AMD_H__ */
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 0d5ec877d1..495e6f9405 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -22,6 +22,7 @@
 #include <xen/param.h>
 #include <xen/warning.h>
 
+#include <asm/amd.h>
 #include <asm/hvm/svm/svm.h>
 #include <asm/microcode.h>
 #include <asm/msr.h>
@@ -1073,7 +1074,8 @@ void __init init_speculation_mitigations(void)
     }
 
     /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
-    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd && cpu_has_virt_ssbd )
+    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd &&
+         (cpu_has_virt_ssbd || (amd_legacy_ssbd && amd_setup_legacy_ssbd())) )
         setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
 
     /* If we have IBRS available, see whether we should use it. */
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH v3 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  2022-03-31  9:27 ` [PATCH v3 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
@ 2022-04-21  9:34   ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2022-04-21  9:34 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 31.03.2022 11:27, Roger Pau Monne wrote:
> Use the logic to set shadow SPEC_CTRL values in order to implement
> support for VIRT_SPEC_CTRL (signaled by VIRT_SSBD CPUID flag) for HVM
> guests. This includes using the spec_ctrl vCPU MSR variable to store
> the guest set value of VIRT_SPEC_CTRL.SSBD, which will be OR'ed with
> any SPEC_CTRL values being set by the guest.
> 
> On hardware having SPEC_CTRL VIRT_SPEC_CTRL will not be offered by
> default to guests. VIRT_SPEC_CTRL will only be part of the max CPUID
> policy so it can be enabled for compatibility purposes.
> 
> Some reasoning regarding why '!s' is used to annotate the feature:
>  * '!': the feature might be exposed to guests even when not present
>    on the host hardware.

As mentioned before I don't think ! is to be used for this particular
purpose. But this needs input from Andrew. Hence ...

>  * 's': the feature won't be exposed by default.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

... only tentatively:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [PATCH v3 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  2022-03-31  9:27 ` [PATCH v3 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
@ 2022-04-21  9:39   ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2022-04-21  9:39 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 31.03.2022 11:27, Roger Pau Monne wrote:
> Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
> hardware has support for it. This requires adding logic in the
> vm{entry,exit} paths for SVM in order to context switch between the
> hypervisor value and the guest one. The added handlers for context
> switch will also be used for the legacy SSBD support.
> 
> Introduce a new synthetic feature leaf (X86_FEATURE_VIRT_SC_MSR_HVM)
> to signal whether VIRT_SPEC_CTRL needs to be handled on guest
> vm{entry,exit}.
> 
> This patch changes the annotation 's' to 'S' because it introduces
> support to expose VIRT_SSBD to guests by default when the host
> (virtual) hardware also supports it.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v3 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-03-31  9:27 ` [PATCH v3 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD Roger Pau Monne
@ 2022-04-21  9:50   ` Jan Beulich
  2022-04-21 15:21     ` Roger Pau Monné
  0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2022-04-21  9:50 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 31.03.2022 11:27, Roger Pau Monne wrote:
> Expose VIRT_SSBD to guests if the hardware supports setting SSBD in
> the LS_CFG MSR (a.k.a. non-architectural way). Different AMD CPU
> families use different bits in LS_CFG, so exposing VIRT_SPEC_CTRL.SSBD
> allows for a unified way of exposing SSBD support to guests on AMD
> hardware that's compatible migration-wise, regardless of what
> underlying mechanism is used to set SSBD.
> 
> Note that on AMD Family 17h (Zen 1) the value of SSBD in LS_CFG is
> shared between threads on the same core, so there's extra logic in
> order to synchronize the value and have SSBD set as long as one of the
> threads in the core requires it to be set. Such logic also requires
> extra storage for each thread state, which is allocated at
> initialization time.

So where exactly is the boundary? If I'm not mistaken Zen2 is also
Fam17 (and only Zen3 is Fam19), yet here and elsewhere you look to
take Zen1 == Fam17.

Just one further nit on the code:

> +static struct ssbd_core {
> +    spinlock_t lock;
> +    unsigned int count;
> +} *ssbd_core;
> +static unsigned int __ro_after_init ssbd_max_cores;
> +#define AMD_ZEN1_MAX_SOCKETS 2

This #define looks to render ...

> +bool __init amd_setup_legacy_ssbd(void)
> +{
> +	unsigned int i;
> +
> +	if (boot_cpu_data.x86 != 0x17 || boot_cpu_data.x86_num_siblings <= 1)
> +		return true;
> +
> +	/*
> +	 * One could be forgiven for thinking that c->x86_max_cores is the
> +	 * correct value to use here.
> +	 *
> +	 * However, that value is derived from the current configuration, and
> +	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
> +	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
> +	 */
> +	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
> +		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
> +		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
> +	}
> +	if (!ssbd_max_cores)
> +		return false;
> +
> +	/* Max is two sockets for Fam17h hardware. */
> +	ssbd_core = xzalloc_array(struct ssbd_core,
> +	                          ssbd_max_cores * AMD_ZEN1_MAX_SOCKETS);

... largely redundant the comment here; if anywhere I think the comment
would want to go next to the #define.

Jan




* Re: [PATCH v3 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-04-21  9:50   ` Jan Beulich
@ 2022-04-21 15:21     ` Roger Pau Monné
  2022-04-21 15:27       ` Jan Beulich
  0 siblings, 1 reply; 13+ messages in thread
From: Roger Pau Monné @ 2022-04-21 15:21 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, xen-devel

On Thu, Apr 21, 2022 at 11:50:16AM +0200, Jan Beulich wrote:
> On 31.03.2022 11:27, Roger Pau Monne wrote:
> > Expose VIRT_SSBD to guests if the hardware supports setting SSBD in
> > the LS_CFG MSR (a.k.a. non-architectural way). Different AMD CPU
> > families use different bits in LS_CFG, so exposing VIRT_SPEC_CTRL.SSBD
> > allows for a unified way of exposing SSBD support to guests on AMD
> > hardware that's compatible migration-wise, regardless of what
> > underlying mechanism is used to set SSBD.
> > 
> > Note that on AMD Family 17h (Zen 1) the value of SSBD in LS_CFG is
> > shared between threads on the same core, so there's extra logic in
> > order to synchronize the value and have SSBD set as long as one of the
> > threads in the core requires it to be set. Such logic also requires
> > extra storage for each thread state, which is allocated at
> > initialization time.
> 
> So where exactly is the boundary? If I'm not mistaken Zen2 is also
> Fam17 (and only Zen3 is Fam19), yet here and elsewhere you look to
> take Zen1 == Fam17.

Right, but Zen2 already has AMD_SSBD (ie: SPEC_CTRL), so it's not
using this logic.

The AMD whitepaper is clearer about this: any Fam17h processor that
is using the non-architectural MSRs to set SSBD and has more than 1
logical processor per physical core must synchronize the setting
of SSBD.

I think just dropping the mention of Zen 1 in the commit message
should remove your concerns?

> Just one further nit on the code:
> 
> > +static struct ssbd_core {
> > +    spinlock_t lock;
> > +    unsigned int count;
> > +} *ssbd_core;
> > +static unsigned int __ro_after_init ssbd_max_cores;
> > +#define AMD_ZEN1_MAX_SOCKETS 2
> 
> This #define looks to render ...
> 
> > +bool __init amd_setup_legacy_ssbd(void)
> > +{
> > +	unsigned int i;
> > +
> > +	if (boot_cpu_data.x86 != 0x17 || boot_cpu_data.x86_num_siblings <= 1)
> > +		return true;
> > +
> > +	/*
> > +	 * One could be forgiven for thinking that c->x86_max_cores is the
> > +	 * correct value to use here.
> > +	 *
> > +	 * However, that value is derived from the current configuration, and
> > +	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
> > +	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
> > +	 */
> > +	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
> > +		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
> > +		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
> > +	}
> > +	if (!ssbd_max_cores)
> > +		return false;
> > +
> > +	/* Max is two sockets for Fam17h hardware. */
> > +	ssbd_core = xzalloc_array(struct ssbd_core,
> > +	                          ssbd_max_cores * AMD_ZEN1_MAX_SOCKETS);
> 
> ... largely redundant the comment here; if anywhere I think the comment
> would want to go next to the #define.

I guess I should also rename the define to FAM17H instead of ZEN1, as
all Fam17h is limited to two sockets, then the comment can be removed
as I think the define is self-explanatory.

Thanks, Roger.



* Re: [PATCH v3 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-04-21 15:21     ` Roger Pau Monné
@ 2022-04-21 15:27       ` Jan Beulich
  2022-04-22  9:55         ` Roger Pau Monné
  0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2022-04-21 15:27 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 21.04.2022 17:21, Roger Pau Monné wrote:
> On Thu, Apr 21, 2022 at 11:50:16AM +0200, Jan Beulich wrote:
>> On 31.03.2022 11:27, Roger Pau Monne wrote:
>>> Expose VIRT_SSBD to guests if the hardware supports setting SSBD in
>>> the LS_CFG MSR (a.k.a. non-architectural way). Different AMD CPU
>>> families use different bits in LS_CFG, so exposing VIRT_SPEC_CTRL.SSBD
>>> allows for a unified way of exposing SSBD support to guests on AMD
>>> hardware that's compatible migration-wise, regardless of what
>>> underlying mechanism is used to set SSBD.
>>>
>>> Note that on AMD Family 17h (Zen 1) the value of SSBD in LS_CFG is
>>> shared between threads on the same core, so there's extra logic in
>>> order to synchronize the value and have SSBD set as long as one of the
>>> threads in the core requires it to be set. Such logic also requires
>>> extra storage for each thread state, which is allocated at
>>> initialization time.
>>
>> So where exactly is the boundary? If I'm not mistaken Zen2 is also
>> Fam17 (and only Zen3 is Fam19), yet here and elsewhere you look to
>> take Zen1 == Fam17.
> 
> Right, but Zen2 already has AMD_SSBD (ie: SPEC_CTRL), so it's not
> using this logic.
> 
> The AMD whitepaper is clearer about this: any Fam17h processor that
> is using the non-architectural MSRs to set SSBD and has more than 1
> logical processor per physical core must synchronize the setting
> of SSBD.
> 
> I think just dropping the mention of Zen 1 in the commit message
> should remove your concerns?

Or keep it, but qualify it by saying that Zen2 isn't expected to take
this path because of having SSBD. But iirc SSBD was introduced to
Zen2 only by a ucode update, so such a description should not be wrong
wrt such not-up-to-date systems.

>> Just one further nit on the code:
>>
>>> +static struct ssbd_core {
>>> +    spinlock_t lock;
>>> +    unsigned int count;
>>> +} *ssbd_core;
>>> +static unsigned int __ro_after_init ssbd_max_cores;
>>> +#define AMD_ZEN1_MAX_SOCKETS 2
>>
>> This #define looks to render ...
>>
>>> +bool __init amd_setup_legacy_ssbd(void)
>>> +{
>>> +	unsigned int i;
>>> +
>>> +	if (boot_cpu_data.x86 != 0x17 || boot_cpu_data.x86_num_siblings <= 1)
>>> +		return true;
>>> +
>>> +	/*
>>> +	 * One could be forgiven for thinking that c->x86_max_cores is the
>>> +	 * correct value to use here.
>>> +	 *
>>> +	 * However, that value is derived from the current configuration, and
>>> +	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
>>> +	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
>>> +	 */
>>> +	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
>>> +		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
>>> +		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
>>> +	}
>>> +	if (!ssbd_max_cores)
>>> +		return false;
>>> +
>>> +	/* Max is two sockets for Fam17h hardware. */
>>> +	ssbd_core = xzalloc_array(struct ssbd_core,
>>> +	                          ssbd_max_cores * AMD_ZEN1_MAX_SOCKETS);
>>
>> ... largely redundant the comment here; if anywhere I think the comment
>> would want to go next to the #define.
> 
> I guess I should also rename the define to FAM17H instead of ZEN1, as
> all Fam17h is limited to two sockets, then the comment can be removed
> as I think the define is self-explanatory.

Yes, this rename would help both eliminate my confusion as well as the
need for the comment.

Jan




* Re: [PATCH v3 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-04-21 15:27       ` Jan Beulich
@ 2022-04-22  9:55         ` Roger Pau Monné
  2022-04-22 10:01           ` Jan Beulich
  0 siblings, 1 reply; 13+ messages in thread
From: Roger Pau Monné @ 2022-04-22  9:55 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, xen-devel

On Thu, Apr 21, 2022 at 05:27:18PM +0200, Jan Beulich wrote:
> On 21.04.2022 17:21, Roger Pau Monné wrote:
> > On Thu, Apr 21, 2022 at 11:50:16AM +0200, Jan Beulich wrote:
> >> On 31.03.2022 11:27, Roger Pau Monne wrote:
> >>> Expose VIRT_SSBD to guests if the hardware supports setting SSBD in
> >>> the LS_CFG MSR (a.k.a. non-architectural way). Different AMD CPU
> >>> families use different bits in LS_CFG, so exposing VIRT_SPEC_CTRL.SSBD
> >>> allows for a unified way of exposing SSBD support to guests on AMD
> >>> hardware that's compatible migration-wise, regardless of what
> >>> underlying mechanism is used to set SSBD.
> >>>
> >>> Note that on AMD Family 17h (Zen 1) the value of SSBD in LS_CFG is
> >>> shared between threads on the same core, so there's extra logic in
> >>> order to synchronize the value and have SSBD set as long as one of the
> >>> threads in the core requires it to be set. Such logic also requires
> >>> extra storage for each thread state, which is allocated at
> >>> initialization time.
> >>
> >> So where exactly is the boundary? If I'm not mistaken Zen2 is also
> >> Fam17 (and only Zen3 is Fam19), yet here and elsewhere you look to
> >> take Zen1 == Fam17.
> > 
> > Right, but Zen2 already has AMD_SSBD (ie: SPEC_CTRL), so it's not
> > using this logic.
> > 
> > The AMD whitepaper is more clear about this: any Fam17h processor that
> > is using the non-architectural MSRs to set SSBD and has more than 1
> > logical processor for each logical core must synchronize the setting
> > of SSBD.
> > 
> > I think just dropping the mention of Zen 1 in the commit message
> > should remove your concerns?
> 
> Or keep it, but qualify it by saying that Zen2 isn't expected to take
> this path because of having SSBD. But iirc SSBD was introduced to
> Zen2 only by a ucode update, so such a description should not be wrong
> wrt such not-up-to-date systems.

FTAOD I've worded this as:

"Note that on AMD Family 17h and Hygon Family 18h processors the value
of SSBD in LS_CFG is shared between threads on the same core, so
there's extra logic in order to synchronize the value and have SSBD
set as long as one of the threads in the core requires it to be set.
Such logic also requires extra storage for each thread state, which is
allocated at initialization time."

Which I think is correct in all cases.  If Zen2 were to resort to
using the non-architectural way of setting SSBD (if that's even
possible) it should synchronize it between threads according to my
read of the AMD whitepaper.

I've also added handling for Hygon Fam18h, seeing as those also make
use of the non-architectural way of setting SSBD.

Thanks, Roger.



* Re: [PATCH v3 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-04-22  9:55         ` Roger Pau Monné
@ 2022-04-22 10:01           ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2022-04-22 10:01 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 22.04.2022 11:55, Roger Pau Monné wrote:
> On Thu, Apr 21, 2022 at 05:27:18PM +0200, Jan Beulich wrote:
>> On 21.04.2022 17:21, Roger Pau Monné wrote:
>>> On Thu, Apr 21, 2022 at 11:50:16AM +0200, Jan Beulich wrote:
>>>> On 31.03.2022 11:27, Roger Pau Monne wrote:
>>>>> Expose VIRT_SSBD to guests if the hardware supports setting SSBD in
>>>>> the LS_CFG MSR (a.k.a. non-architectural way). Different AMD CPU
>>>>> families use different bits in LS_CFG, so exposing VIRT_SPEC_CTRL.SSBD
> >>>>> allows for a unified way of exposing SSBD support to guests on AMD
> >>>>> hardware that's compatible migration-wise, regardless of what
>>>>> underlying mechanism is used to set SSBD.
>>>>>
>>>>> Note that on AMD Family 17h (Zen 1) the value of SSBD in LS_CFG is
>>>>> shared between threads on the same core, so there's extra logic in
>>>>> order to synchronize the value and have SSBD set as long as one of the
>>>>> threads in the core requires it to be set. Such logic also requires
>>>>> extra storage for each thread state, which is allocated at
>>>>> initialization time.
>>>>
>>>> So where exactly is the boundary? If I'm not mistaken Zen2 is also
>>>> Fam17 (and only Zen3 is Fam19), yet here and elsewhere you look to
>>>> take Zen1 == Fam17.
>>>
>>> Right, but Zen2 already has AMD_SSBD (ie: SPEC_CTRL), so it's not
>>> using this logic.
>>>
>>> The AMD whitepaper is clearer about this: any Fam17h processor that
>>> is using the non-architectural MSRs to set SSBD and has more than 1
>>> logical processor per physical core must synchronize the setting
>>> of SSBD.
>>>
>>> I think just dropping the mention of Zen 1 in the commit message
>>> should remove your concerns?
>>
>> Or keep it, but qualify it by saying that Zen2 isn't expected to take
>> this path because of having SSBD. But iirc SSBD was introduced to
>> Zen2 only by a ucode update, so such a description should not be wrong
>> wrt such not-up-to-date systems.
> 
> FTAOD I've worded this as:
> 
> "Note that on AMD Family 17h and Hygon Family 18h processors the value
> of SSBD in LS_CFG is shared between threads on the same core, so
> there's extra logic in order to synchronize the value and have SSBD
> set as long as one of the threads in the core requires it to be set.
> Such logic also requires extra storage for each thread state, which is
> allocated at initialization time."

Thanks.

> Which I think is correct in all cases.  If Zen2 were to resort to
> using the non-architectural way of setting SSBD (if that's even
> possible) it should synchronize it between threads according to my
> read of the AMD whitepaper.
> 
> I've also added handling for Hygon Fam18h, seeing as those also make
> use of the non-architectural way of setting SSBD.

Right, better be on the safe side.

Jan




* Re: [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests
  2022-03-31  9:27 [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
                   ` (2 preceding siblings ...)
  2022-03-31  9:27 ` [PATCH v3 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD Roger Pau Monne
@ 2022-04-22 18:49 ` Andrew Cooper
  2022-04-25 11:28   ` Roger Pau Monné
  3 siblings, 1 reply; 13+ messages in thread
From: Andrew Cooper @ 2022-04-22 18:49 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel; +Cc: Jan Beulich, Wei Liu

On 31/03/2022 10:27, Roger Pau Monne wrote:
> Hello,
>
> The following series implements support for MSR_VIRT_SPEC_CTRL
> (VIRT_SSBD) on different AMD CPU families.
>
> Note that the support is added backwards, starting with the newer CPUs
> that support MSR_SPEC_CTRL and moving to the older ones either using
> MSR_VIRT_SPEC_CTRL or the SSBD bit in LS_CFG.
>
> Xen is still free to use its own SSBD setting, as the selection is
> context switched on vm{entry,exit}.
>
> On Zen2 and later, SPEC_CTRL.SSBD exists and should be used in
> preference to VIRT_SPEC_CTRL.SSBD.  However, for migration
> compatibility, Xen offers VIRT_SSBD to guests (in the max CPUID policy,
> not default) implemented in terms of SPEC_CTRL.SSBD.
>
> On Fam15h thru Zen1, Xen exposes VIRT_SSBD to guests by default to
> abstract away the model and/or hypervisor specific differences in
> MSR_LS_CFG/MSR_VIRT_SPEC_CTRL.
>
> Note that if the hardware itself does offer VIRT_SSBD (ie: very likely
> when running virtualized on < Zen2 hardware) and not AMD_SSBD Xen will
> allow untrapped access to MSR_VIRT_SPEC_CTRL for HVM guests.
>
> So the implementation of VIRT_SSBD exposed to HVM guests will use one of
> the following underlying mechanisms, in the preference order listed
> below:
>
>  * SPEC_CTRL.SSBD. (patch 1)
>  * VIRT_SPEC_CTRL.SSBD (untrapped). (patch 2).
>  * Non-architectural way using LS_CFG. (patch 3)
>
> This has survived a XenRT basic set of tests on AMD machines.

Sorry for the mixed feedback, but some is applicable across multiple
patches.

First, it is important to know why MSR_VIRT_SPEC_CTRL exists, because
that informs what is, and is not, sensible to do with it.

It exists to be a FMS-invariant abstraction of the DE_CFG interface,
emulated by the hypervisor.  At the time, we experimented with emulating
MSR_SPEC_CTRL directly, but the results were unusably slow (legacy IBRS
causing a vmexit on every syscall/interrupt entry&exit) so
MSR_VIRT_SPEC_CTRL is also an explicit statement that it is an expensive
operation and shouldn't be used frequently.

In practice, this means "only for very very important processes, and not
to be used more frequently than process context switch".  Also, there is
no hardware which implements MSR_VIRT_SPEC_CTRL, nor will there be.

Patch 2 has added an extra two vmexits around each vmexit, in an effort
to let L2 vmexit to L0 rather than L1 for what is likely to be 0 times
in an L1 timeslice.  It's not a credible optimisation, for something
which isn't a production usecase.  Yes - nested virt does exist, and is
useful for dev, but noone runs a fully fat server virt hypervisor at L1
in production if they care in the slightest about performance.  Either
way, patch 2 is premature optimisation with a massive complexity cost.

Furthermore, writes to LS_CFG are also incredibly expensive, even if
you're not changing any bits.  The AMD recommended algorithm
specifically avoids rewriting it with the same value as before.

Another thing is that Xen shouldn't touch LS_CFG like this if there is
any hint of a hypervisor on the system.  If there is a hypervisor and it
doesn't offer VIRT_SPEC_CTRL, trying to play with LS_CFG isn't going to
make the situation any better.

As to the CPUID bit handling, on consideration of the whole series, it
wants to be "!" only.  ! is there to indicate "something complicated is
going on with this bit", and life is too short to try and get the
derivation logic right with both implicit and explicit conditions. 
Leave it without an s/S (so no auto propagation from the host policy),
and set it in the max policy for LS_CFG || VIRT_SPEC_CTRL || SPEC_CTRL,
and set it in the default policy for LS_CFG || VIRT_SPEC_CTRL, which
will be far clearer to follow.

For `struct ssbd_core`, the name isn't great.  It's more
ssbd_ls_cfg/state.  Also, each array element wants 64-byte alignment,
because that's the only way to avoid atomic cacheline pingpong from the
spinlocks.  Also, the accessors need to be raw, because GIF=0 context is
weird and working around check_lock() with irqsave variants is not a clever
move.  It is not safe to printk()/bug/etc from GIF=0 context, so logic
needs to be kept to an absolute bare minimum.

~Andrew


* Re: [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests
  2022-04-22 18:49 ` [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Andrew Cooper
@ 2022-04-25 11:28   ` Roger Pau Monné
  0 siblings, 0 replies; 13+ messages in thread
From: Roger Pau Monné @ 2022-04-25 11:28 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Jan Beulich, Wei Liu

On Fri, Apr 22, 2022 at 06:49:57PM +0000, Andrew Cooper wrote:
> On 31/03/2022 10:27, Roger Pau Monne wrote:
> > Hello,
> >
> > The following series implements support for MSR_VIRT_SPEC_CTRL
> > (VIRT_SSBD) on different AMD CPU families.
> >
> > Note that the support is added backwards, starting with the newer CPUs
> > that support MSR_SPEC_CTRL and moving to the older ones either using
> > MSR_VIRT_SPEC_CTRL or the SSBD bit in LS_CFG.
> >
> > Xen is still free to use its own SSBD setting, as the selection is
> > context switched on vm{entry,exit}.
> >
> > On Zen2 and later, SPEC_CTRL.SSBD exists and should be used in
> > preference to VIRT_SPEC_CTRL.SSBD.  However, for migration
> > compatibility, Xen offers VIRT_SSBD to guests (in the max CPUID policy,
> > not default) implemented in terms of SPEC_CTRL.SSBD.
> >
> > On Fam15h thru Zen1, Xen exposes VIRT_SSBD to guests by default to
> > abstract away the model and/or hypervisor specific differences in
> > MSR_LS_CFG/MSR_VIRT_SPEC_CTRL.
> >
> > Note that if the hardware itself does offer VIRT_SSBD (ie: very likely
> > when running virtualized on < Zen2 hardware) and not AMD_SSBD Xen will
> > allow untrapped access to MSR_VIRT_SPEC_CTRL for HVM guests.
> >
> > So the implementation of VIRT_SSBD exposed to HVM guests will use one of
> > the following underlying mechanisms, in the preference order listed
> > below:
> >
> >  * SPEC_CTRL.SSBD. (patch 1)
> >  * VIRT_SPEC_CTRL.SSBD (untrapped). (patch 2).
> >  * Non-architectural way using LS_CFG. (patch 3)
> >
> > This has survived a XenRT basic set of tests on AMD machines.
> 
> Sorry for the mixed feedback, but some is applicable across multiple
> patches.
> 
> First, it is important to know why MSR_VIRT_SPEC_CTRL exists, because
> that informs what is, and is not, sensible to do with it.
> 
> It exists to be a FMS-invariant abstraction of the DE_CFG interface,
> emulated by the hypervisor.  At the time, we experimented with emulating
> MSR_SPEC_CTRL directly, but the results were unusable slow (legacy IBRS
> causing a vmexit on every syscall/interrupt entry&exit) so
> MSR_VIRT_SPEC_CTRL is also an explicit statement that it is an expensive
> operation and shouldn't be used frequently.
> 
> In practice, this means "only for very very important processes, and not
> to be used more frequently than process context switch".  Also, there is
> no hardware which implements MSR_VIRT_SPEC_CTRL, nor will there be.
> 
> Patch 2 has added an extra two vmexits around each vmexit, in an effort

No, it adds just one extra unconditional vmexit; the wrmsr only
happens when the guest and Xen values differ, so it's not executed
unconditionally.

> to let L2 vmexit to L0 rather than L1 for what is likely to be 0 times
> in an L1 timeslice.  It's not a credible optimisation, for something
> which isn't a production usecase.  Yes - nested virt does exist, and is
> useful for dev, but noone runs a fully fat server virt hypervisor at L1
> in production if they care in the slightest about performance.  Either
> way, patch 2 is premature optimisation with a massive complexity cost.

OK, so your recommendation is to trap writes to VIRT_SPEC_CTRL so the
unconditional rdmsr on vmexit is avoided, at the cost of taking a
vmexit on all writes to VIRT_SPEC_CTRL; that's indeed fine.

> 
> Furthermore, writes to LS_CFG are also incredibly expensive, even if
> you're not changing any bits.  The AMD recommended algorithm
> specifically avoids rewriting it with the same value as before.

Writes to LS_CFG will only happen if the guest and Xen selection of
SSBD differ, as the guest value is cached.

> Another thing is that Xen shouldn't touch LS_CFG like this if there is
> any hint of a hypervisor on the system.  If there is a hypervisor and it
> doesn't offer VIRT_SPEC_CTRL, trying to play with LS_CFG isn't going to
> make the situation any better.

I would assume that no hypervisor will offer LS_CFG support for
setting SSBD if the guest shouldn't use it, and hence set_legacy_ssbd
will already return false.

> As to the CPUID bit handling, on consideration of the whole series, it
> wants to be "!" only.  ! is there to indicate "something complicated is
> going on with this bit", and life is too short to try and get the
> derivation logic right with both implicit and explicit conditions. 
> Leave it without an s/S (so no auto propagation from the host policy),
> and set it in the max policy for LS_CFG || VIRT_SPEC_CTRL || SPEC_CTRL,
> and set it in the default policy for LS_CFG || VIRT_SPEC_CTRL, which
> will be far clearer to follow.

OK, as both Jan and you agree that using just '!' is fine I can work
with that.

> For `struct ssbd_core`, the name isn't great.  It's more
> ssbd_ls_cfg/state.  Also, each array element wants 64-byte alignment,
> because that's the only way to avoid atomic cacheline pingpong from the
> spinlocks.  Also, the accessors need to be raw, because GIF=0 context is
> weird and working around check_lock() with irqsave variants is not a clever
> move.  It is not safe to printk()/bug/etc from GIF=0 context, so logic
> needs to be kept to an absolute bare minimum.

Sorry I'm a bit dense today, but I don't seem to be able to find any
raw accessors for our spinlock implementation. _spin_lock_cb will
unconditionally call check_lock and there doesn't seem to be a way to
bypass the checks if lock debugging is enabled.  Am I missing
something?

Thanks, Roger.



Thread overview: 13+ messages
2022-03-31  9:27 [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
2022-03-31  9:27 ` [PATCH v3 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
2022-04-21  9:34   ` Jan Beulich
2022-03-31  9:27 ` [PATCH v3 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
2022-04-21  9:39   ` Jan Beulich
2022-03-31  9:27 ` [PATCH v3 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD Roger Pau Monne
2022-04-21  9:50   ` Jan Beulich
2022-04-21 15:21     ` Roger Pau Monné
2022-04-21 15:27       ` Jan Beulich
2022-04-22  9:55         ` Roger Pau Monné
2022-04-22 10:01           ` Jan Beulich
2022-04-22 18:49 ` [PATCH v3 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Andrew Cooper
2022-04-25 11:28   ` Roger Pau Monné
