* [PATCH v2 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests
@ 2022-03-15 14:18 Roger Pau Monne
From: Roger Pau Monne @ 2022-03-15 14:18 UTC
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Hello,

The following series implements support for MSR_VIRT_SPEC_CTRL
(VIRT_SSBD) on different AMD CPU families.

Note that the support is added backwards, starting with the newer CPUs
that support MSR_SPEC_CTRL and moving to the older ones, which use
either MSR_VIRT_SPEC_CTRL or the SSBD bit in LS_CFG.

Xen is still free to use its own SSBD setting, as the selection is
context-switched on vm{entry,exit}.

On Zen2 and later, SPEC_CTRL.SSBD exists and should be used in
preference to VIRT_SPEC_CTRL.SSBD.  However, for migration
compatibility, Xen offers VIRT_SSBD to guests (in the max CPUID policy,
not default) implemented in terms of SPEC_CTRL.SSBD.

On Fam15h through Zen1, Xen exposes VIRT_SSBD to guests by default to
abstract away the model- and/or hypervisor-specific differences in
MSR_LS_CFG/MSR_VIRT_SPEC_CTRL.

Note that if the hardware itself offers VIRT_SSBD (i.e. very likely
when running virtualized on pre-Zen2 hardware) but not AMD_SSBD, Xen
will allow untrapped access to MSR_VIRT_SPEC_CTRL for HVM guests.

So the implementation of VIRT_SSBD exposed to HVM guests will use one of
the following underlying mechanisms, in the preference order listed
below:

 * SPEC_CTRL.SSBD (patch 1).
 * VIRT_SPEC_CTRL.SSBD, untrapped (patch 2).
 * Non-architectural way using LS_CFG (patch 3).
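
As a rough illustration only (all helper names below are hypothetical,
not actual Xen code), the runtime selection boils down to:

    static void choose_virt_ssbd_backend(void)
    {
        if ( cpu_has_amd_ssbd )
            /* Zen2 and later: implement on top of SPEC_CTRL.SSBD (patch 1). */
            use_spec_ctrl_ssbd();
        else if ( cpu_has_virt_ssbd )
            /* MSR offered by hardware/hypervisor: pass through (patch 2). */
            passthrough_virt_spec_ctrl();
        else
            /* Fam15h-Zen1: non-architectural SSBD bit in LS_CFG (patch 3). */
            use_legacy_ls_cfg_ssbd();
    }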

Thanks, Roger.

Roger Pau Monne (3):
  amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD

 xen/arch/x86/cpu/amd.c                      | 116 +++++++++++++++++---
 xen/arch/x86/cpuid.c                        |  28 +++++
 xen/arch/x86/hvm/hvm.c                      |   1 +
 xen/arch/x86/hvm/svm/entry.S                |   6 +
 xen/arch/x86/hvm/svm/svm.c                  |  49 +++++++++
 xen/arch/x86/include/asm/amd.h              |   4 +
 xen/arch/x86/include/asm/cpufeatures.h      |   1 +
 xen/arch/x86/include/asm/msr.h              |  14 +++
 xen/arch/x86/msr.c                          |  27 +++++
 xen/arch/x86/spec_ctrl.c                    |  12 +-
 xen/include/public/arch-x86/cpufeatureset.h |   2 +-
 11 files changed, 241 insertions(+), 19 deletions(-)

-- 
2.34.1




* [PATCH v2 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
From: Roger Pau Monne @ 2022-03-15 14:18 UTC
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Use the logic to set shadow SPEC_CTRL values in order to implement
support for VIRT_SPEC_CTRL (signaled by the VIRT_SSBD CPUID flag) for
HVM guests. This includes using the spec_ctrl vCPU MSR variable to
store the guest-set value of VIRT_SPEC_CTRL.SSBD, which will be OR'ed
with any SPEC_CTRL values set by the guest.
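
As a purely illustrative guest-side view (Linux-style MSR accessor
names, not code from this patch):

    uint64_t val;

    wrmsrl(MSR_VIRT_SPEC_CTRL, SPEC_CTRL_SSBD); /* sets shadow SPEC_CTRL.SSBD */
    rdmsrl(MSR_VIRT_SPEC_CTRL, val);            /* reads back only the SSBD bit */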

On hardware having SPEC_CTRL, VIRT_SPEC_CTRL will not be offered to
guests by default. VIRT_SPEC_CTRL will only be part of the max CPUID
policy, so it can be enabled for compatibility purposes.

Note that '!' is used in order to tag the VIRT_SSBD feature as
specially handled. It's possible for the feature to be available to
guests on hardware that doesn't support it natively, for example when
implemented as done by this patch on top of SPEC_CTRL.SSBD (AMD_SSBD).

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Only expose VIRT_SSBD if AMD_SSBD is available on the host.
 - Revert change to msr-sc= command line option documentation.
 - Only set or clear the SSBD bit of spec_ctrl.
---
 xen/arch/x86/cpuid.c                        |  7 +++++++
 xen/arch/x86/hvm/hvm.c                      |  1 +
 xen/arch/x86/include/asm/msr.h              |  4 ++++
 xen/arch/x86/msr.c                          | 21 +++++++++++++++++++++
 xen/arch/x86/spec_ctrl.c                    |  3 ++-
 xen/include/public/arch-x86/cpufeatureset.h |  2 +-
 6 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index bb554b06a7..4ca77ea870 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -543,6 +543,13 @@ static void __init calculate_hvm_max_policy(void)
         __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
         __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
     }
+    else if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
+        /*
+         * If SPEC_CTRL.SSBD is available, VIRT_SPEC_CTRL.SSBD can be exposed
+         * and implemented using the former. Expose it in the max policy only,
+         * as the preference is for guests to use SPEC_CTRL.SSBD if available.
+         */
+        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     /*
      * With VT-x, some features are only supported by Xen if dedicated
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 709a4191ef..595858f2a7 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1334,6 +1334,7 @@ static const uint32_t msrs_to_send[] = {
     MSR_INTEL_MISC_FEATURES_ENABLES,
     MSR_IA32_BNDCFGS,
     MSR_IA32_XSS,
+    MSR_VIRT_SPEC_CTRL,
     MSR_AMD64_DR0_ADDRESS_MASK,
     MSR_AMD64_DR1_ADDRESS_MASK,
     MSR_AMD64_DR2_ADDRESS_MASK,
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index ce4fe51afe..ab6fbb5051 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -291,6 +291,7 @@ struct vcpu_msrs
 {
     /*
      * 0x00000048 - MSR_SPEC_CTRL
+     * 0xc001011f - MSR_VIRT_SPEC_CTRL (if X86_FEATURE_AMD_SSBD)
      *
      * For PV guests, this holds the guest kernel value.  It is accessed on
      * every entry/exit path.
@@ -306,6 +307,9 @@ struct vcpu_msrs
      * We must clear/restore Xen's value before/after VMRUN to avoid unduly
      * influencing the guest.  In order to support "behind the guest's back"
      * protections, we load this value (commonly 0) before VMRUN.
+     *
+     * One such "behind the guest's back" usage is setting SPEC_CTRL.SSBD
+     * if the guest sets VIRT_SPEC_CTRL.SSBD.
      */
     struct {
         uint32_t raw;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 01a15857b7..b212acf29d 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -381,6 +381,13 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
                ? K8_HWCR_TSC_FREQ_SEL : 0;
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        if ( !cp->extd.virt_ssbd )
+            goto gp_fault;
+
+        *val = msrs->spec_ctrl.raw & SPEC_CTRL_SSBD;
+        break;
+
     case MSR_AMD64_DE_CFG:
         if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
             goto gp_fault;
@@ -666,6 +673,20 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             wrmsr_tsc_aux(val);
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        if ( !cp->extd.virt_ssbd )
+            goto gp_fault;
+
+        /*
+         * Only supports SSBD bit, the rest are ignored. Only modify the SSBD
+         * bit in case other bits are set.
+         */
+        if ( val & SPEC_CTRL_SSBD )
+            msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
+        else
+            msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
+        break;
+
     case MSR_AMD64_DE_CFG:
         /*
          * OpenBSD 6.7 will panic if writing to DE_CFG triggers a #GP:
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 1408e4c7ab..f338bfe292 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -402,12 +402,13 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
      * mitigation support for guests.
      */
 #ifdef CONFIG_HVM
-    printk("  Support for HVM VMs:%s%s%s%s%s\n",
+    printk("  Support for HVM VMs:%s%s%s%s%s%s\n",
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
             opt_eager_fpu)                           ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
+           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_VIRT_SPEC_CTRL" : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            boot_cpu_has(X86_FEATURE_MD_CLEAR)        ? " MD_CLEAR"      : "");
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 9cee4b439e..b797c6bea1 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -265,7 +265,7 @@ XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S  IBRS provides same-mode protection
 XEN_CPUFEATURE(NO_LMSL,       8*32+20) /*S  EFER.LMSLE no longer supported. */
 XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
 XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*S  MSR_SPEC_CTRL.SSBD available */
-XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*   MSR_VIRT_SPEC_CTRL.SSBD */
+XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!s MSR_VIRT_SPEC_CTRL.SSBD */
 XEN_CPUFEATURE(SSB_NO,        8*32+26) /*A  Hardware not vulnerable to SSB */
 XEN_CPUFEATURE(PSFD,          8*32+28) /*S  MSR_SPEC_CTRL.PSFD */
 
-- 
2.34.1




* [PATCH v2 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
From: Roger Pau Monne @ 2022-03-15 14:18 UTC
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
hardware has support for it. This requires adding logic in the
vm{entry,exit} paths for SVM in order to context switch between the
hypervisor value and the guest one. The added handlers for context
switch will also be used for the legacy SSBD support.

Introduce a new synthetic feature leaf (X86_FEATURE_VIRT_SC_MSR_HVM)
to signal whether VIRT_SPEC_CTRL needs to be handled on guest
vm{entry,exit}.
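
In rough pseudo-code (illustrative only; guest_virt_spec_ctrl and
xen_val are made-up names, the real hunks are in entry.S and svm.c
below, and xen_val stands for opt_ssbd ? SPEC_CTRL_SSBD : 0):

    vmentry (GIF=0):
        if ( guest_virt_spec_ctrl != xen_val )
            wrmsr(MSR_VIRT_SPEC_CTRL, guest_virt_spec_ctrl, 0);

    vmexit (GIF=0):
        rdmsr(MSR_VIRT_SPEC_CTRL, guest_virt_spec_ctrl, hi);
        if ( guest_virt_spec_ctrl != xen_val )
            wrmsr(MSR_VIRT_SPEC_CTRL, xen_val, 0);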

Note the change in the handling of VIRT_SSBD in the featureset
description. The change from 's' to 'S' is due to the fact that now if
VIRT_SSBD is exposed by the hardware it can be passed through to HVM
guests.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Introduce virt_spec_ctrl vCPU field.
 - Context switch VIRT_SPEC_CTRL on vmentry/vmexit separately from
   SPEC_CTRL.
---
 xen/arch/x86/cpuid.c                        | 11 ++++++
 xen/arch/x86/hvm/svm/entry.S                |  6 ++++
 xen/arch/x86/hvm/svm/svm.c                  | 39 +++++++++++++++++++++
 xen/arch/x86/include/asm/cpufeatures.h      |  1 +
 xen/arch/x86/include/asm/msr.h              | 10 ++++++
 xen/arch/x86/spec_ctrl.c                    |  9 ++++-
 xen/include/public/arch-x86/cpufeatureset.h |  2 +-
 7 files changed, 76 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 4ca77ea870..91e53284fc 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -534,6 +534,10 @@ static void __init calculate_hvm_max_policy(void)
          raw_cpuid_policy.basic.sep )
         __set_bit(X86_FEATURE_SEP, hvm_featureset);
 
+    if ( !boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+        /* Clear VIRT_SSBD if VIRT_SPEC_CTRL is not exposed to guests. */
+        __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+
     /*
      * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
      * availability, or admin choice), hide the feature.
@@ -590,6 +594,13 @@ static void __init calculate_hvm_def_policy(void)
     guest_common_feature_adjustments(hvm_featureset);
     guest_common_default_feature_adjustments(hvm_featureset);
 
+    /*
+     * AMD_SSBD is preferred over VIRT_SSBD, so don't expose the latter by
+     * default if the former is available.
+     */
+    if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
+        __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+
     sanitise_featureset(hvm_featureset);
     cpuid_featureset_to_policy(hvm_featureset, p);
     recalculate_xstate(p);
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index 4ae55a2ef6..e2c104bb1e 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -57,6 +57,9 @@ __UNLIKELY_END(nsvm_hap)
 
         clgi
 
+        ALTERNATIVE "", STR(call vmentry_virt_spec_ctrl), \
+                        X86_FEATURE_VIRT_SC_MSR_HVM
+
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
         /* SPEC_CTRL_EXIT_TO_SVM       Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         .macro svm_vmentry_spec_ctrl
@@ -114,6 +117,9 @@ __UNLIKELY_END(nsvm_hap)
         ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
+        ALTERNATIVE "", STR(call vmexit_virt_spec_ctrl), \
+                        X86_FEATURE_VIRT_SC_MSR_HVM
+
         stgi
 GLOBAL(svm_stgi_label)
         mov  %rsp,%rdi
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 64a45045da..73a0af599b 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -52,6 +52,7 @@
 #include <asm/hvm/svm/svmdebug.h>
 #include <asm/hvm/svm/nestedsvm.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/spec_ctrl.h>
 #include <asm/x86_emulate.h>
 #include <public/sched.h>
 #include <asm/hvm/vpt.h>
@@ -610,6 +611,14 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     svm_intercept_msr(v, MSR_SPEC_CTRL,
                       cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
 
+    /*
+     * Give access to MSR_VIRT_SPEC_CTRL if the guest has been told about it
+     * and the hardware implements it.
+     */
+    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
+                      cp->extd.virt_ssbd && cpu_has_virt_ssbd ?
+                      MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
     svm_intercept_msr(v, MSR_PRED_CMD,
                       cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
@@ -3105,6 +3114,36 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
     vmcb_set_vintr(vmcb, intr);
 }
 
+/* Called with GIF=0. */
+void vmexit_virt_spec_ctrl(void)
+{
+    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
+
+    if ( cpu_has_virt_ssbd )
+    {
+        unsigned int lo, hi;
+
+        /*
+         * Need to read from the hardware because VIRT_SPEC_CTRL is not context
+         * switched by the hardware, and we allow the guest untrapped access to
+         * the register.
+         */
+        rdmsr(MSR_VIRT_SPEC_CTRL, lo, hi);
+        if ( val != lo )
+            wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+        current->arch.msrs->virt_spec_ctrl.raw = lo;
+    }
+}
+
+/* Called with GIF=0. */
+void vmentry_virt_spec_ctrl(void)
+{
+    unsigned int val = current->arch.msrs->virt_spec_ctrl.raw;
+
+    if ( val != (opt_ssbd ? SPEC_CTRL_SSBD : 0) )
+        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index 7413febd7a..2240547b64 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -40,6 +40,7 @@ XEN_CPUFEATURE(SC_VERW_HVM,       X86_SYNTH(24)) /* VERW used by Xen for HVM */
 XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
 XEN_CPUFEATURE(XEN_SHSTK,         X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
 XEN_CPUFEATURE(XEN_IBT,           X86_SYNTH(27)) /* Xen uses CET Indirect Branch Tracking */
+XEN_CPUFEATURE(VIRT_SC_MSR_HVM,   X86_SYNTH(28)) /* MSR_VIRT_SPEC_CTRL exposed to HVM */
 
 /* Bug words follow the synthetic words. */
 #define X86_NR_BUG 1
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index ab6fbb5051..460aabe84f 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -375,6 +375,16 @@ struct vcpu_msrs
      */
     uint32_t tsc_aux;
 
+    /*
+     * 0xc001011f - MSR_VIRT_SPEC_CTRL (if !X86_FEATURE_AMD_SSBD)
+     *
+     * AMD only. Guest selected value, saved and restored on guest VM
+     * entry/exit.
+     */
+    struct {
+        uint32_t raw;
+    } virt_spec_ctrl;
+
     /*
      * 0xc00110{27,19-1b} MSR_AMD64_DR{0-3}_ADDRESS_MASK
      *
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index f338bfe292..0d5ec877d1 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -406,9 +406,12 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
+            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) ||
             opt_eager_fpu)                           ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
-           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_VIRT_SPEC_CTRL" : "",
+           (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
+            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM)) ? " MSR_VIRT_SPEC_CTRL"
+                                                       : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            boot_cpu_has(X86_FEATURE_MD_CLEAR)        ? " MD_CLEAR"      : "");
@@ -1069,6 +1072,10 @@ void __init init_speculation_mitigations(void)
             setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
     }
 
+    /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
+    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd && cpu_has_virt_ssbd )
+        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
+
     /* If we have IBRS available, see whether we should use it. */
     if ( has_spec_ctrl && ibrs )
         default_xen_spec_ctrl |= SPEC_CTRL_IBRS;
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index b797c6bea1..0639b9faf2 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -265,7 +265,7 @@ XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S  IBRS provides same-mode protection
 XEN_CPUFEATURE(NO_LMSL,       8*32+20) /*S  EFER.LMSLE no longer supported. */
 XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
 XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*S  MSR_SPEC_CTRL.SSBD available */
-XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!s MSR_VIRT_SPEC_CTRL.SSBD */
+XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!S MSR_VIRT_SPEC_CTRL.SSBD */
 XEN_CPUFEATURE(SSB_NO,        8*32+26) /*A  Hardware not vulnerable to SSB */
 XEN_CPUFEATURE(PSFD,          8*32+28) /*S  MSR_SPEC_CTRL.PSFD */
 
-- 
2.34.1




* [PATCH v2 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
From: Roger Pau Monne @ 2022-03-15 14:18 UTC
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Expose VIRT_SSBD to guests if the hardware supports setting SSBD in
the LS_CFG MSR (a.k.a. the non-architectural way). Different AMD CPU
families use different bits in LS_CFG, so exposing VIRT_SPEC_CTRL.SSBD
allows for a unified, migration-compatible way of exposing SSBD
support to guests on AMD hardware, regardless of which underlying
mechanism is used to set SSBD.

Note that on AMD Family 17h (Zen 1) the value of SSBD in LS_CFG is
shared between threads on the same core, so there's extra logic in
order to synchronize the value and have SSBD set as long as one of the
threads in the core requires it to be set. Such logic also requires
extra storage for each thread state, which is allocated at
initialization time.
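
Conceptually the synchronization is a per-core reference count (a
simplified sketch of amd_set_legacy_ssbd() below; the real code uses
the _irqsave lock variants):

    spin_lock(&core->lock);
    core->count += enable ? 1 : -1;
    if ( enable ? core->count == 1 : !core->count )
        /* First thread wanting SSBD sets it, last one dropping it clears it. */
        set_legacy_ssbd(c, enable);
    spin_unlock(&core->lock);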

Do the context switching of the SSBD selection in LS_CFG between
hypervisor and guest in the same handler that's already used to switch
the value of VIRT_SPEC_CTRL in the hardware when supported.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Report legacy SSBD support using a global variable.
 - Use ro_after_init for ssbd_max_cores.
 - Handle boot_cpu_data.x86_num_siblings < 1.
 - Add comment regarding _irqsave usage in amd_set_legacy_ssbd.
---
 xen/arch/x86/cpu/amd.c         | 116 ++++++++++++++++++++++++++++-----
 xen/arch/x86/cpuid.c           |  10 +++
 xen/arch/x86/hvm/svm/svm.c     |  12 +++-
 xen/arch/x86/include/asm/amd.h |   4 ++
 xen/arch/x86/msr.c             |  22 ++++---
 xen/arch/x86/spec_ctrl.c       |   4 +-
 6 files changed, 141 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 4999f8be2b..63d466b1df 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -48,6 +48,7 @@ boolean_param("allow_unsafe", opt_allow_unsafe);
 
 /* Signal whether the ACPI C1E quirk is required. */
 bool __read_mostly amd_acpi_c1e_quirk;
+bool __ro_after_init amd_legacy_ssbd;
 
 static inline int rdmsr_amd_safe(unsigned int msr, unsigned int *lo,
 				 unsigned int *hi)
@@ -685,23 +686,10 @@ void amd_init_lfence(struct cpuinfo_x86 *c)
  * Refer to the AMD Speculative Store Bypass whitepaper:
  * https://developer.amd.com/wp-content/resources/124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
  */
-void amd_init_ssbd(const struct cpuinfo_x86 *c)
+static bool set_legacy_ssbd(const struct cpuinfo_x86 *c, bool enable)
 {
 	int bit = -1;
 
-	if (cpu_has_ssb_no)
-		return;
-
-	if (cpu_has_amd_ssbd) {
-		/* Handled by common MSR_SPEC_CTRL logic */
-		return;
-	}
-
-	if (cpu_has_virt_ssbd) {
-		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
-		return;
-	}
-
 	switch (c->x86) {
 	case 0x15: bit = 54; break;
 	case 0x16: bit = 33; break;
@@ -715,20 +703,114 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
 		if (rdmsr_safe(MSR_AMD64_LS_CFG, val) ||
 		    ({
 			    val &= ~mask;
-			    if (opt_ssbd)
+			    if (enable)
 				    val |= mask;
 			    false;
 		    }) ||
 		    wrmsr_safe(MSR_AMD64_LS_CFG, val) ||
 		    ({
 			    rdmsrl(MSR_AMD64_LS_CFG, val);
-			    (val & mask) != (opt_ssbd * mask);
+			    (val & mask) != (enable * mask);
 		    }))
 			bit = -1;
 	}
 
-	if (bit < 0)
+	return bit >= 0;
+}
+
+void amd_init_ssbd(const struct cpuinfo_x86 *c)
+{
+	if (cpu_has_ssb_no)
+		return;
+
+	if (cpu_has_amd_ssbd) {
+		/* Handled by common MSR_SPEC_CTRL logic */
+		return;
+	}
+
+	if (cpu_has_virt_ssbd) {
+		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
+		return;
+	}
+
+	if (!set_legacy_ssbd(c, opt_ssbd))
+	{
 		printk_once(XENLOG_ERR "No SSBD controls available\n");
+		if (amd_legacy_ssbd)
+			panic("CPU feature mismatch: no legacy SSBD\n");
+	}
+	else if ( c == &boot_cpu_data )
+		amd_legacy_ssbd = true;
+}
+
+static struct ssbd_core {
+    spinlock_t lock;
+    unsigned int count;
+} *ssbd_core;
+static unsigned int __ro_after_init ssbd_max_cores;
+
+bool __init amd_setup_legacy_ssbd(void)
+{
+	unsigned int i;
+
+	if (boot_cpu_data.x86 != 0x17 || boot_cpu_data.x86_num_siblings <= 1)
+		return true;
+
+	/*
+	 * One could be forgiven for thinking that c->x86_max_cores is the
+	 * correct value to use here.
+	 *
+	 * However, that value is derived from the current configuration, and
+	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
+	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
+	 */
+	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
+		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
+		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
+	}
+	if (!ssbd_max_cores)
+		return false;
+
+	/* Max is two sockets for Fam17h hardware. */
+	ssbd_core = xzalloc_array(struct ssbd_core, ssbd_max_cores * 2);
+	if (!ssbd_core)
+		return false;
+
+	for (i = 0; i < ssbd_max_cores * 2; i++) {
+		spin_lock_init(&ssbd_core[i].lock);
+		/* Record initial state, also applies to any hotplug CPU. */
+		if (opt_ssbd)
+			ssbd_core[i].count = boot_cpu_data.x86_num_siblings;
+	}
+
+	return true;
+}
+
+void amd_set_legacy_ssbd(bool enable)
+{
+	const struct cpuinfo_x86 *c = &current_cpu_data;
+	struct ssbd_core *core;
+	unsigned long flags;
+
+	if (c->x86 != 0x17 || c->x86_num_siblings <= 1) {
+		BUG_ON(!set_legacy_ssbd(c, enable));
+		return;
+	}
+
+	BUG_ON(c->phys_proc_id >= 2);
+	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
+	core = &ssbd_core[c->phys_proc_id * ssbd_max_cores + c->cpu_core_id];
+	/*
+	 * Use irqsave variant to make check_lock() happy. When called from
+	 * vm{exit,entry}_virt_spec_ctrl GIF=0, but the state of IF could be 1,
+	 * thus fooling the spinlock check.
+	 */
+	spin_lock_irqsave(&core->lock, flags);
+	core->count += enable ? 1 : -1;
+	ASSERT(core->count <= c->x86_num_siblings);
+	if (enable ? core->count == 1 : !core->count)
+		BUG_ON(!set_legacy_ssbd(c, enable));
+	spin_unlock_irqrestore(&core->lock, flags);
 }
 
 void __init detect_zen2_null_seg_behaviour(void)
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 91e53284fc..e278fee689 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -537,6 +537,16 @@ static void __init calculate_hvm_max_policy(void)
     if ( !boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
         /* Clear VIRT_SSBD if VIRT_SPEC_CTRL is not exposed to guests. */
         __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+    else
+        /*
+         * Expose VIRT_SSBD if VIRT_SPEC_CTRL is supported, as that implies the
+         * underlying hardware is capable of setting SSBD using a
+         * non-architectural way, or VIRT_SSBD is available.
+         *
+         * Note that if the hardware supports VIRT_SSBD natively this setting
+         * will just override an already set bit.
+         */
+        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     /*
      * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 73a0af599b..43fc9a3f8e 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -3132,7 +3132,12 @@ void vmexit_virt_spec_ctrl(void)
         if ( val != lo )
             wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
         current->arch.msrs->virt_spec_ctrl.raw = lo;
+
+        return;
     }
+
+    if ( val != current->arch.msrs->virt_spec_ctrl.raw )
+        amd_set_legacy_ssbd(val & SPEC_CTRL_SSBD);
 }
 
 /* Called with GIF=0. */
@@ -3141,7 +3146,12 @@ void vmentry_virt_spec_ctrl(void)
     unsigned int val = current->arch.msrs->virt_spec_ctrl.raw;
 
     if ( val != (opt_ssbd ? SPEC_CTRL_SSBD : 0) )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+    {
+        if ( cpu_has_virt_ssbd )
+            wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+        else
+            amd_set_legacy_ssbd(val & SPEC_CTRL_SSBD);
+    }
 }
 
 /*
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index a82382e6bf..6a42f68542 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -151,4 +151,8 @@ void check_enable_amd_mmconf_dmi(void);
 extern bool amd_acpi_c1e_quirk;
 void amd_check_disable_c1e(unsigned int port, u8 value);
 
+extern bool amd_legacy_ssbd;
+bool amd_setup_legacy_ssbd(void);
+void amd_set_legacy_ssbd(bool enable);
+
 #endif /* __AMD_H__ */
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index b212acf29d..1f4b63a497 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -385,7 +385,10 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         if ( !cp->extd.virt_ssbd )
             goto gp_fault;
 
-        *val = msrs->spec_ctrl.raw & SPEC_CTRL_SSBD;
+        if ( cpu_has_amd_ssbd )
+            *val = msrs->spec_ctrl.raw & SPEC_CTRL_SSBD;
+        else
+            *val = msrs->virt_spec_ctrl.raw;
         break;
 
     case MSR_AMD64_DE_CFG:
@@ -677,14 +680,17 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         if ( !cp->extd.virt_ssbd )
             goto gp_fault;
 
-        /*
-         * Only supports SSBD bit, the rest are ignored. Only modify the SSBD
-         * bit in case other bits are set.
-         */
-        if ( val & SPEC_CTRL_SSBD )
-            msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
+        /* Only supports SSBD bit, the rest are ignored. */
+        if ( cpu_has_amd_ssbd )
+        {
+            /* Only modify the SSBD bit in case other bits are set. */
+            if ( val & SPEC_CTRL_SSBD )
+                msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
+            else
+                msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
+        }
         else
-            msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
+            msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
         break;
 
     case MSR_AMD64_DE_CFG:
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 0d5ec877d1..495e6f9405 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -22,6 +22,7 @@
 #include <xen/param.h>
 #include <xen/warning.h>
 
+#include <asm/amd.h>
 #include <asm/hvm/svm/svm.h>
 #include <asm/microcode.h>
 #include <asm/msr.h>
@@ -1073,7 +1074,8 @@ void __init init_speculation_mitigations(void)
     }
 
     /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
-    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd && cpu_has_virt_ssbd )
+    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd &&
+         (cpu_has_virt_ssbd || (amd_legacy_ssbd && amd_setup_legacy_ssbd())) )
         setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
 
     /* If we have IBRS available, see whether we should use it. */
-- 
2.34.1




* Re: [PATCH v2 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
From: Jan Beulich @ 2022-03-28 13:40 UTC
  To: Roger Pau Monne, Andrew Cooper; +Cc: Wei Liu, xen-devel

On 15.03.2022 15:18, Roger Pau Monne wrote:
> Use the logic to set shadow SPEC_CTRL values in order to implement
> support for VIRT_SPEC_CTRL (signaled by the VIRT_SSBD CPUID flag) for
> HVM guests. This includes using the spec_ctrl vCPU MSR variable to
> store the guest-set value of VIRT_SPEC_CTRL.SSBD, which will be OR'ed
> with any SPEC_CTRL values set by the guest.
> 
> On hardware having SPEC_CTRL, VIRT_SPEC_CTRL will not be offered to
> guests by default. VIRT_SPEC_CTRL will only be part of the max CPUID
> policy, so it can be enabled for compatibility purposes.
> 
> Note that '!' is used in order to tag the VIRT_SSBD feature as
> specially handled. It's possible for the feature to be available to
> guests on hardware that doesn't support it natively, for example when
> implemented as done by this patch on top of SPEC_CTRL.SSBD (AMD_SSBD).

Except for this ! aspect the change looks good to me, but in order to
give my R-b this aspect needs sorting. Andrew - what are your thoughts
here? The reason cited by Roger doesn't look to be one that I so far
understood would require use of !, but your intentions may well have
been different from what my current understanding is.

Jan




* Re: [PATCH v2 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
From: Jan Beulich @ 2022-03-28 14:02 UTC
  To: Roger Pau Monne; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 15.03.2022 15:18, Roger Pau Monne wrote:
> Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
> hardware has support for it. This requires adding logic in the
> vm{entry,exit} paths for SVM in order to context switch between the
> hypervisor value and the guest one. The added handlers for context
> switch will also be used for the legacy SSBD support.
> 
> Introduce a new synthetic feature leaf (X86_FEATURE_VIRT_SC_MSR_HVM)
> to signal whether VIRT_SPEC_CTRL needs to be handled on guest
> vm{entry,exit}.
> 
> Note the change in the handling of VIRT_SSBD in the featureset
> description. The change from 's' to 'S' is due to the fact that now if
> VIRT_SSBD is exposed by the hardware it can be passed through to HVM
> guests.

But lower vs upper case mean "(do not) expose by default", not whether
underlying hardware exposes the feature. In patch 1 you actually used
absence in underlying hardware to justify !, not s.

> @@ -610,6 +611,14 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
>      svm_intercept_msr(v, MSR_SPEC_CTRL,
>                        cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
>  
> +    /*
> +     * Give access to MSR_VIRT_SPEC_CTRL if the guest has been told about it
> +     * and the hardware implements it.
> +     */
> +    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
> +                      cp->extd.virt_ssbd && cpu_has_virt_ssbd ?

Despite giving the guest direct access guest_{rd,wr}msr() can be hit
for such guests. Don't you need to update what patch 1 added there?

Also, is there a reason the qualifier here is not in sync with ...

> @@ -3105,6 +3114,36 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>      vmcb_set_vintr(vmcb, intr);
>  }
>  
> +/* Called with GIF=0. */
> +void vmexit_virt_spec_ctrl(void)
> +{
> +    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
> +
> +    if ( cpu_has_virt_ssbd )

... this one? Since the patching is keyed to VIRT_SC_MSR_HVM, which in
turn is enabled only when cpu_has_virt_ssbd, it would seem to me that
if any asymmetry was okay here, then using cp->extd.virt_ssbd without
cpu_has_virt_ssbd.

> @@ -1069,6 +1072,10 @@ void __init init_speculation_mitigations(void)
>              setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
>      }
>  
> +    /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
> +    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd && cpu_has_virt_ssbd )
> +        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);

In cpuid.c the comment (matching the code there) talks about exposing
by default. I can't bring this in line with the use of !cpu_has_amd_ssbd
here.

Jan




* Re: [PATCH v2 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
From: Jan Beulich @ 2022-03-28 14:21 UTC
  To: Roger Pau Monne; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 15.03.2022 15:18, Roger Pau Monne wrote:
> +void amd_init_ssbd(const struct cpuinfo_x86 *c)
> +{
> +	if (cpu_has_ssb_no)
> +		return;
> +
> +	if (cpu_has_amd_ssbd) {
> +		/* Handled by common MSR_SPEC_CTRL logic */
> +		return;
> +	}
> +
> +	if (cpu_has_virt_ssbd) {
> +		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
> +		return;
> +	}
> +
> +	if (!set_legacy_ssbd(c, opt_ssbd))
> +	{

Nit: In this file the brace belongs on the earlier line and ...

>  		printk_once(XENLOG_ERR "No SSBD controls available\n");
> +		if (amd_legacy_ssbd)
> +			panic("CPU feature mismatch: no legacy SSBD\n");
> +	}
> +	else if ( c == &boot_cpu_data )

... you want to omit the blanks immediately inside the parentheses here.

> +		amd_legacy_ssbd = true;
> +}
> +
> +static struct ssbd_core {
> +    spinlock_t lock;
> +    unsigned int count;
> +} *ssbd_core;
> +static unsigned int __ro_after_init ssbd_max_cores;
> +
> +bool __init amd_setup_legacy_ssbd(void)
> +{
> +	unsigned int i;
> +
> +	if (boot_cpu_data.x86 != 0x17 || boot_cpu_data.x86_num_siblings <= 1)
> +		return true;
> +
> +	/*
> +	 * One could be forgiven for thinking that c->x86_max_cores is the
> +	 * correct value to use here.
> +	 *
> +	 * However, that value is derived from the current configuration, and
> +	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
> +	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
> +	 */
> +	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
> +		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
> +		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
> +	}
> +	if (!ssbd_max_cores)
> +		return false;
> +
> +	/* Max is two sockets for Fam17h hardware. */
> +	ssbd_core = xzalloc_array(struct ssbd_core, ssbd_max_cores * 2);

If I'm not mistaken this literal 2, ...

> +	if (!ssbd_core)
> +		return false;
> +
> +	for (i = 0; i < ssbd_max_cores * 2; i++) {

... this one, and ...

> +		spin_lock_init(&ssbd_core[i].lock);
> +		/* Record initial state, also applies to any hotplug CPU. */
> +		if (opt_ssbd)
> +			ssbd_core[i].count = boot_cpu_data.x86_num_siblings;
> +	}
> +
> +	return true;
> +}
> +
> +void amd_set_legacy_ssbd(bool enable)
> +{
> +	const struct cpuinfo_x86 *c = &current_cpu_data;
> +	struct ssbd_core *core;
> +	unsigned long flags;
> +
> +	if (c->x86 != 0x17 || c->x86_num_siblings <= 1) {
> +		BUG_ON(!set_legacy_ssbd(c, enable));
> +		return;
> +	}
> +
> +	BUG_ON(c->phys_proc_id >= 2);

... this one are all referring to the same thing. Please use a #define to
make the connection obvious.

> @@ -677,14 +680,17 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>          if ( !cp->extd.virt_ssbd )
>              goto gp_fault;
>  
> -        /*
> -         * Only supports SSBD bit, the rest are ignored. Only modify the SSBD
> -         * bit in case other bits are set.
> -         */
> -        if ( val & SPEC_CTRL_SSBD )
> -            msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
> +        /* Only supports SSBD bit, the rest are ignored. */
> +        if ( cpu_has_amd_ssbd )
> +        {
> +            /* Only modify the SSBD bit in case other bits are set. */

While more a comment on the earlier patch introducing this wording, it
occurred to me only here that this is ambiguous: It can also be read as
"Only modify the SSBD bit as long as other bits are set."

> +            if ( val & SPEC_CTRL_SSBD )
> +                msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
> +            else
> +                msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
> +        }
>          else
> -            msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
> +            msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;

I also think the comment applies equally to the "else" logic, so perhaps
the comment would best remain as is (and merely be re-worded in the
earlier patch)?

Jan




* Re: [PATCH v2 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
From: Roger Pau Monné @ 2022-03-28 15:19 UTC
  To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, xen-devel

On Mon, Mar 28, 2022 at 04:02:40PM +0200, Jan Beulich wrote:
> On 15.03.2022 15:18, Roger Pau Monne wrote:
> > Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
> > hardware has support for it. This requires adding logic in the
> > vm{entry,exit} paths for SVM in order to context switch between the
> > hypervisor value and the guest one. The added handlers for context
> > switch will also be used for the legacy SSBD support.
> > 
> > Introduce a new synthetic feature leaf (X86_FEATURE_VIRT_SC_MSR_HVM)
> > to signal whether VIRT_SPEC_CTRL needs to be handled on guest
> > vm{entry,exit}.
> > 
> > Note the change in the handling of VIRT_SSBD in the featureset
> > description. The change from 's' to 'S' is due to the fact that now if
> > VIRT_SSBD is exposed by the hardware it can be passed through to HVM
> > guests.
> 
> But lower vs upper case mean "(do not) expose by default", not whether
> underlying hardware exposes the feature. In patch 1 you actually used
> absence in underlying hardware to justify !, not s.

Maybe I'm getting lost with all this !, lower case and upper case
stuff.

Patch 1 uses '!s' to account for:
 * '!': the feature might be exposed to guests even when not present
   on the host hardware.
 * 's': the feature won't be exposed by default.

Which I think matches what is implemented in patch 1 where VIRT_SSBD
is possibly exposed to guest when running on hardware that don't
necessarily have VIRT_SSBD (ie: because we use AMD_SSBD in order to
implement VIRT_SSBD).

Patch 2 changes the 's' to 'S' because this patch introduces support
to expose VIRT_SSBD to guests by default when the host (virtual)
hardware also supports it.

Maybe my understanding of the annotations is incorrect.
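
For reference, my reading of the cpufeatureset.h annotations used in
this series:

 '!' - the feature may be exposed to guests even when not present on
       the host hardware;
 's' - supported, but not exposed to guests by default (max policy only);
 'S' - supported and exposed to guests by default.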

> > @@ -610,6 +611,14 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
> >      svm_intercept_msr(v, MSR_SPEC_CTRL,
> >                        cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
> >  
> > +    /*
> > +     * Give access to MSR_VIRT_SPEC_CTRL if the guest has been told about it
> > +     * and the hardware implements it.
> > +     */
> > +    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
> > +                      cp->extd.virt_ssbd && cpu_has_virt_ssbd ?
> 
> Despite giving the guest direct access guest_{rd,wr}msr() can be hit
> for such guests. Don't you need to update what patch 1 added there?

Indeed, I should add the chunk that's added in the next patch.

> Also, is there a reason the qualifier here is not in sync with ...
> 
> > @@ -3105,6 +3114,36 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
> >      vmcb_set_vintr(vmcb, intr);
> >  }
> >  
> > +/* Called with GIF=0. */
> > +void vmexit_virt_spec_ctrl(void)
> > +{
> > +    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
> > +
> > +    if ( cpu_has_virt_ssbd )
> 
> ... this one? Since the patching is keyed to VIRT_SC_MSR_HVM, which in
> turn is enabled only when cpu_has_virt_ssbd, it would seem to me that
> if any asymmetry was okay here, then using cp->extd.virt_ssbd without
> cpu_has_virt_ssbd.

Using just cp->extd.virt_ssbd will be wrong when the next patch also
introduces support for exposing VIRT_SSBD by setting SSBD using the
non-architectural method.

We need to context switch just based on cpu_has_virt_ssbd because the
running guest might not get VIRT_SSBD offered (cp->extd.virt_ssbd ==
false) but Xen might be using SSBD itself so it needs to context
switch in order to activate it. Ie: if !cp->extd.virt_ssbd then the
guest will always run with SSBD disabled, but Xen might not.
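
Putting the two cases side by side (illustrative summary only):

 cp->extd.virt_ssbd == false: guest always runs with SSBD clear, but
                              Xen may want SSBD set for itself (opt_ssbd);
 cp->extd.virt_ssbd == true:  guest runs with its chosen SSBD value,
                              which may differ from Xen's.

In both cases the save/restore has to happen, hence keying on
cpu_has_virt_ssbd rather than on what the guest was offered.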

> > @@ -1069,6 +1072,10 @@ void __init init_speculation_mitigations(void)
> >              setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
> >      }
> >  
> > +    /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
> > +    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd && cpu_has_virt_ssbd )
> > +        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
> 
> In cpuid.c the comment (matching the code there) talks about exposing
> by default. I can't bring this in line with the use of !cpu_has_amd_ssbd
> here.

Exposing by default if !AMD_SSBD. Otherwise VIRT_SSBD is only in the
max policy, and the default policy will instead contain AMD_SSBD.

If AMD_SSBD is available it implies that X86_FEATURE_SC_MSR_HVM is
already set (or otherwise opt_msr_sc_hvm is disabled), and hence the
way to implement VIRT_SSBD is by using SPEC_CTRL.

I think I need to fix the intercept in that case, so it's:

    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
                      cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
                      !cpu_has_amd_ssbd ?
                      MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);

Because if AMD_SSBD is available, VIRT_SSBD will be implemented using
SPEC_CTRL, regardless of whether VIRT_SSBD is also available natively.
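
To summarize the intended end state of the series (an informal recap,
not code from the patches):

 - AMD_SSBD available:            VIRT_SSBD in the max policy only,
                                  implemented via SPEC_CTRL.SSBD, trapped;
 - VIRT_SSBD without AMD_SSBD:    VIRT_SSBD in the default policy, MSR
                                  passed through;
 - only the LS_CFG bit:           VIRT_SSBD in the default policy,
                                  trapped and implemented via LS_CFG.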

Hope all this makes sense, I find it quite complex due to all the
interactions.

Thanks, Roger.



* Re: [PATCH v2 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
From: Roger Pau Monné @ 2022-03-28 15:24 UTC
  To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, xen-devel

On Mon, Mar 28, 2022 at 04:21:02PM +0200, Jan Beulich wrote:
> On 15.03.2022 15:18, Roger Pau Monne wrote:
> > +void amd_init_ssbd(const struct cpuinfo_x86 *c)
> > +{
> > +	if (cpu_has_ssb_no)
> > +		return;
> > +
> > +	if (cpu_has_amd_ssbd) {
> > +		/* Handled by common MSR_SPEC_CTRL logic */
> > +		return;
> > +	}
> > +
> > +	if (cpu_has_virt_ssbd) {
> > +		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
> > +		return;
> > +	}
> > +
> > +	if (!set_legacy_ssbd(c, opt_ssbd))
> > +	{
> 
> Nit: In this file the brace belongs on the earlier line and ...
> 
> >  		printk_once(XENLOG_ERR "No SSBD controls available\n");
> > +		if (amd_legacy_ssbd)
> > +			panic("CPU feature mismatch: no legacy SSBD\n");
> > +	}
> > +	else if ( c == &boot_cpu_data )
> 
> ... you want to omit the blanks immediately inside the parentheses here.

Ouch, yes.

> > +		amd_legacy_ssbd = true;
> > +}
> > +
> > +static struct ssbd_core {
> > +    spinlock_t lock;
> > +    unsigned int count;
> > +} *ssbd_core;
> > +static unsigned int __ro_after_init ssbd_max_cores;
> > +
> > +bool __init amd_setup_legacy_ssbd(void)
> > +{
> > +	unsigned int i;
> > +
> > +	if (boot_cpu_data.x86 != 0x17 || boot_cpu_data.x86_num_siblings <= 1)
> > +		return true;
> > +
> > +	/*
> > +	 * One could be forgiven for thinking that c->x86_max_cores is the
> > +	 * correct value to use here.
> > +	 *
> > +	 * However, that value is derived from the current configuration, and
> > +	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
> > +	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
> > +	 */
> > +	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
> > +		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
> > +		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
> > +	}
> > +	if (!ssbd_max_cores)
> > +		return false;
> > +
> > +	/* Max is two sockets for Fam17h hardware. */
> > +	ssbd_core = xzalloc_array(struct ssbd_core, ssbd_max_cores * 2);
> 
> If I'm not mistaken this literal 2, ...
> 
> > +	if (!ssbd_core)
> > +		return false;
> > +
> > +	for (i = 0; i < ssbd_max_cores * 2; i++) {
> 
> ... this one, and ...
> 
> > +		spin_lock_init(&ssbd_core[i].lock);
> > +		/* Record initial state, also applies to any hotplug CPU. */
> > +		if (opt_ssbd)
> > +			ssbd_core[i].count = boot_cpu_data.x86_num_siblings;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +void amd_set_legacy_ssbd(bool enable)
> > +{
> > +	const struct cpuinfo_x86 *c = &current_cpu_data;
> > +	struct ssbd_core *core;
> > +	unsigned long flags;
> > +
> > +	if (c->x86 != 0x17 || c->x86_num_siblings <= 1) {
> > +		BUG_ON(!set_legacy_ssbd(c, enable));
> > +		return;
> > +	}
> > +
> > +	BUG_ON(c->phys_proc_id >= 2);
> 
> ... this one are all referring to the same thing. Please use a #define to
> make the connection obvious.

Indeed. That's the maximum number of sockets possible with that CPU
family (2).

> > @@ -677,14 +680,17 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
> >          if ( !cp->extd.virt_ssbd )
> >              goto gp_fault;
> >  
> > -        /*
> > -         * Only supports SSBD bit, the rest are ignored. Only modify the SSBD
> > -         * bit in case other bits are set.
> > -         */
> > -        if ( val & SPEC_CTRL_SSBD )
> > -            msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
> > +        /* Only supports SSBD bit, the rest are ignored. */
> > +        if ( cpu_has_amd_ssbd )
> > +        {
> > +            /* Only modify the SSBD bit in case other bits are set. */
> 
> While more a comment on the earlier patch introducing this wording, it
> occurred to me only here that this is ambiguous: It can also be read as
> "Only modify the SSBD bit as long as other bits are set."

Hm, no, that's not what I meant. I meant to note that here we are
careful to only modify the SSBD bit of spec_ctrl, because other bits
might be used for other purposes. We can't do:

msrs->spec_ctrl.raw = SPEC_CTRL_SSBD;

But maybe this doesn't require a comment, as it seems to raise more
questions than it answers?

> > +            if ( val & SPEC_CTRL_SSBD )
> > +                msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
> > +            else
> > +                msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
> > +        }
> >          else
> > -            msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
> > +            msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
> 
> I also think the comment applies equally to the "else" logic, so perhaps
> the comment would best remain as is (and merely be re-worded in the
> earlier patch)?

Sure, let's see if we can get consensus on a proper wording.

Thanks, Roger.



* Re: [PATCH v2 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
From: Jan Beulich @ 2022-03-28 15:26 UTC
  To: Roger Pau Monné; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 28.03.2022 17:19, Roger Pau Monné wrote:
> On Mon, Mar 28, 2022 at 04:02:40PM +0200, Jan Beulich wrote:
>> On 15.03.2022 15:18, Roger Pau Monne wrote:
>>> Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
>>> hardware has support for it. This requires adding logic in the
>>> vm{entry,exit} paths for SVM in order to context switch between the
>>> hypervisor value and the guest one. The added handlers for context
>>> switch will also be used for the legacy SSBD support.
>>>
>>> Introduce a new synthetic feature leaf (X86_FEATURE_VIRT_SC_MSR_HVM)
>>> to signal whether VIRT_SPEC_CTRL needs to be handled on guest
>>> vm{entry,exit}.
>>>
>>> Note the change in the handling of VIRT_SSBD in the featureset
>>> description. The change from 's' to 'S' is due to the fact that now if
>>> VIRT_SSBD is exposed by the hardware it can be passed through to HVM
>>> guests.
>>
>> But lower vs upper case mean "(do not) expose by default", not whether
>> underlying hardware exposes the feature. In patch 1 you actually used
>> absence in underlying hardware to justify !, not s.
> 
> Maybe I'm getting lost with all this !, lower case and upper case
> stuff.
> 
> Patch 1 uses '!s' to account for:
>  * '!': the feature might be exposed to guests even when not present
>    on the host hardware.
>  * 's': the feature won't be exposed by default.
> 
> Which I think matches what is implemented in patch 1 where VIRT_SSBD
> is possibly exposed to guest when running on hardware that don't
> necessarily have VIRT_SSBD (ie: because we use AMD_SSBD in order to
> implement VIRT_SSBD).
> 
> Patch 2 changes the 's' to 'S' because this patch introduces support
> to expose VIRT_SSBD to guests by default when the host (virtual)
> hardware also supports it.

Hmm, so maybe the wording in the description is merely a little
unfortunate.

>>> @@ -610,6 +611,14 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
>>>      svm_intercept_msr(v, MSR_SPEC_CTRL,
>>>                        cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
>>>  
>>> +    /*
>>> +     * Give access to MSR_VIRT_SPEC_CTRL if the guest has been told about it
>>> +     * and the hardware implements it.
>>> +     */
>>> +    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
>>> +                      cp->extd.virt_ssbd && cpu_has_virt_ssbd ?
>>
>> Despite giving the guest direct access guest_{rd,wr}msr() can be hit
>> for such guests. Don't you need to update what patch 1 added there?
> 
> Indeed, I should add the chunk that's added in the next patch.
> 
>> Also, is there a reason the qualifier here is not in sync with ...
>>
>>> @@ -3105,6 +3114,36 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>>      vmcb_set_vintr(vmcb, intr);
>>>  }
>>>  
>>> +/* Called with GIF=0. */
>>> +void vmexit_virt_spec_ctrl(void)
>>> +{
>>> +    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
>>> +
>>> +    if ( cpu_has_virt_ssbd )
>>
>> ... this one? Since the patching is keyed to VIRT_SC_MSR_HVM, which in
>> turn is enabled only when cpu_has_virt_ssbd, it would seem to me that
>> if any asymmetry was okay here, then using cp->extd.virt_ssbd without
>> cpu_has_virt_ssbd.
> 
> Using just cp->extd.virt_ssbd will be wrong when the next patch also
> introduces support for exposing VIRT_SSBD by setting SSBD using the
> non-architectural method.

Well, if the next patch needs to make adjustments here, then that's
fine but different from what's needed at this point. However, ...

> We need to context switch just based on cpu_has_virt_ssbd because the
> running guest might not get VIRT_SSBD offered (cp->extd.virt_ssbd ==
> false) but Xen might be using SSBD itself so it needs to context
> switch in order to activate it. Ie: if !cp->extd.virt_ssbd then the
> guest will always run with SSBD disabled, but Xen might not.

... yes, I see.

> Hope all this makes sense,

It does, and ...

> I find it quite complex due to all the interactions.

... yes, I definitely agree.

Jan




* Re: [PATCH v2 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
From: Jan Beulich @ 2022-03-28 15:32 UTC
  To: Roger Pau Monné; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 28.03.2022 17:24, Roger Pau Monné wrote:
> On Mon, Mar 28, 2022 at 04:21:02PM +0200, Jan Beulich wrote:
>> On 15.03.2022 15:18, Roger Pau Monne wrote:
>>> @@ -677,14 +680,17 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>>          if ( !cp->extd.virt_ssbd )
>>>              goto gp_fault;
>>>  
>>> -        /*
>>> -         * Only supports SSBD bit, the rest are ignored. Only modify the SSBD
>>> -         * bit in case other bits are set.
>>> -         */
>>> -        if ( val & SPEC_CTRL_SSBD )
>>> -            msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
>>> +        /* Only supports SSBD bit, the rest are ignored. */
>>> +        if ( cpu_has_amd_ssbd )
>>> +        {
>>> +            /* Only modify the SSBD bit in case other bits are set. */
>>
>> While more a comment on the earlier patch introducing this wording, it
>> occurred to me only here that this is ambiguous: It can also be read as
>> "Only modify the SSBD bit as long as other bits are set."
> 
> Hm, no, that's not what I meant. I meant to note that here we are
> careful to only modify the SSBD bit of spec_ctrl, because other bits
> might be used for other purposes.

Right, I understand that's what you mean, and because I understand
the ambiguity also slipped my attention in the earlier patch.

> We can't do:
> 
> msrs->spec_ctrl.raw = SPEC_CTRL_SSBD;
> 
> But maybe this doesn't require a comment, as it seems to raise more
> questions than it answers?

I wouldn't mind if (in the earlier patch) you simply dropped the 2nd
sentence. Or alternatively how about "Also only record the SSBD bit
to return for future reads" or something along these lines?

Jan


