* [PATCH v4 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests
@ 2022-04-27 10:47 Roger Pau Monne
  2022-04-27 10:47 ` [PATCH v4 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Roger Pau Monne @ 2022-04-27 10:47 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Hello,

The following series implements support for MSR_VIRT_SPEC_CTRL
(VIRT_SSBD) on different AMD CPU families.

Note that the support is added backwards, starting with the newer CPUs
that support MSR_SPEC_CTRL and moving to the older ones, which use
either MSR_VIRT_SPEC_CTRL or the SSBD bit in LS_CFG.

Xen is still free to use its own SSBD setting, as the selection is
context switched on vm{entry,exit}.

On Zen2 and later, SPEC_CTRL.SSBD should exist and should be used in
preference to VIRT_SPEC_CTRL.SSBD.  However, for migration
compatibility, Xen offers VIRT_SSBD to guests (in the max cpuid policy,
not default) implemented in terms of SPEC_CTRL.SSBD.

On Fam15h through Zen1, Xen exposes VIRT_SSBD to guests by default to
abstract away the model and/or hypervisor specific differences in
MSR_LS_CFG/MSR_VIRT_SPEC_CTRL.

So the implementation of VIRT_SSBD exposed to HVM guests will use one of
the following underlying mechanisms, in the preference order listed
below:

 * SPEC_CTRL.SSBD: patch 1.
 * VIRT_SPEC_CTRL.SSBD: patch 2.
 * Non-architectural way using LS_CFG: patch 3.
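As a summary of the dispatch the three patches end up implementing, a
minimal sketch (not code from the series; the struct and variable names
are stand-ins for the real ones in the patches below):

#include <stdbool.h>
#include <stdint.h>

#define SPEC_CTRL_SSBD (1u << 2)

/* Hypothetical stand-ins for the per-vCPU MSR state and host feature. */
struct vcpu_msrs_model {
    uint32_t spec_ctrl;       /* shadow MSR_SPEC_CTRL value (patch 1) */
    uint32_t virt_spec_ctrl;  /* cached guest VIRT_SPEC_CTRL (patches 2/3) */
};
extern bool cpu_has_amd_ssbd; /* SPEC_CTRL.SSBD available on the host */

void virt_spec_ctrl_write(struct vcpu_msrs_model *msrs, uint64_t val)
{
    bool ssbd = val & SPEC_CTRL_SSBD; /* only the SSBD bit is honoured */

    if (cpu_has_amd_ssbd) {
        /* Patch 1: fold the bit into the shadow SPEC_CTRL value, which
         * is merged with the guest's own SPEC_CTRL on VM entry. */
        if (ssbd)
            msrs->spec_ctrl |= SPEC_CTRL_SSBD;
        else
            msrs->spec_ctrl &= ~SPEC_CTRL_SSBD;
    } else {
        /* Patches 2/3: cache the guest's choice; it is applied around
         * VMRUN, via MSR_VIRT_SPEC_CTRL or the LS_CFG bit. */
        msrs->virt_spec_ctrl = ssbd ? SPEC_CTRL_SSBD : 0;
    }
}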

Roger Pau Monne (3):
  amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD

 xen/arch/x86/cpu/amd.c                      | 119 +++++++++++++++++---
 xen/arch/x86/cpuid.c                        |  28 +++++
 xen/arch/x86/hvm/hvm.c                      |   1 +
 xen/arch/x86/hvm/svm/entry.S                |   8 ++
 xen/arch/x86/hvm/svm/svm.c                  |  40 +++++++
 xen/arch/x86/include/asm/amd.h              |   4 +
 xen/arch/x86/include/asm/cpufeatures.h      |   1 +
 xen/arch/x86/include/asm/msr.h              |  14 +++
 xen/arch/x86/msr.c                          |  26 +++++
 xen/arch/x86/spec_ctrl.c                    |  12 +-
 xen/include/public/arch-x86/cpufeatureset.h |   2 +-
 11 files changed, 236 insertions(+), 19 deletions(-)

-- 
2.35.1




* [PATCH v4 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  2022-04-27 10:47 [PATCH v4 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
@ 2022-04-27 10:47 ` Roger Pau Monne
  2022-04-28 16:11   ` Jan Beulich
  2022-04-27 10:47 ` [PATCH v4 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
  2022-04-27 10:47 ` [PATCH v4 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD Roger Pau Monne
  2 siblings, 1 reply; 11+ messages in thread
From: Roger Pau Monne @ 2022-04-27 10:47 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Use the logic to set shadow SPEC_CTRL values in order to implement
support for VIRT_SPEC_CTRL (signaled by VIRT_SSBD CPUID flag) for HVM
guests. This includes using the spec_ctrl vCPU MSR variable to store
the guest set value of VIRT_SPEC_CTRL.SSBD, which will be OR'ed with
any SPEC_CTRL values being set by the guest.

On hardware having SPEC_CTRL VIRT_SPEC_CTRL will not be offered by
default to guests. VIRT_SPEC_CTRL will only be part of the max CPUID
policy so it can be enabled for compatibility purposes.

Use '!' to annotate the feature in order to express that the presence
of the bit is not directly tied to its value in the host policy.
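
For context, a minimal guest-side sketch of how a kernel would consume
the MSR this patch implements (the MSR index and SSBD bit follow this
series; the wrmsr64() wrapper is a hypothetical helper, not code from
the patch):

#include <stdint.h>

#define MSR_VIRT_SPEC_CTRL 0xc001011f /* as used by this series */
#define SPEC_CTRL_SSBD     (1u << 2)

static inline void wrmsr64(uint32_t msr, uint64_t val)
{
    asm volatile ("wrmsr" :: "c" (msr),
                  "a" ((uint32_t)val), "d" ((uint32_t)(val >> 32)));
}

/* Guest kernel, after finding VIRT_SSBD (CPUID 0x80000008, EBX[25]): */
void guest_enable_ssbd(void)
{
    wrmsr64(MSR_VIRT_SPEC_CTRL, SPEC_CTRL_SSBD);
}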

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - Use '!' to annotate the feature.

Changes since v2:
 - Reword reasoning for using '!s'.
 - Trim comment about only setting SSBD bit in spec_ctrl.raw.

Changes since v1:
 - Only expose VIRT_SSBD if AMD_SSBD is available on the host.
 - Revert change to msr-sc= command line option documentation.
 - Only set or clear the SSBD bit of spec_ctrl.
---
 xen/arch/x86/cpuid.c                        |  7 +++++++
 xen/arch/x86/hvm/hvm.c                      |  1 +
 xen/arch/x86/include/asm/msr.h              |  4 ++++
 xen/arch/x86/msr.c                          | 18 ++++++++++++++++++
 xen/arch/x86/spec_ctrl.c                    |  3 ++-
 xen/include/public/arch-x86/cpufeatureset.h |  2 +-
 6 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 7e0b395698..979dcf8164 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -550,6 +550,13 @@ static void __init calculate_hvm_max_policy(void)
         __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
         __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
     }
+    else if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
+        /*
+         * If SPEC_CTRL.SSBD is available VIRT_SPEC_CTRL.SSBD can be exposed
+         * and implemented using the former. Expose in the max policy only as
+         * the preference is for guests to use SPEC_CTRL.SSBD if available.
+         */
+        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     /*
      * With VT-x, some features are only supported by Xen if dedicated
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5b16fb4cd8..db8f95ef7c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1334,6 +1334,7 @@ static const uint32_t msrs_to_send[] = {
     MSR_INTEL_MISC_FEATURES_ENABLES,
     MSR_IA32_BNDCFGS,
     MSR_IA32_XSS,
+    MSR_VIRT_SPEC_CTRL,
     MSR_AMD64_DR0_ADDRESS_MASK,
     MSR_AMD64_DR1_ADDRESS_MASK,
     MSR_AMD64_DR2_ADDRESS_MASK,
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index ce4fe51afe..ab6fbb5051 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -291,6 +291,7 @@ struct vcpu_msrs
 {
     /*
      * 0x00000048 - MSR_SPEC_CTRL
+     * 0xc001011f - MSR_VIRT_SPEC_CTRL (if X86_FEATURE_AMD_SSBD)
      *
      * For PV guests, this holds the guest kernel value.  It is accessed on
      * every entry/exit path.
@@ -306,6 +307,9 @@ struct vcpu_msrs
      * We must clear/restore Xen's value before/after VMRUN to avoid unduly
      * influencing the guest.  In order to support "behind the guest's back"
      * protections, we load this value (commonly 0) before VMRUN.
+     *
+     * One such "behind the guest's back" usage is setting SPEC_CTRL.SSBD
+     * if the guest sets VIRT_SPEC_CTRL.SSBD.
      */
     struct {
         uint32_t raw;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 01a15857b7..72c175fd8b 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -381,6 +381,13 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
                ? K8_HWCR_TSC_FREQ_SEL : 0;
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        if ( !cp->extd.virt_ssbd )
+            goto gp_fault;
+
+        *val = msrs->spec_ctrl.raw & SPEC_CTRL_SSBD;
+        break;
+
     case MSR_AMD64_DE_CFG:
         if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
             goto gp_fault;
@@ -666,6 +673,17 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             wrmsr_tsc_aux(val);
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        if ( !cp->extd.virt_ssbd )
+            goto gp_fault;
+
+        /* Only supports SSBD bit, the rest are ignored. */
+        if ( val & SPEC_CTRL_SSBD )
+            msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
+        else
+            msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
+        break;
+
     case MSR_AMD64_DE_CFG:
         /*
          * OpenBSD 6.7 will panic if writing to DE_CFG triggers a #GP:
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 1408e4c7ab..f338bfe292 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -402,12 +402,13 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
      * mitigation support for guests.
      */
 #ifdef CONFIG_HVM
-    printk("  Support for HVM VMs:%s%s%s%s%s\n",
+    printk("  Support for HVM VMs:%s%s%s%s%s%s\n",
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
             opt_eager_fpu)                           ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
+           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_VIRT_SPEC_CTRL" : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            boot_cpu_has(X86_FEATURE_MD_CLEAR)        ? " MD_CLEAR"      : "");
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 9cee4b439e..5aa3c82fc6 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -265,7 +265,7 @@ XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S  IBRS provides same-mode protection
 XEN_CPUFEATURE(NO_LMSL,       8*32+20) /*S  EFER.LMSLE no longer supported. */
 XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
 XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*S  MSR_SPEC_CTRL.SSBD available */
-XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*   MSR_VIRT_SPEC_CTRL.SSBD */
+XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!  MSR_VIRT_SPEC_CTRL.SSBD */
 XEN_CPUFEATURE(SSB_NO,        8*32+26) /*A  Hardware not vulnerable to SSB */
 XEN_CPUFEATURE(PSFD,          8*32+28) /*S  MSR_SPEC_CTRL.PSFD */
 
-- 
2.35.1




* [PATCH v4 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  2022-04-27 10:47 [PATCH v4 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
  2022-04-27 10:47 ` [PATCH v4 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
@ 2022-04-27 10:47 ` Roger Pau Monne
  2022-04-29 10:35   ` Jan Beulich
  2022-04-27 10:47 ` [PATCH v4 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD Roger Pau Monne
  2 siblings, 1 reply; 11+ messages in thread
From: Roger Pau Monne @ 2022-04-27 10:47 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Allow HVM guests access to MSR_VIRT_SPEC_CTRL if the platform Xen is
running on has support for it.  This requires adding logic in the
vm{entry,exit} paths for SVM in order to context switch between the
hypervisor value and the guest one.  The added handlers for context
switch will also be used for the legacy SSBD support.

Introduce a new synthetic feature leaf (X86_FEATURE_VIRT_SC_MSR_HVM)
to signal whether VIRT_SPEC_CTRL needs to be handled on guest
vm{entry,exit}.
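
In pseudo-C, the context switching behaves roughly as below (a
simplified sketch with stand-in names; the real
vm{entry,exit}_virt_spec_ctrl handlers are in the diff that follows):

#include <stdbool.h>
#include <stdint.h>

#define SPEC_CTRL_SSBD (1u << 2)

extern bool opt_ssbd;                       /* Xen's command line choice */
extern void write_virt_spec_ctrl(uint32_t); /* stand-in for the MSR write */

/* Load the guest's cached value on VM entry and restore Xen's choice on
 * VM exit; the MSR write is skipped when both sides already agree. */
void vmentry_hook(uint32_t guest_val)
{
    uint32_t host_val = opt_ssbd ? SPEC_CTRL_SSBD : 0;

    if (guest_val != host_val)
        write_virt_spec_ctrl(guest_val);
}

void vmexit_hook(uint32_t guest_val)
{
    uint32_t host_val = opt_ssbd ? SPEC_CTRL_SSBD : 0;

    if (guest_val != host_val)
        write_virt_spec_ctrl(host_val);
}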

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - Always trap write accesses to VIRT_SPEC_CTRL in order to cache the
   guest setting.
 - Do not use the 'S' annotation for the VIRT_SSBD feature.

Changes since v2:
 - Reword part of the commit message regarding annotation change.
 - Fix MSR intercept.
 - Add handling of VIRT_SPEC_CTRL to guest_{rd,wr}msr when using
   VIRT_SSBD also.

Changes since v1:
 - Introduce virt_spec_ctrl vCPU field.
 - Context switch VIRT_SPEC_CTRL on vmentry/vmexit separately from
   SPEC_CTRL.
---
 xen/arch/x86/cpuid.c                   | 11 ++++++++
 xen/arch/x86/hvm/svm/entry.S           |  8 ++++++
 xen/arch/x86/hvm/svm/svm.c             | 35 ++++++++++++++++++++++++++
 xen/arch/x86/include/asm/cpufeatures.h |  1 +
 xen/arch/x86/include/asm/msr.h         | 10 ++++++++
 xen/arch/x86/msr.c                     | 16 +++++++++---
 xen/arch/x86/spec_ctrl.c               |  9 ++++++-
 7 files changed, 85 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 979dcf8164..9a8c73f067 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -541,6 +541,10 @@ static void __init calculate_hvm_max_policy(void)
          raw_cpuid_policy.basic.sep )
         __set_bit(X86_FEATURE_SEP, hvm_featureset);
 
+    if ( !boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+        /* Clear VIRT_SSBD if VIRT_SPEC_CTRL is not exposed to guests. */
+        __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+
     /*
      * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
      * availability, or admin choice), hide the feature.
@@ -597,6 +601,13 @@ static void __init calculate_hvm_def_policy(void)
     guest_common_feature_adjustments(hvm_featureset);
     guest_common_default_feature_adjustments(hvm_featureset);
 
+    /*
+     * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
+     * VIRT_SC_MSR_HVM is set.
+     */
+    if ( boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+
     sanitise_featureset(hvm_featureset);
     cpuid_featureset_to_policy(hvm_featureset, p);
     recalculate_xstate(p);
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index 4ae55a2ef6..2f63a2e3c6 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -19,6 +19,8 @@
 
         .file "svm/entry.S"
 
+#include <xen/lib.h>
+
 #include <asm/asm_defns.h>
 #include <asm/page.h>
 
@@ -57,6 +59,9 @@ __UNLIKELY_END(nsvm_hap)
 
         clgi
 
+        ALTERNATIVE "", STR(call vmentry_virt_spec_ctrl), \
+                        X86_FEATURE_VIRT_SC_MSR_HVM
+
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
         /* SPEC_CTRL_EXIT_TO_SVM       Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         .macro svm_vmentry_spec_ctrl
@@ -114,6 +119,9 @@ __UNLIKELY_END(nsvm_hap)
         ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
+        ALTERNATIVE "", STR(call vmexit_virt_spec_ctrl), \
+                        X86_FEATURE_VIRT_SC_MSR_HVM
+
         stgi
 GLOBAL(svm_stgi_label)
         mov  %rsp,%rdi
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 2455835eda..e15c9754d7 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -52,6 +52,7 @@
 #include <asm/hvm/svm/svmdebug.h>
 #include <asm/hvm/svm/nestedsvm.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/spec_ctrl.h>
 #include <asm/x86_emulate.h>
 #include <public/sched.h>
 #include <asm/hvm/vpt.h>
@@ -610,6 +611,16 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     svm_intercept_msr(v, MSR_SPEC_CTRL,
                       cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
 
+    /*
+     * Always trap write accesses to VIRT_SPEC_CTRL in order to cache the guest
+     * setting and avoid having to perform a rdmsr on vmexit to get the guest
+     * setting even if VIRT_SSBD is offered to Xen itself.
+     */
+    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
+                      cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
+                      !cpu_has_amd_ssbd ?
+                      MSR_INTERCEPT_WRITE : MSR_INTERCEPT_RW);
+
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
     svm_intercept_msr(v, MSR_PRED_CMD,
                       cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
@@ -3105,6 +3116,30 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
     vmcb_set_vintr(vmcb, intr);
 }
 
+/* Called with GIF=0. */
+void vmexit_virt_spec_ctrl(void)
+{
+    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
+
+    if ( val == current->arch.msrs->virt_spec_ctrl.raw )
+        return;
+
+    if ( cpu_has_virt_ssbd )
+        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+}
+
+/* Called with GIF=0. */
+void vmentry_virt_spec_ctrl(void)
+{
+    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
+
+    if ( val == current->arch.msrs->virt_spec_ctrl.raw )
+        return;
+
+    if ( cpu_has_virt_ssbd )
+        wrmsr(MSR_VIRT_SPEC_CTRL, current->arch.msrs->virt_spec_ctrl.raw, 0);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index 7413febd7a..2240547b64 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -40,6 +40,7 @@ XEN_CPUFEATURE(SC_VERW_HVM,       X86_SYNTH(24)) /* VERW used by Xen for HVM */
 XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
 XEN_CPUFEATURE(XEN_SHSTK,         X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
 XEN_CPUFEATURE(XEN_IBT,           X86_SYNTH(27)) /* Xen uses CET Indirect Branch Tracking */
+XEN_CPUFEATURE(VIRT_SC_MSR_HVM,   X86_SYNTH(28)) /* MSR_VIRT_SPEC_CTRL exposed to HVM */
 
 /* Bug words follow the synthetic words. */
 #define X86_NR_BUG 1
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index ab6fbb5051..460aabe84f 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -375,6 +375,16 @@ struct vcpu_msrs
      */
     uint32_t tsc_aux;
 
+    /*
+     * 0xc001011f - MSR_VIRT_SPEC_CTRL (if !X86_FEATURE_AMD_SSBD)
+     *
+     * AMD only. Guest selected value, saved and restored on guest VM
+     * entry/exit.
+     */
+    struct {
+        uint32_t raw;
+    } virt_spec_ctrl;
+
     /*
      * 0xc00110{27,19-1b} MSR_AMD64_DR{0-3}_ADDRESS_MASK
      *
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 72c175fd8b..a1e268eea9 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -385,7 +385,10 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         if ( !cp->extd.virt_ssbd )
             goto gp_fault;
 
-        *val = msrs->spec_ctrl.raw & SPEC_CTRL_SSBD;
+        if ( cpu_has_amd_ssbd )
+            *val = msrs->spec_ctrl.raw & SPEC_CTRL_SSBD;
+        else
+            *val = msrs->virt_spec_ctrl.raw;
         break;
 
     case MSR_AMD64_DE_CFG:
@@ -678,10 +681,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             goto gp_fault;
 
         /* Only supports SSBD bit, the rest are ignored. */
-        if ( val & SPEC_CTRL_SSBD )
-            msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
+        if ( cpu_has_amd_ssbd )
+        {
+            if ( val & SPEC_CTRL_SSBD )
+                msrs->spec_ctrl.raw |= SPEC_CTRL_SSBD;
+            else
+                msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
+        }
         else
-            msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
+            msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
         break;
 
     case MSR_AMD64_DE_CFG:
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index f338bfe292..0d5ec877d1 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -406,9 +406,12 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
+            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) ||
             opt_eager_fpu)                           ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
-           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_VIRT_SPEC_CTRL" : "",
+           (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
+            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM)) ? " MSR_VIRT_SPEC_CTRL"
+                                                       : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            boot_cpu_has(X86_FEATURE_MD_CLEAR)        ? " MD_CLEAR"      : "");
@@ -1069,6 +1072,10 @@ void __init init_speculation_mitigations(void)
             setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
     }
 
+    /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
+    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd && cpu_has_virt_ssbd )
+        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
+
     /* If we have IBRS available, see whether we should use it. */
     if ( has_spec_ctrl && ibrs )
         default_xen_spec_ctrl |= SPEC_CTRL_IBRS;
-- 
2.35.1




* [PATCH v4 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-04-27 10:47 [PATCH v4 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
  2022-04-27 10:47 ` [PATCH v4 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
  2022-04-27 10:47 ` [PATCH v4 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
@ 2022-04-27 10:47 ` Roger Pau Monne
  2022-04-29 10:59   ` Jan Beulich
  2 siblings, 1 reply; 11+ messages in thread
From: Roger Pau Monne @ 2022-04-27 10:47 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Expose VIRT_SSBD to guests if the hardware supports setting SSBD in
the LS_CFG MSR (a.k.a. non-architectural way). Different AMD CPU
families use different bits in LS_CFG, so exposing VIRT_SPEC_CTRL.SSBD
allows for a unified way of exposing SSBD support to guests on AMD
hardware that's compatible migration-wise, regardless of which
underlying mechanism is used to set SSBD.

Note that on AMD Family 17h and Hygon Family 18h processors the value
of SSBD in LS_CFG is shared between threads on the same core, so
there's extra logic in order to synchronize the value and have SSBD
set as long as one of the threads in the core requires it to be set.
Such logic also requires extra storage for each thread's state, which is
allocated at initialization time.
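
A stand-alone model of that synchronization (hypothetical names; the
real implementation is amd_set_legacy_ssbd() in the diff below):

#include <stdatomic.h>
#include <stdbool.h>

/* Per-core state: the SSBD bit in LS_CFG is shared by all threads. */
struct core_ssbd_model {
    atomic_flag lock;   /* open-coded spinlock, usable with GIF==0 */
    unsigned int count; /* threads currently wanting SSBD set */
};

extern void write_ls_cfg_ssbd(bool enable); /* stand-in for the MSR write */

void thread_set_ssbd(struct core_ssbd_model *core, bool enable)
{
    while (atomic_flag_test_and_set_explicit(&core->lock,
                                             memory_order_acquire))
        ; /* spin */

    core->count += enable ? 1 : -1;
    /* Touch the shared MSR bit only on the 0 <-> 1 transitions. */
    if (enable ? core->count == 1 : core->count == 0)
        write_ls_cfg_ssbd(enable);

    atomic_flag_clear_explicit(&core->lock, memory_order_release);
}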

Do the context switching of the SSBD selection in LS_CFG between
hypervisor and guest in the same handler that's already used to switch
the value of VIRT_SPEC_CTRL.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - Align ssbd per-core struct to a cache line.
 - Open code a simple spinlock to avoid playing tricks with the lock
   detector.
 - s/ssbd_core/ssbd_ls_cfg/.
 - Fix log message wording.
 - Fix define name and remove comment.
 - Also handle Hygon processors (Fam18h).
 - Add changelog entry.

Changes since v2:
 - Fix coding style issues.
 - Use AMD_ZEN1_MAX_SOCKETS to define the max number of possible
   sockets in Zen1 systems.

Changes since v1:
 - Report legacy SSBD support using a global variable.
 - Use ro_after_init for ssbd_max_cores.
 - Handle boot_cpu_data.x86_num_siblings < 1.
 - Add comment regarding _irqsave usage in amd_set_legacy_ssbd.
---
 xen/arch/x86/cpu/amd.c         | 119 ++++++++++++++++++++++++++++-----
 xen/arch/x86/cpuid.c           |  10 +++
 xen/arch/x86/hvm/svm/svm.c     |   5 ++
 xen/arch/x86/include/asm/amd.h |   4 ++
 xen/arch/x86/spec_ctrl.c       |   4 +-
 5 files changed, 124 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 4999f8be2b..a911e2e50a 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -48,6 +48,7 @@ boolean_param("allow_unsafe", opt_allow_unsafe);
 
 /* Signal whether the ACPI C1E quirk is required. */
 bool __read_mostly amd_acpi_c1e_quirk;
+bool __ro_after_init amd_legacy_ssbd;
 
 static inline int rdmsr_amd_safe(unsigned int msr, unsigned int *lo,
 				 unsigned int *hi)
@@ -685,23 +686,10 @@ void amd_init_lfence(struct cpuinfo_x86 *c)
  * Refer to the AMD Speculative Store Bypass whitepaper:
  * https://developer.amd.com/wp-content/resources/124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
  */
-void amd_init_ssbd(const struct cpuinfo_x86 *c)
+static bool set_legacy_ssbd(const struct cpuinfo_x86 *c, bool enable)
 {
 	int bit = -1;
 
-	if (cpu_has_ssb_no)
-		return;
-
-	if (cpu_has_amd_ssbd) {
-		/* Handled by common MSR_SPEC_CTRL logic */
-		return;
-	}
-
-	if (cpu_has_virt_ssbd) {
-		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
-		return;
-	}
-
 	switch (c->x86) {
 	case 0x15: bit = 54; break;
 	case 0x16: bit = 33; break;
@@ -715,20 +703,117 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
 		if (rdmsr_safe(MSR_AMD64_LS_CFG, val) ||
 		    ({
 			    val &= ~mask;
-			    if (opt_ssbd)
+			    if (enable)
 				    val |= mask;
 			    false;
 		    }) ||
 		    wrmsr_safe(MSR_AMD64_LS_CFG, val) ||
 		    ({
 			    rdmsrl(MSR_AMD64_LS_CFG, val);
-			    (val & mask) != (opt_ssbd * mask);
+			    (val & mask) != (enable * mask);
 		    }))
 			bit = -1;
 	}
 
-	if (bit < 0)
+	return bit >= 0;
+}
+
+void amd_init_ssbd(const struct cpuinfo_x86 *c)
+{
+	if (cpu_has_ssb_no)
+		return;
+
+	if (cpu_has_amd_ssbd) {
+		/* Handled by common MSR_SPEC_CTRL logic */
+		return;
+	}
+
+	if (cpu_has_virt_ssbd) {
+		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
+		return;
+	}
+
+	if (!set_legacy_ssbd(c, opt_ssbd)) {
 		printk_once(XENLOG_ERR "No SSBD controls available\n");
+		if (amd_legacy_ssbd)
+			panic("CPU feature mismatch: no legacy SSBD\n");
+	} else if (c == &boot_cpu_data)
+		amd_legacy_ssbd = true;
+}
+
+static struct ssbd_ls_cfg {
+    bool locked;
+    unsigned int count;
+} __cacheline_aligned *ssbd_ls_cfg;
+static unsigned int __ro_after_init ssbd_max_cores;
+#define AMD_FAM17H_MAX_SOCKETS 2
+
+bool __init amd_setup_legacy_ssbd(void)
+{
+	unsigned int i;
+
+	if ((boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
+	    boot_cpu_data.x86_num_siblings <= 1)
+		return true;
+
+	/*
+	 * One could be forgiven for thinking that c->x86_max_cores is the
+	 * correct value to use here.
+	 *
+	 * However, that value is derived from the current configuration, and
+	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
+	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
+	 */
+	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
+		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
+		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
+	}
+	if (!ssbd_max_cores)
+		return false;
+
+	ssbd_ls_cfg = xzalloc_array(struct ssbd_ls_cfg,
+	                            ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS);
+	if (!ssbd_ls_cfg)
+		return false;
+
+	for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
+		/* Record initial state, also applies to any hotplug CPU. */
+		if (opt_ssbd)
+			ssbd_ls_cfg[i].count = boot_cpu_data.x86_num_siblings;
+
+	return true;
+}
+
+void amd_set_legacy_ssbd(bool enable)
+{
+	const struct cpuinfo_x86 *c = &current_cpu_data;
+	struct ssbd_ls_cfg *status;
+
+	if (c->x86 != 0x17 || c->x86_num_siblings <= 1) {
+		BUG_ON(!set_legacy_ssbd(c, enable));
+		return;
+	}
+
+	BUG_ON(c->phys_proc_id >= AMD_FAM17H_MAX_SOCKETS);
+	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
+	status = &ssbd_ls_cfg[c->phys_proc_id * ssbd_max_cores +
+	                      c->cpu_core_id];
+
+	/*
+	 * Open code a very simple spinlock: this function is used with GIF==0
+	 * and different IF values, so would trigger the checklock detector.
+	 * Instead of trying to workaround the detector, use a very simple lock
+	 * implementation: it's better to reduce the amount of code executed
+	 * with GIF==0.
+	 */
+	while ( test_and_set_bool(status->locked) )
+	    cpu_relax();
+	status->count += enable ? 1 : -1;
+	ASSERT(status->count <= c->x86_num_siblings);
+	if (enable ? status->count == 1 : !status->count)
+		BUG_ON(!set_legacy_ssbd(c, enable));
+	barrier();
+	write_atomic(&status->locked, false);
 }
 
 void __init detect_zen2_null_seg_behaviour(void)
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 9a8c73f067..7607155875 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -544,6 +544,16 @@ static void __init calculate_hvm_max_policy(void)
     if ( !boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
         /* Clear VIRT_SSBD if VIRT_SPEC_CTRL is not exposed to guests. */
         __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+    else
+        /*
+         * Expose VIRT_SSBD if VIRT_SPEC_CTRL is supported, as that implies the
+         * underlying hardware is capable of setting SSBD using
+         * non-architectural way or VIRT_SSBD is available.
+         *
+         * Note that if the hardware supports VIRT_SSBD natively this setting
+         * will just override an already set bit.
+         */
+        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     /*
      * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index e15c9754d7..cee11bb244 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -3126,6 +3126,8 @@ void vmexit_virt_spec_ctrl(void)
 
     if ( cpu_has_virt_ssbd )
         wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+    else
+         amd_set_legacy_ssbd(opt_ssbd);
 }
 
 /* Called with GIF=0. */
@@ -3138,6 +3140,9 @@ void vmentry_virt_spec_ctrl(void)
 
     if ( cpu_has_virt_ssbd )
         wrmsr(MSR_VIRT_SPEC_CTRL, current->arch.msrs->virt_spec_ctrl.raw, 0);
+    else
+        amd_set_legacy_ssbd(current->arch.msrs->virt_spec_ctrl.raw &
+                            SPEC_CTRL_SSBD);
 }
 
 /*
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index a82382e6bf..6a42f68542 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -151,4 +151,8 @@ void check_enable_amd_mmconf_dmi(void);
 extern bool amd_acpi_c1e_quirk;
 void amd_check_disable_c1e(unsigned int port, u8 value);
 
+extern bool amd_legacy_ssbd;
+bool amd_setup_legacy_ssbd(void);
+void amd_set_legacy_ssbd(bool enable);
+
 #endif /* __AMD_H__ */
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 0d5ec877d1..495e6f9405 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -22,6 +22,7 @@
 #include <xen/param.h>
 #include <xen/warning.h>
 
+#include <asm/amd.h>
 #include <asm/hvm/svm/svm.h>
 #include <asm/microcode.h>
 #include <asm/msr.h>
@@ -1073,7 +1074,8 @@ void __init init_speculation_mitigations(void)
     }
 
     /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
-    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd && cpu_has_virt_ssbd )
+    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd &&
+         (cpu_has_virt_ssbd || (amd_legacy_ssbd && amd_setup_legacy_ssbd())) )
         setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
 
     /* If we have IBRS available, see whether we should use it. */
-- 
2.35.1




* Re: [PATCH v4 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  2022-04-27 10:47 ` [PATCH v4 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
@ 2022-04-28 16:11   ` Jan Beulich
  0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2022-04-28 16:11 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 27.04.2022 12:47, Roger Pau Monne wrote:
> Use the logic to set shadow SPEC_CTRL values in order to implement
> support for VIRT_SPEC_CTRL (signaled by VIRT_SSBD CPUID flag) for HVM
> guests. This includes using the spec_ctrl vCPU MSR variable to store
> the guest set value of VIRT_SPEC_CTRL.SSBD, which will be OR'ed with
> any SPEC_CTRL values being set by the guest.
> 
> On hardware having SPEC_CTRL VIRT_SPEC_CTRL will not be offered by
> default to guests. VIRT_SPEC_CTRL will only be part of the max CPUID
> policy so it can be enabled for compatibility purposes.
> 
> Use '!' to annotate the feature in order to express that the presence
> of the bit is not directly tied to its value in the host policy.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v4 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  2022-04-27 10:47 ` [PATCH v4 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
@ 2022-04-29 10:35   ` Jan Beulich
  0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2022-04-29 10:35 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 27.04.2022 12:47, Roger Pau Monne wrote:
> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -541,6 +541,10 @@ static void __init calculate_hvm_max_policy(void)
>           raw_cpuid_policy.basic.sep )
>          __set_bit(X86_FEATURE_SEP, hvm_featureset);
>  
> +    if ( !boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
> +        /* Clear VIRT_SSBD if VIRT_SPEC_CTRL is not exposed to guests. */
> +        __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> +
>      /*
>       * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
>       * availability, or admin choice), hide the feature.
> @@ -597,6 +601,13 @@ static void __init calculate_hvm_def_policy(void)
>      guest_common_feature_adjustments(hvm_featureset);
>      guest_common_default_feature_adjustments(hvm_featureset);
>  
> +    /*
> +     * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
> +     * VIRT_SC_MSR_HVM is set.
> +     */
> +    if ( boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
> +        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);

The earlier patch sets the bit in "max" when SC_MSR_HVM && AMD_SSBD.
This patch doesn't set the bit in "max" at all (it only clears it in
one case as per the earlier hunk), thus leading to "def" holding a
set bit which supposedly cannot be set. Irrespective of the feature's
'!' annotation I think we'd better not violate "max" >= "def".
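
For illustration, the invariant being defended, as a hypothetical check
(the featureset array names here are made up, not existing code):

    /* "max" >= "def": the default policy may never set a bit that the
     * max policy lacks. */
    for ( i = 0; i < FSCAPINTS; i++ )
        ASSERT(!(hvm_def_featureset[i] & ~hvm_max_featureset[i]));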

Everything else in this patch looks good to me.

Jan




* Re: [PATCH v4 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-04-27 10:47 ` [PATCH v4 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD Roger Pau Monne
@ 2022-04-29 10:59   ` Jan Beulich
  2022-04-29 15:49     ` Roger Pau Monné
  0 siblings, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2022-04-29 10:59 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 27.04.2022 12:47, Roger Pau Monne wrote:
> Changes since v3:
>  - Align ssbd per-core struct to a cache line.
>  - Open code a simple spinlock to avoid playing tricks with the lock
>    detector.
>  - s/ssbd_core/ssbd_ls_cfg/.
>  - Fix log message wording.
>  - Fix define name and remove comment.
>  - Also handle Hygon processors (Fam18h).
>  - Add changelog entry.

What is this last line about?

> +bool __init amd_setup_legacy_ssbd(void)
> +{
> +	unsigned int i;
> +
> +	if ((boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
> +	    boot_cpu_data.x86_num_siblings <= 1)
> +		return true;
> +
> +	/*
> +	 * One could be forgiven for thinking that c->x86_max_cores is the
> +	 * correct value to use here.
> +	 *
> +	 * However, that value is derived from the current configuration, and
> +	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
> +	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
> +	 */
> +	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
> +		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
> +		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
> +	}
> +	if (!ssbd_max_cores)
> +		return false;
> +
> +	ssbd_ls_cfg = xzalloc_array(struct ssbd_ls_cfg,
> +	                            ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS);
> +	if (!ssbd_ls_cfg)
> +		return false;
> +
> +	for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
> +		/* Record initial state, also applies to any hotplug CPU. */
> +		if (opt_ssbd)
> +			ssbd_ls_cfg[i].count = boot_cpu_data.x86_num_siblings;

Perhaps flip if() and for()?

> +void amd_set_legacy_ssbd(bool enable)
> +{
> +	const struct cpuinfo_x86 *c = &current_cpu_data;
> +	struct ssbd_ls_cfg *status;
> +
> +	if (c->x86 != 0x17 || c->x86_num_siblings <= 1) {
> +		BUG_ON(!set_legacy_ssbd(c, enable));
> +		return;
> +	}
> +
> +	BUG_ON(c->phys_proc_id >= AMD_FAM17H_MAX_SOCKETS);
> +	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
> +	status = &ssbd_ls_cfg[c->phys_proc_id * ssbd_max_cores +
> +	                      c->cpu_core_id];
> +
> +	/*
> +	 * Open code a very simple spinlock: this function is used with GIF==0
> +	 * and different IF values, so would trigger the checklock detector.
> +	 * Instead of trying to workaround the detector, use a very simple lock
> +	 * implementation: it's better to reduce the amount of code executed
> +	 * with GIF==0.
> +	 */
> +	while ( test_and_set_bool(status->locked) )
> +	    cpu_relax();
> +	status->count += enable ? 1 : -1;
> +	ASSERT(status->count <= c->x86_num_siblings);
> +	if (enable ? status->count == 1 : !status->count)
> +		BUG_ON(!set_legacy_ssbd(c, enable));

What are the effects of ASSERT() or BUG_ON() triggering in a GIF=0
region?

> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -544,6 +544,16 @@ static void __init calculate_hvm_max_policy(void)
>      if ( !boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
>          /* Clear VIRT_SSBD if VIRT_SPEC_CTRL is not exposed to guests. */
>          __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> +    else
> +        /*
> +         * Expose VIRT_SSBD if VIRT_SPEC_CTRL is supported, as that implies the
> +         * underlying hardware is capable of setting SSBD using
> +         * non-architectural way or VIRT_SSBD is available.
> +         *
> +         * Note that if the hardware supports VIRT_SSBD natively this setting
> +         * will just override an already set bit.
> +         */
> +        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);

With the 's' annotation gone from the public header, is this last
sentence of the comment actually true? Aiui code near the top of
the function would have zapped the bit from hvm_featureset[].

> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -3126,6 +3126,8 @@ void vmexit_virt_spec_ctrl(void)
>  
>      if ( cpu_has_virt_ssbd )
>          wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
> +    else
> +         amd_set_legacy_ssbd(opt_ssbd);

Nit: Indentation is off by one here. Of course this alone could
easily be adjusted while committing.

> @@ -3138,6 +3140,9 @@ void vmentry_virt_spec_ctrl(void)
>  
>      if ( cpu_has_virt_ssbd )
>          wrmsr(MSR_VIRT_SPEC_CTRL, current->arch.msrs->virt_spec_ctrl.raw, 0);
> +    else
> +        amd_set_legacy_ssbd(current->arch.msrs->virt_spec_ctrl.raw &
> +                            SPEC_CTRL_SSBD);

Would seem cheaper to use !val here (and then val for symmetry in
the other function).

Jan




* Re: [PATCH v4 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-04-29 10:59   ` Jan Beulich
@ 2022-04-29 15:49     ` Roger Pau Monné
  2022-04-29 16:11       ` Roger Pau Monné
  2022-05-02  6:11       ` Jan Beulich
  0 siblings, 2 replies; 11+ messages in thread
From: Roger Pau Monné @ 2022-04-29 15:49 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, xen-devel

On Fri, Apr 29, 2022 at 12:59:58PM +0200, Jan Beulich wrote:
> On 27.04.2022 12:47, Roger Pau Monne wrote:
> > Changes since v3:
> >  - Align ssbd per-core struct to a cache line.
> >  - Open code a simple spinlock to avoid playing tricks with the lock
> >    detector.
> >  - s/ssbd_core/ssbd_ls_cfg/.
> >  - Fix log message wording.
> >  - Fix define name and remove comment.
> >  - Also handle Hygon processors (Fam18h).
> >  - Add changelog entry.
>
> What is this last line about?

Hm, seems like I forgot to do a patch refresh... So the new version will
have an entry about adding VIRT_SSBD support to HVM guests. Sorry for
spoiling the surprise.

> > +bool __init amd_setup_legacy_ssbd(void)
> > +{
> > +	unsigned int i;
> > +
> > +	if ((boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
> > +	    boot_cpu_data.x86_num_siblings <= 1)
> > +		return true;
> > +
> > +	/*
> > +	 * One could be forgiven for thinking that c->x86_max_cores is the
> > +	 * correct value to use here.
> > +	 *
> > +	 * However, that value is derived from the current configuration, and
> > +	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
> > +	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
> > +	 */
> > +	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
> > +		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
> > +		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
> > +	}
> > +	if (!ssbd_max_cores)
> > +		return false;
> > +
> > +	ssbd_ls_cfg = xzalloc_array(struct ssbd_ls_cfg,
> > +	                            ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS);
> > +	if (!ssbd_ls_cfg)
> > +		return false;
> > +
> > +	for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
> > +		/* Record initial state, also applies to any hotplug CPU. */
> > +		if (opt_ssbd)
> > +			ssbd_ls_cfg[i].count = boot_cpu_data.x86_num_siblings;
> 
> Perhaps flip if() and for()?

Indeed, thanks.

> > +void amd_set_legacy_ssbd(bool enable)
> > +{
> > +	const struct cpuinfo_x86 *c = &current_cpu_data;
> > +	struct ssbd_ls_cfg *status;
> > +
> > +	if (c->x86 != 0x17 || c->x86_num_siblings <= 1) {
> > +		BUG_ON(!set_legacy_ssbd(c, enable));
> > +		return;
> > +	}
> > +
> > +	BUG_ON(c->phys_proc_id >= AMD_FAM17H_MAX_SOCKETS);
> > +	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
> > +	status = &ssbd_ls_cfg[c->phys_proc_id * ssbd_max_cores +
> > +	                      c->cpu_core_id];
> > +
> > +	/*
> > +	 * Open code a very simple spinlock: this function is used with GIF==0
> > +	 * and different IF values, so would trigger the checklock detector.
> > +	 * Instead of trying to workaround the detector, use a very simple lock
> > +	 * implementation: it's better to reduce the amount of code executed
> > +	 * with GIF==0.
> > +	 */
> > +	while ( test_and_set_bool(status->locked) )
> > +	    cpu_relax();
> > +	status->count += enable ? 1 : -1;
> > +	ASSERT(status->count <= c->x86_num_siblings);
> > +	if (enable ? status->count == 1 : !status->count)
> > +		BUG_ON(!set_legacy_ssbd(c, enable));
> 
> What are the effects of ASSERT() or BUG_ON() triggering in a GIF=0
> region?

So AFAICT the BUG itself works, the usage of a crash kernel however
won't work as it's booted with GIF==0.

Maybe we need to issue an stgi as part of BUG_FRAME if required?
(maybe that's too naive...)
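
For illustration, the idea might look like this (hypothetical helper
and placement, whether in BUG_FRAME or later in the crash path):

    /* Hypothetical: set GIF again before handing over to the crash
     * kernel, so a BUG hit in a GIF==0 region doesn't start it with
     * the global interrupt flag still clear. */
    static inline void svm_stgi(void)
    {
        asm volatile ( "stgi" ::: "memory" );
    }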

> > --- a/xen/arch/x86/cpuid.c
> > +++ b/xen/arch/x86/cpuid.c
> > @@ -544,6 +544,16 @@ static void __init calculate_hvm_max_policy(void)
> >      if ( !boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
> >          /* Clear VIRT_SSBD if VIRT_SPEC_CTRL is not exposed to guests. */
> >          __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> > +    else
> > +        /*
> > +         * Expose VIRT_SSBD if VIRT_SPEC_CTRL is supported, as that implies the
> > +         * underlying hardware is capable of setting SSBD using
> > +         * non-architectural way or VIRT_SSBD is available.
> > +         *
> > +         * Note that if the hardware supports VIRT_SSBD natively this setting
> > +         * will just override an already set bit.
> > +         */
> > +        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> 
> With the 's' annotation gone from the public header, is this last
> sentence of the comment actually true? Aiui code near the top of
> the function would have zapped the bit from hvm_featureset[].

This comment is now gone, and there are no changes to
calculate_hvm_max_policy in this patch anymore.

> > --- a/xen/arch/x86/hvm/svm/svm.c
> > +++ b/xen/arch/x86/hvm/svm/svm.c
> > @@ -3126,6 +3126,8 @@ void vmexit_virt_spec_ctrl(void)
> >  
> >      if ( cpu_has_virt_ssbd )
> >          wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
> > +    else
> > +         amd_set_legacy_ssbd(opt_ssbd);
> 
> Nit: Indentation is off by one here. Of course this alone could
> easily be adjusted while committing.
> 
> > @@ -3138,6 +3140,9 @@ void vmentry_virt_spec_ctrl(void)
> >  
> >      if ( cpu_has_virt_ssbd )
> >          wrmsr(MSR_VIRT_SPEC_CTRL, current->arch.msrs->virt_spec_ctrl.raw, 0);
> > +    else
> > +        amd_set_legacy_ssbd(current->arch.msrs->virt_spec_ctrl.raw &
> > +                            SPEC_CTRL_SSBD);
> 
> Would seem cheaper to use !val here (and then val for symmetry in
> the other function).

I could even use !opt_ssbd, and that would be more similar to what's
done in vmexit_virt_spec_ctrl?

Thanks, Roger.



* Re: [PATCH v4 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-04-29 15:49     ` Roger Pau Monné
@ 2022-04-29 16:11       ` Roger Pau Monné
  2022-05-02  6:13         ` Jan Beulich
  2022-05-02  6:11       ` Jan Beulich
  1 sibling, 1 reply; 11+ messages in thread
From: Roger Pau Monné @ 2022-04-29 16:11 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich, Andrew Cooper, Wei Liu, xen-devel

On Fri, Apr 29, 2022 at 05:49:42PM +0200, Roger Pau Monné wrote:
> On Fri, Apr 29, 2022 at 12:59:58PM +0200, Jan Beulich wrote:
> > On 27.04.2022 12:47, Roger Pau Monne wrote:
> > > Changes since v3:
> > >  - Align ssbd per-core struct to a cache line.
> > >  - Open code a simple spinlock to avoid playing tricks with the lock
> > >    detector.
> > >  - s/ssbd_core/ssbd_ls_cfg/.
> > >  - Fix log message wording.
> > >  - Fix define name and remove comment.
> > >  - Also handle Hygon processors (Fam18h).
> > >  - Add changelog entry.
> >
> > What is this last line about?
> 
> Hm, seems like I forgot to do a patch refresh... So new version will
> have an entry about adding VIRT_SSBD support to HVM guests. Sorry for
> spoiling the surprise.
> 
> > > +bool __init amd_setup_legacy_ssbd(void)
> > > +{
> > > +	unsigned int i;
> > > +
> > > +	if ((boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
> > > +	    boot_cpu_data.x86_num_siblings <= 1)
> > > +		return true;
> > > +
> > > +	/*
> > > +	 * One could be forgiven for thinking that c->x86_max_cores is the
> > > +	 * correct value to use here.
> > > +	 *
> > > +	 * However, that value is derived from the current configuration, and
> > > +	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
> > > +	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
> > > +	 */
> > > +	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
> > > +		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
> > > +		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
> > > +	}
> > > +	if (!ssbd_max_cores)
> > > +		return false;
> > > +
> > > +	ssbd_ls_cfg = xzalloc_array(struct ssbd_ls_cfg,
> > > +	                            ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS);
> > > +	if (!ssbd_ls_cfg)
> > > +		return false;
> > > +
> > > +	for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
> > > +		/* Record initial state, also applies to any hotplug CPU. */
> > > +		if (opt_ssbd)
> > > +			ssbd_ls_cfg[i].count = boot_cpu_data.x86_num_siblings;
> > 
> > Perhaps flip if() and for()?
> 
> Indeed, thanks.
> 
> > > +void amd_set_legacy_ssbd(bool enable)
> > > +{
> > > +	const struct cpuinfo_x86 *c = &current_cpu_data;
> > > +	struct ssbd_ls_cfg *status;
> > > +
> > > +	if (c->x86 != 0x17 || c->x86_num_siblings <= 1) {
> > > +		BUG_ON(!set_legacy_ssbd(c, enable));
> > > +		return;
> > > +	}
> > > +
> > > +	BUG_ON(c->phys_proc_id >= AMD_FAM17H_MAX_SOCKETS);
> > > +	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
> > > +	status = &ssbd_ls_cfg[c->phys_proc_id * ssbd_max_cores +
> > > +	                      c->cpu_core_id];
> > > +
> > > +	/*
> > > +	 * Open code a very simple spinlock: this function is used with GIF==0
> > > +	 * and different IF values, so would trigger the checklock detector.
> > > +	 * Instead of trying to workaround the detector, use a very simple lock
> > > +	 * implementation: it's better to reduce the amount of code executed
> > > +	 * with GIF==0.
> > > +	 */
> > > +	while ( test_and_set_bool(status->locked) )
> > > +	    cpu_relax();
> > > +	status->count += enable ? 1 : -1;
> > > +	ASSERT(status->count <= c->x86_num_siblings);
> > > +	if (enable ? status->count == 1 : !status->count)
> > > +		BUG_ON(!set_legacy_ssbd(c, enable));
> > 
> > What are the effects of ASSERT() or BUG_ON() triggering in a GIF=0
> > region?
> 
> So AFAICT the BUG itself works, the usage of a crash kernel however
> won't work as it's booted with GIF==0.
> 
> Maybe we need to issue an stgi as part of BUG_FRAME if required?
> (maybe that's too naive...)

Well, better in panic() or kexec_crash() likely.

Roger.



* Re: [PATCH v4 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-04-29 15:49     ` Roger Pau Monné
  2022-04-29 16:11       ` Roger Pau Monné
@ 2022-05-02  6:11       ` Jan Beulich
  1 sibling, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2022-05-02  6:11 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 29.04.2022 17:49, Roger Pau Monné wrote:
> On Fri, Apr 29, 2022 at 12:59:58PM +0200, Jan Beulich wrote:
>> On 27.04.2022 12:47, Roger Pau Monne wrote:
>>> @@ -3138,6 +3140,9 @@ void vmentry_virt_spec_ctrl(void)
>>>  
>>>      if ( cpu_has_virt_ssbd )
>>>          wrmsr(MSR_VIRT_SPEC_CTRL, current->arch.msrs->virt_spec_ctrl.raw, 0);
>>> +    else
>>> +        amd_set_legacy_ssbd(current->arch.msrs->virt_spec_ctrl.raw &
>>> +                            SPEC_CTRL_SSBD);
>>
>> Would seem cheaper to use !val here (and then val for symmetry in
>> the other function).
> 
> I could even use !opt_ssbd, and that would be more similar to what's
> done in vmexit_virt_spec_ctrl?

Might be an option, yet when using "val" there as well I don't see
benefit similarity-wise. Using a local variable would imo still be
cheaper than accessing a global one. But that's a matter of taste
to a fair degree, so I'll leave it to you - all I'd really prefer
to have is the functions using as similar / symmetric as possible
conditions.

Jan




* Re: [PATCH v4 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-04-29 16:11       ` Roger Pau Monné
@ 2022-05-02  6:13         ` Jan Beulich
  0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2022-05-02  6:13 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 29.04.2022 18:11, Roger Pau Monné wrote:
> On Fri, Apr 29, 2022 at 05:49:42PM +0200, Roger Pau Monné wrote:
>> On Fri, Apr 29, 2022 at 12:59:58PM +0200, Jan Beulich wrote:
>>> On 27.04.2022 12:47, Roger Pau Monne wrote:
>>>> +void amd_set_legacy_ssbd(bool enable)
>>>> +{
>>>> +	const struct cpuinfo_x86 *c = &current_cpu_data;
>>>> +	struct ssbd_ls_cfg *status;
>>>> +
>>>> +	if (c->x86 != 0x17 || c->x86_num_siblings <= 1) {
>>>> +		BUG_ON(!set_legacy_ssbd(c, enable));
>>>> +		return;
>>>> +	}
>>>> +
>>>> +	BUG_ON(c->phys_proc_id >= AMD_FAM17H_MAX_SOCKETS);
>>>> +	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
>>>> +	status = &ssbd_ls_cfg[c->phys_proc_id * ssbd_max_cores +
>>>> +	                      c->cpu_core_id];
>>>> +
>>>> +	/*
>>>> +	 * Open code a very simple spinlock: this function is used with GIF==0
>>>> +	 * and different IF values, so would trigger the checklock detector.
>>>> +	 * Instead of trying to workaround the detector, use a very simple lock
>>>> +	 * implementation: it's better to reduce the amount of code executed
>>>> +	 * with GIF==0.
>>>> +	 */
>>>> +	while ( test_and_set_bool(status->locked) )
>>>> +	    cpu_relax();
>>>> +	status->count += enable ? 1 : -1;
>>>> +	ASSERT(status->count <= c->x86_num_siblings);
>>>> +	if (enable ? status->count == 1 : !status->count)
>>>> +		BUG_ON(!set_legacy_ssbd(c, enable));
>>>
>>> What are the effects of ASSERT() or BUG_ON() triggering in a GIF=0
>>> region?
>>
>> So AFAICT the BUG itself works, the usage of a crash kernel however
>> won't work as it's booted with GIF==0.
>>
>> Maybe we need to issue an stgi as part of BUG_FRAME if required?
>> (maybe that's too naive...)
> 
> Well, better in panic() or kexec_crash() likely.

Yeah, lifting it too early may be detrimental.

Jan


