* [PATCH 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests
@ 2022-02-01 16:46 Roger Pau Monne
  2022-02-01 16:46 ` [PATCH 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Roger Pau Monne @ 2022-02-01 16:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monne, Andrew Cooper, George Dunlap, Jan Beulich,
	Julien Grall, Stefano Stabellini, Wei Liu

Hello,

The following series implements support for MSR_VIRT_SPEC_CTRL on
different AMD CPU families.

Note that support is added backwards, starting with the newer CPUs
that support MSR_SPEC_CTRL and moving to the older ones, which use
either MSR_VIRT_SPEC_CTRL or the SSBD bit in LS_CFG.

The first patch is quite clean, as it uses the shadow SPEC_CTRL value
in order to set the SSBD bit and have it context switched by Xen using
the recently added existing logic.

The next two patches introduce a different way to context switch SSBD
depending on the underlying SSBD support, using either VIRT_SPEC_CTRL
or the LS_CFG MSR. They also somewhat overload several spec_ctrl
variables in the hypervisor in order to store the status of SSBD even
when not using MSR_SPEC_CTRL itself. I've tried to document this in
the commit messages, but it could be controversial.

Thanks, Roger.

Roger Pau Monne (3):
  amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD

 docs/misc/xen-command-line.pandoc           |   5 +-
 xen/arch/x86/cpu/amd.c                      | 116 +++++++++++++++++---
 xen/arch/x86/cpuid.c                        |  18 +++
 xen/arch/x86/hvm/hvm.c                      |   1 +
 xen/arch/x86/hvm/svm/entry.S                |   8 +-
 xen/arch/x86/hvm/svm/svm.c                  |  67 +++++++++++
 xen/arch/x86/include/asm/amd.h              |   3 +
 xen/arch/x86/include/asm/cpufeatures.h      |   2 +
 xen/arch/x86/include/asm/msr.h              |   6 +-
 xen/arch/x86/msr.c                          |  15 +++
 xen/arch/x86/spec_ctrl.c                    |  11 +-
 xen/include/public/arch-x86/cpufeatureset.h |   2 +-
 12 files changed, 230 insertions(+), 24 deletions(-)

-- 
2.34.1



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  2022-02-01 16:46 [PATCH 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
@ 2022-02-01 16:46 ` Roger Pau Monne
  2022-02-14 15:07   ` Jan Beulich
  2022-02-01 16:46 ` [PATCH 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Roger Pau Monne @ 2022-02-01 16:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monne, Andrew Cooper, George Dunlap, Jan Beulich,
	Julien Grall, Stefano Stabellini, Wei Liu

Use the logic for setting shadow SPEC_CTRL values in order to
implement support for VIRT_SPEC_CTRL (signaled by the VIRT_SSBD CPUID
flag) for HVM guests. This includes using the spec_ctrl vCPU MSR
variable to store the guest-set value of VIRT_SPEC_CTRL.SSBD.

Note that VIRT_SSBD is only set in the HVM max CPUID policy, as the
default should be to expose SPEC_CTRL only and support VIRT_SPEC_CTRL
for migration compatibility.
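As a stand-alone illustration of the read/write semantics described
above (names and the storage variable are hypothetical stand-ins for
the Xen internals; SSBD is assumed to live at bit 2 in both the
SPEC_CTRL and VIRT_SPEC_CTRL layouts):

```c
#include <assert.h>
#include <stdint.h>

/* SSBD bit, shared between the SPEC_CTRL and VIRT_SPEC_CTRL layouts. */
#define SPEC_CTRL_SSBD (1u << 2)

/* Stand-in for the per-vCPU msrs->spec_ctrl.raw field. */
static uint64_t vcpu_spec_ctrl_raw;

/* Guest WRMSR: only the SSBD bit is retained, other bits are ignored. */
static void guest_wrmsr_virt_spec_ctrl(uint64_t val)
{
    vcpu_spec_ctrl_raw = val & SPEC_CTRL_SSBD;
}

/* Guest RDMSR: reads back only the SSBD bit. */
static uint64_t guest_rdmsr_virt_spec_ctrl(void)
{
    return vcpu_spec_ctrl_raw & SPEC_CTRL_SSBD;
}
```

The masking mirrors the guest_{rd,wr}msr hunks below, where all bits
other than SSBD are silently dropped on write.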

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 docs/misc/xen-command-line.pandoc           |  5 +++--
 xen/arch/x86/cpuid.c                        |  7 +++++++
 xen/arch/x86/hvm/hvm.c                      |  1 +
 xen/arch/x86/include/asm/msr.h              |  6 +++++-
 xen/arch/x86/msr.c                          | 15 +++++++++++++++
 xen/arch/x86/spec_ctrl.c                    |  3 ++-
 xen/include/public/arch-x86/cpufeatureset.h |  2 +-
 7 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 6b3da6ddc1..081e10f80b 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2273,8 +2273,9 @@ to use.
 * `pv=` and `hvm=` offer control over all suboptions for PV and HVM guests
   respectively.
 * `msr-sc=` offers control over Xen's support for manipulating `MSR_SPEC_CTRL`
-  on entry and exit.  These blocks are necessary to virtualise support for
-  guests and if disabled, guests will be unable to use IBRS/STIBP/SSBD/etc.
+  and/or `MSR_VIRT_SPEC_CTRL` on entry and exit.  These blocks are necessary to
+  virtualise support for guests and if disabled, guests will be unable to use
+  IBRS/STIBP/SSBD/etc.
 * `rsb=` offers control over whether to overwrite the Return Stack Buffer /
   Return Address Stack on entry to Xen.
 * `md-clear=` offers control over whether to use VERW to flush
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index e24dd283e7..29b4cfc9e6 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -543,6 +543,13 @@ static void __init calculate_hvm_max_policy(void)
         __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
         __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
     }
+    else
+        /*
+         * If SPEC_CTRL is available VIRT_SPEC_CTRL can also be implemented as
+         * it's a subset of the controls exposed in SPEC_CTRL (SSBD only).
+         * Expose in the max policy for compatibility migration.
+         */
+        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     /*
      * With VT-x, some features are only supported by Xen if dedicated
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c4ddb8607d..3400c9299c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1332,6 +1332,7 @@ static const uint32_t msrs_to_send[] = {
     MSR_INTEL_MISC_FEATURES_ENABLES,
     MSR_IA32_BNDCFGS,
     MSR_IA32_XSS,
+    MSR_VIRT_SPEC_CTRL,
     MSR_AMD64_DR0_ADDRESS_MASK,
     MSR_AMD64_DR1_ADDRESS_MASK,
     MSR_AMD64_DR2_ADDRESS_MASK,
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index ce4fe51afe..98f6b79e09 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -291,6 +291,7 @@ struct vcpu_msrs
 {
     /*
      * 0x00000048 - MSR_SPEC_CTRL
+     * 0xc001011f - MSR_VIRT_SPEC_CTRL
      *
      * For PV guests, this holds the guest kernel value.  It is accessed on
      * every entry/exit path.
@@ -301,7 +302,10 @@ struct vcpu_msrs
      * For SVM, the guest value lives in the VMCB, and hardware saves/restores
      * the host value automatically.  However, guests run with the OR of the
      * host and guest value, which allows Xen to set protections behind the
-     * guest's back.
+     * guest's back.  Use such functionality in order to implement support for
+     * VIRT_SPEC_CTRL as a shadow value of SPEC_CTRL and thus store the value
+     * of VIRT_SPEC_CTRL in this field, taking advantage of both MSRs having
+     * compatible layouts.
      *
      * We must clear/restore Xen's value before/after VMRUN to avoid unduly
      * influencing the guest.  In order to support "behind the guest's back"
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 4ac5b5a048..aa74cfde6c 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -381,6 +381,13 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
                ? K8_HWCR_TSC_FREQ_SEL : 0;
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        if ( !cp->extd.virt_ssbd )
+            goto gp_fault;
+
+        *val = msrs->spec_ctrl.raw & SPEC_CTRL_SSBD;
+        break;
+
     case MSR_AMD64_DE_CFG:
         if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
             goto gp_fault;
@@ -666,6 +673,14 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             wrmsr_tsc_aux(val);
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        if ( !cp->extd.virt_ssbd )
+            goto gp_fault;
+
+        /* Only the SSBD bit is supported; the rest are ignored. */
+        msrs->spec_ctrl.raw = val & SPEC_CTRL_SSBD;
+        break;
+
     case MSR_AMD64_DE_CFG:
         /*
          * OpenBSD 6.7 will panic if writing to DE_CFG triggers a #GP:
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index ee862089b7..64b154b2d3 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -395,12 +395,13 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
      * mitigation support for guests.
      */
 #ifdef CONFIG_HVM
-    printk("  Support for HVM VMs:%s%s%s%s%s\n",
+    printk("  Support for HVM VMs:%s%s%s%s%s%s\n",
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
             opt_eager_fpu)                           ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
+           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_VIRT_SPEC_CTRL" : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            boot_cpu_has(X86_FEATURE_MD_CLEAR)        ? " MD_CLEAR"      : "");
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 957df23b65..b9ab878ec1 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -265,7 +265,7 @@ XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S  IBRS provides same-mode protection
 XEN_CPUFEATURE(NO_LMSL,       8*32+20) /*S  EFER.LMSLE no longer supported. */
 XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
 XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*S  MSR_SPEC_CTRL.SSBD available */
-XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*   MSR_VIRT_SPEC_CTRL.SSBD */
+XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!s MSR_VIRT_SPEC_CTRL.SSBD */
 XEN_CPUFEATURE(SSB_NO,        8*32+26) /*A  Hardware not vulnerable to SSB */
 XEN_CPUFEATURE(PSFD,          8*32+28) /*S  MSR_SPEC_CTRL.PSFD */
 
-- 
2.34.1




* [PATCH 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  2022-02-01 16:46 [PATCH 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
  2022-02-01 16:46 ` [PATCH 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
@ 2022-02-01 16:46 ` Roger Pau Monne
  2022-02-14 16:02   ` Jan Beulich
  2022-02-01 16:46 ` [PATCH 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD Roger Pau Monne
  2022-02-14 20:46 ` [PATCH 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Andrew Cooper
  3 siblings, 1 reply; 16+ messages in thread
From: Roger Pau Monne @ 2022-02-01 16:46 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
hardware has support for it. This requires adding logic in the
vm{entry,exit} paths for SVM in order to context switch between the
hypervisor value and the guest one. The added handlers for context
switch will also be used for the legacy SSBD support.

Note that the implementation relies on storing the guest value in the
per-vCPU spec_ctrl MSR variable, as the use of VIRT_SPEC_CTRL
precludes the use of SPEC_CTRL. Also store the current and hypervisor
values of VIRT_SPEC_CTRL in the per-pCPU spec_ctrl fields in cpu_info
in order to properly context switch the values between Xen and HVM
guests.
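The lazy-write pattern used by the added vmentry handler (skip the MSR
write when the cached last-written value already matches) can be
sketched stand-alone; the MSR and cache here are simulated variables,
not the real Xen fields:

```c
#include <assert.h>
#include <stdint.h>

#define SPEC_CTRL_SSBD (1u << 2)

/* Simulated hardware MSR and a write counter (stand-ins, not real MSRs). */
static uint32_t hw_virt_spec_ctrl;
static unsigned int msr_writes;

static void wrmsr_virt_spec_ctrl(uint32_t val)
{
    hw_virt_spec_ctrl = val;
    msr_writes++;
}

/* Stand-in for the per-pCPU cpu_info->last_spec_ctrl cache. */
static uint32_t last_spec_ctrl;

/* vmentry path: load the guest value, skipping the write if unchanged. */
static void vmentry_virt_spec_ctrl_sketch(uint32_t guest_val)
{
    if ( guest_val != last_spec_ctrl )
    {
        wrmsr_virt_spec_ctrl(guest_val);
        last_spec_ctrl = guest_val;
    }
}
```

Caching the last written value avoids a WRMSR on every VM entry when
neither Xen nor the guest has changed its SSBD selection.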

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/amd.c                 |  7 +++-
 xen/arch/x86/cpuid.c                   | 11 ++++++
 xen/arch/x86/hvm/svm/entry.S           |  8 +++-
 xen/arch/x86/hvm/svm/svm.c             | 55 ++++++++++++++++++++++++++
 xen/arch/x86/include/asm/cpufeatures.h |  1 +
 xen/arch/x86/spec_ctrl.c               |  8 +++-
 6 files changed, 86 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index a8e37dbb1f..c3fcc0e558 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -687,6 +687,7 @@ void amd_init_lfence(struct cpuinfo_x86 *c)
  */
 void amd_init_ssbd(const struct cpuinfo_x86 *c)
 {
+	struct cpu_info *info = get_cpu_info();
 	int bit = -1;
 
 	if (cpu_has_ssb_no)
@@ -699,7 +700,7 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
 
 	if (cpu_has_virt_ssbd) {
 		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
-		return;
+		goto out;
 	}
 
 	switch (c->x86) {
@@ -729,6 +730,10 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
 
 	if (bit < 0)
 		printk_once(XENLOG_ERR "No SSBD controls available\n");
+
+ out:
+	info->last_spec_ctrl = info->xen_spec_ctrl = opt_ssbd ? SPEC_CTRL_SSBD
+							      : 0;
 }
 
 void __init detect_zen2_null_seg_behaviour(void)
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 29b4cfc9e6..7b10fbf12f 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -551,6 +551,9 @@ static void __init calculate_hvm_max_policy(void)
          */
         __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
+    if ( boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+
     /*
      * With VT-x, some features are only supported by Xen if dedicated
      * hardware support is also available.
@@ -590,6 +593,14 @@ static void __init calculate_hvm_def_policy(void)
     guest_common_feature_adjustments(hvm_featureset);
     guest_common_default_feature_adjustments(hvm_featureset);
 
+    /*
+     * Only expose VIRT_SPEC_CTRL support by default if SPEC_CTRL is not
+     * supported.
+     */
+    if ( boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) &&
+         !boot_cpu_has(X86_FEATURE_SC_MSR_HVM) )
+        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+
     sanitise_featureset(hvm_featureset);
     cpuid_featureset_to_policy(hvm_featureset, p);
     recalculate_xstate(p);
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index 4ae55a2ef6..2a0c41625b 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -71,7 +71,9 @@ __UNLIKELY_END(nsvm_hap)
             mov    %al, CPUINFO_last_spec_ctrl(%rsp)
 1:          /* No Spectre v1 concerns.  Execution will hit VMRUN imminently. */
         .endm
-        ALTERNATIVE "", svm_vmentry_spec_ctrl, X86_FEATURE_SC_MSR_HVM
+        ALTERNATIVE_2 "", STR(call vmentry_virt_spec_ctrl), \
+                          X86_FEATURE_VIRT_SC_MSR_HVM, \
+                      svm_vmentry_spec_ctrl, X86_FEATURE_SC_MSR_HVM
 
         pop  %r15
         pop  %r14
@@ -111,7 +113,9 @@ __UNLIKELY_END(nsvm_hap)
             wrmsr
             mov    %al, CPUINFO_last_spec_ctrl(%rsp)
         .endm
-        ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
+        ALTERNATIVE_2 "", STR(call vmexit_virt_spec_ctrl), \
+                          X86_FEATURE_VIRT_SC_MSR_HVM, \
+                      svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
         stgi
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index c4ce3f75ab..56c7b30b32 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -610,6 +610,14 @@ static void svm_cpuid_policy_changed(struct vcpu *v)
     svm_intercept_msr(v, MSR_SPEC_CTRL,
                       cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
 
+    /*
+     * Give access to MSR_VIRT_SPEC_CTRL if the guest has been told about it
+     * and the hardware implements it.
+     */
+    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
+                      cp->extd.virt_ssbd && cpu_has_virt_ssbd ?
+                      MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
     svm_intercept_msr(v, MSR_PRED_CMD,
                       cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
@@ -3099,6 +3107,53 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
     vmcb_set_vintr(vmcb, intr);
 }
 
+/* Called with GIF=0. */
+void vmexit_virt_spec_ctrl(void)
+{
+    struct cpu_info *info = get_cpu_info();
+    unsigned int val = info->xen_spec_ctrl;
+
+    /*
+     * On AMD we will never use MSR_SPEC_CTRL together with MSR_VIRT_SPEC_CTRL
+     * or any legacy way of setting SSBD, so reuse the spec_ctrl fields in
+     * cpu_info for context switching the other means of setting SSBD.
+     */
+    ASSERT(!boot_cpu_has(X86_FEATURE_SC_MSR_HVM));
+    if ( cpu_has_virt_ssbd )
+    {
+        unsigned int lo, hi;
+        struct vcpu *curr = current;
+
+        /*
+         * Need to read from the hardware because VIRT_SPEC_CTRL is not context
+         * switched by the hardware, and we allow the guest untrapped access to
+         * the register.
+         */
+        rdmsr(MSR_VIRT_SPEC_CTRL, lo, hi);
+        if ( val != lo )
+            wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+        curr->arch.msrs->spec_ctrl.raw = lo;
+        info->last_spec_ctrl = val;
+    }
+}
+
+/* Called with GIF=0. */
+void vmentry_virt_spec_ctrl(void)
+{
+    struct cpu_info *info = get_cpu_info();
+    const struct vcpu *curr = current;
+    unsigned int val = curr->arch.msrs->spec_ctrl.raw;
+
+    ASSERT(!boot_cpu_has(X86_FEATURE_SC_MSR_HVM));
+    if ( val != info->last_spec_ctrl )
+    {
+        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+        info->last_spec_ctrl = val;
+    }
+
+    /* No Spectre v1 concerns.  Execution is going to hit VMRUN imminently. */
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index b10154fc44..a2c37bfdd4 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -39,6 +39,7 @@ XEN_CPUFEATURE(SC_VERW_PV,        X86_SYNTH(23)) /* VERW used by Xen for PV */
 XEN_CPUFEATURE(SC_VERW_HVM,       X86_SYNTH(24)) /* VERW used by Xen for HVM */
 XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
 XEN_CPUFEATURE(XEN_SHSTK,         X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
+XEN_CPUFEATURE(VIRT_SC_MSR_HVM,   X86_SYNTH(27)) /* MSR_VIRT_SPEC_CTRL exposed to HVM */
 
 /* Bug words follow the synthetic words. */
 #define X86_NR_BUG 1
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 64b154b2d3..2c46e1485f 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -399,9 +399,12 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
+            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) ||
             opt_eager_fpu)                           ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
-           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_VIRT_SPEC_CTRL" : "",
+           (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
+            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM)) ? " MSR_VIRT_SPEC_CTRL"
+                                                       : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            boot_cpu_has(X86_FEATURE_MD_CLEAR)        ? " MD_CLEAR"      : "");
@@ -1053,6 +1056,9 @@ void __init init_speculation_mitigations(void)
             setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
     }
 
+    if ( opt_msr_sc_hvm && cpu_has_virt_ssbd )
+        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
+
     /* If we have IBRS available, see whether we should use it. */
     if ( has_spec_ctrl && ibrs )
         default_xen_spec_ctrl |= SPEC_CTRL_IBRS;
-- 
2.34.1




* [PATCH 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-02-01 16:46 [PATCH 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
  2022-02-01 16:46 ` [PATCH 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
  2022-02-01 16:46 ` [PATCH 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
@ 2022-02-01 16:46 ` Roger Pau Monne
  2022-02-14 16:44   ` Jan Beulich
  2022-02-14 20:46 ` [PATCH 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Andrew Cooper
  3 siblings, 1 reply; 16+ messages in thread
From: Roger Pau Monne @ 2022-02-01 16:46 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Expose VIRT_SPEC_CTRL to guests if the hardware supports setting SSBD
in the LS_CFG MSR. Different AMD CPU families use different bits in
LS_CFG, so exposing VIRT_SPEC_CTRL.SSBD allows for a unified,
migration-compatible way of setting SSBD on AMD hardware, regardless
of which underlying mechanism is used to set SSBD.

Note that on AMD Family 17h (Zen 1) the value of SSBD in LS_CFG is
shared between threads on the same core, so extra logic is needed to
synchronize the value and keep SSBD set as long as at least one of the
threads in the core requires it. This logic also requires extra
per-thread storage, which is allocated at initialization time.

Do the context switching of the SSBD selection in LS_CFG between
hypervisor and guest in the same handler that's already used to switch
the value of VIRT_SPEC_CTRL in the hardware when supported.
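The per-core reference counting this describes (write LS_CFG only on
the 0-to-1 and 1-to-0 transitions of the count of siblings wanting
SSBD) can be sketched stand-alone; the spinlock and real MSR write are
replaced here with simulated state:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified per-core state; the real code pairs this with a spinlock. */
struct ssbd_core_sketch {
    unsigned int count;     /* sibling threads currently wanting SSBD */
};

static struct ssbd_core_sketch core0;

static unsigned int ls_cfg_writes;  /* stand-in for LS_CFG MSR updates */
static bool ls_cfg_ssbd;            /* stand-in for the SSBD bit in LS_CFG */

/* SSBD stays set in LS_CFG while at least one sibling thread wants it. */
static void core_set_ssbd(struct ssbd_core_sketch *core, bool enable)
{
    core->count += enable ? 1 : -1;
    if ( (enable && core->count == 1) || (!enable && core->count == 0) )
    {
        ls_cfg_ssbd = enable;
        ls_cfg_writes++;
    }
}
```

Only the first enable and the last disable on a core touch the
(shared) MSR; intermediate transitions just adjust the count.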

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/amd.c                 | 113 +++++++++++++++++++++----
 xen/arch/x86/hvm/svm/svm.c             |  14 ++-
 xen/arch/x86/include/asm/amd.h         |   3 +
 xen/arch/x86/include/asm/cpufeatures.h |   1 +
 xen/arch/x86/spec_ctrl.c               |   4 +-
 5 files changed, 115 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index c3fcc0e558..7318623874 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -685,24 +685,10 @@ void amd_init_lfence(struct cpuinfo_x86 *c)
  * Refer to the AMD Speculative Store Bypass whitepaper:
  * https://developer.amd.com/wp-content/resources/124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
  */
-void amd_init_ssbd(const struct cpuinfo_x86 *c)
+static bool set_legacy_ssbd(const struct cpuinfo_x86 *c, bool enable)
 {
-	struct cpu_info *info = get_cpu_info();
 	int bit = -1;
 
-	if (cpu_has_ssb_no)
-		return;
-
-	if (cpu_has_amd_ssbd) {
-		/* Handled by common MSR_SPEC_CTRL logic */
-		return;
-	}
-
-	if (cpu_has_virt_ssbd) {
-		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
-		goto out;
-	}
-
 	switch (c->x86) {
 	case 0x15: bit = 54; break;
 	case 0x16: bit = 33; break;
@@ -716,26 +702,117 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
 		if (rdmsr_safe(MSR_AMD64_LS_CFG, val) ||
 		    ({
 			    val &= ~mask;
-			    if (opt_ssbd)
+			    if (enable)
 				    val |= mask;
 			    false;
 		    }) ||
 		    wrmsr_safe(MSR_AMD64_LS_CFG, val) ||
 		    ({
 			    rdmsrl(MSR_AMD64_LS_CFG, val);
-			    (val & mask) != (opt_ssbd * mask);
+			    (val & mask) != (enable * mask);
 		    }))
 			bit = -1;
 	}
 
-	if (bit < 0)
+	return bit >= 0;
+}
+
+void amd_init_ssbd(const struct cpuinfo_x86 *c)
+{
+	struct cpu_info *info = get_cpu_info();
+
+	if (cpu_has_ssb_no)
+		return;
+
+	if (cpu_has_amd_ssbd) {
+		/* Handled by common MSR_SPEC_CTRL logic */
+		return;
+	}
+
+	if (cpu_has_virt_ssbd) {
+		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
+		goto out;
+	}
+
+	if (!set_legacy_ssbd(c, opt_ssbd)) {
 		printk_once(XENLOG_ERR "No SSBD controls available\n");
+		return;
+	}
+
+	if (!smp_processor_id())
+		setup_force_cpu_cap(X86_FEATURE_LEGACY_SSBD);
 
  out:
 	info->last_spec_ctrl = info->xen_spec_ctrl = opt_ssbd ? SPEC_CTRL_SSBD
 							      : 0;
 }
 
+static struct ssbd_core {
+    spinlock_t lock;
+    unsigned int count;
+} *ssbd_core;
+static unsigned int __read_mostly ssbd_max_cores;
+
+bool __init amd_setup_legacy_ssbd(void)
+{
+	unsigned int i;
+
+	if (boot_cpu_data.x86 != 0x17 || boot_cpu_data.x86_num_siblings == 1)
+		return true;
+
+	/*
+	 * One could be forgiven for thinking that c->x86_max_cores is the
+	 * correct value to use here.
+	 *
+	 * However, that value is derived from the current configuration, and
+	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
+	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
+	 */
+	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
+		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
+		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;
+	}
+	if (!ssbd_max_cores)
+		return false;
+
+	/* Max is two sockets for Fam17h hardware. */
+	ssbd_core = xzalloc_array(struct ssbd_core, ssbd_max_cores * 2);
+	if (!ssbd_core)
+		return false;
+
+	for (i = 0; i < ssbd_max_cores * 2; i++) {
+		spin_lock_init(&ssbd_core[i].lock);
+		/* Record the current state. */
+		ssbd_core[i].count = opt_ssbd ?
+				     boot_cpu_data.x86_num_siblings : 0;
+	}
+
+	return true;
+}
+
+void amd_set_legacy_ssbd(bool enable)
+{
+	const struct cpuinfo_x86 *c = &current_cpu_data;
+	struct ssbd_core *core;
+	unsigned long flags;
+
+	if (c->x86 != 0x17 || c->x86_num_siblings == 1) {
+		set_legacy_ssbd(c, enable);
+		return;
+	}
+
+	ASSERT(c->phys_proc_id < 2);
+	ASSERT(c->cpu_core_id < ssbd_max_cores);
+	core = &ssbd_core[c->phys_proc_id * ssbd_max_cores + c->cpu_core_id];
+	spin_lock_irqsave(&core->lock, flags);
+	core->count += enable ? 1 : -1;
+	ASSERT(core->count <= c->x86_num_siblings);
+	if ((enable  && core->count == 1) ||
+	    (!enable && core->count == 0))
+		BUG_ON(!set_legacy_ssbd(c, enable));
+	spin_unlock_irqrestore(&core->lock, flags);
+}
+
 void __init detect_zen2_null_seg_behaviour(void)
 {
 	uint64_t base;
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 56c7b30b32..10a5a77ad7 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -3134,6 +3134,15 @@ void vmexit_virt_spec_ctrl(void)
             wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
         curr->arch.msrs->spec_ctrl.raw = lo;
         info->last_spec_ctrl = val;
+
+        return;
+    }
+
+    ASSERT(boot_cpu_has(X86_FEATURE_LEGACY_SSBD));
+    if ( val != info->last_spec_ctrl )
+    {
+        amd_set_legacy_ssbd(val & SPEC_CTRL_SSBD);
+        info->last_spec_ctrl = val;
     }
 }
 
@@ -3147,7 +3156,10 @@ void vmentry_virt_spec_ctrl(void)
     ASSERT(!boot_cpu_has(X86_FEATURE_SC_MSR_HVM));
     if ( val != info->last_spec_ctrl )
     {
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+        if ( boot_cpu_has(X86_FEATURE_LEGACY_SSBD) )
+            amd_set_legacy_ssbd(val & SPEC_CTRL_SSBD);
+        else
+            wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
         info->last_spec_ctrl = val;
     }
 
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index a82382e6bf..823e2f3bd2 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -151,4 +151,7 @@ void check_enable_amd_mmconf_dmi(void);
 extern bool amd_acpi_c1e_quirk;
 void amd_check_disable_c1e(unsigned int port, u8 value);
 
+bool amd_setup_legacy_ssbd(void);
+void amd_set_legacy_ssbd(bool enable);
+
 #endif /* __AMD_H__ */
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index a2c37bfdd4..f12d423fe9 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -40,6 +40,7 @@ XEN_CPUFEATURE(SC_VERW_HVM,       X86_SYNTH(24)) /* VERW used by Xen for HVM */
 XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
 XEN_CPUFEATURE(XEN_SHSTK,         X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
 XEN_CPUFEATURE(VIRT_SC_MSR_HVM,   X86_SYNTH(27)) /* MSR_VIRT_SPEC_CTRL exposed to HVM */
+XEN_CPUFEATURE(LEGACY_SSBD,       X86_SYNTH(28)) /* LS_CFG available for SSBD */
 
 /* Bug words follow the synthetic words. */
 #define X86_NR_BUG 1
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 2c46e1485f..c7f8ec29f4 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -22,6 +22,7 @@
 #include <xen/param.h>
 #include <xen/warning.h>
 
+#include <asm/amd.h>
 #include <asm/hvm/svm/svm.h>
 #include <asm/microcode.h>
 #include <asm/msr.h>
@@ -1056,7 +1057,8 @@ void __init init_speculation_mitigations(void)
             setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
     }
 
-    if ( opt_msr_sc_hvm && cpu_has_virt_ssbd )
+    if ( opt_msr_sc_hvm && (cpu_has_virt_ssbd ||
+         (boot_cpu_has(X86_FEATURE_LEGACY_SSBD) && amd_setup_legacy_ssbd())) )
         setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
 
     /* If we have IBRS available, see whether we should use it. */
-- 
2.34.1




* Re: [PATCH 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  2022-02-01 16:46 ` [PATCH 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
@ 2022-02-14 15:07   ` Jan Beulich
  2022-03-09 15:03     ` Roger Pau Monné
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2022-02-14 15:07 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini,
	Wei Liu, xen-devel

On 01.02.2022 17:46, Roger Pau Monne wrote:
> Use the logic to set shadow SPEC_CTRL values in order to implement
> support for VIRT_SPEC_CTRL (signaled by VIRT_SSBD CPUID flag) for HVM
> guests. This includes using the spec_ctrl vCPU MSR variable to store
> the guest set value of VIRT_SPEC_CTRL.SSBD.

This leverages the guest running on the OR of host and guest values,
aiui. If so, this could do with spelling out.
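(For reference, the SVM behaviour being leveraged here — the guest
running on the OR of the host value and the guest's VMCB value — can
be sketched as a stand-alone snippet; names are illustrative only:)

```c
#include <assert.h>
#include <stdint.h>

#define SPEC_CTRL_SSBD (1u << 2)

/*
 * On SVM the CPU runs the guest with host | guest(VMCB) SPEC_CTRL,
 * which is what lets Xen set protections behind the guest's back.
 */
static uint32_t effective_spec_ctrl(uint32_t host, uint32_t guest_vmcb)
{
    return host | guest_vmcb;
}
```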

> Note that VIRT_SSBD is only set in the HVM max CPUID policy, as the
> default should be to expose SPEC_CTRL only and support VIRT_SPEC_CTRL
> for migration compatibility.

I'm afraid I don't understand this last statement: How would this be
about migration compatibility? No guest so far can use VIRT_SPEC_CTRL,
and a future guest using it is unlikely to be able to cope with the
MSR "disappearing" during migration.

> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -2273,8 +2273,9 @@ to use.
>  * `pv=` and `hvm=` offer control over all suboptions for PV and HVM guests
>    respectively.
>  * `msr-sc=` offers control over Xen's support for manipulating `MSR_SPEC_CTRL`
> -  on entry and exit.  These blocks are necessary to virtualise support for
> -  guests and if disabled, guests will be unable to use IBRS/STIBP/SSBD/etc.
> +  and/or `MSR_VIRT_SPEC_CTRL` on entry and exit.  These blocks are necessary to

Why would Xen be manipulating an MSR it only brings into existence for its
guests?

> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -543,6 +543,13 @@ static void __init calculate_hvm_max_policy(void)
>          __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
>          __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
>      }
> +    else
> +        /*
> +         * If SPEC_CTRL is available VIRT_SPEC_CTRL can also be implemented as
> +         * it's a subset of the controls exposed in SPEC_CTRL (SSBD only).
> +         * Expose in the max policy for compatibility migration.
> +         */
> +        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);

This means even Intel guests can use the feature then? I thought it was
meanwhile deemed bad to offer such cross-vendor features?

Additionally, is SPEC_CTRL (i.e. IBRS) availability enough? Don't you
need AMD_SSBD as a prereq (which may want expressing in gen-cpuid.py)?

> --- a/xen/arch/x86/include/asm/msr.h
> +++ b/xen/arch/x86/include/asm/msr.h
> @@ -291,6 +291,7 @@ struct vcpu_msrs
>  {
>      /*
>       * 0x00000048 - MSR_SPEC_CTRL
> +     * 0xc001011f - MSR_VIRT_SPEC_CTRL
>       *
>       * For PV guests, this holds the guest kernel value.  It is accessed on
>       * every entry/exit path.
> @@ -301,7 +302,10 @@ struct vcpu_msrs
>       * For SVM, the guest value lives in the VMCB, and hardware saves/restores
>       * the host value automatically.  However, guests run with the OR of the
>       * host and guest value, which allows Xen to set protections behind the
> -     * guest's back.
> +     * guest's back.  Use such functionality in order to implement support for
> +     * VIRT_SPEC_CTRL as a shadow value of SPEC_CTRL and thus store the value
> +     * of VIRT_SPEC_CTRL in this field, taking advantage of both MSRs having
> +     * compatible layouts.

I guess "shadow value" means more like an alternative value, but
(see above) this is about setting for now just one bit behind the
guest's back.

> --- a/xen/arch/x86/spec_ctrl.c
> +++ b/xen/arch/x86/spec_ctrl.c
> @@ -395,12 +395,13 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
>       * mitigation support for guests.
>       */
>  #ifdef CONFIG_HVM
> -    printk("  Support for HVM VMs:%s%s%s%s%s\n",
> +    printk("  Support for HVM VMs:%s%s%s%s%s%s\n",
>             (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
>              boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
>              boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
>              opt_eager_fpu)                           ? ""               : " None",
>             boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
> +           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_VIRT_SPEC_CTRL" : "",
>             boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
>             opt_eager_fpu                             ? " EAGER_FPU"     : "",
>             boot_cpu_has(X86_FEATURE_MD_CLEAR)        ? " MD_CLEAR"      : "");

The output is getting longish; can the two SC_MSR_HVM-dependent items
perhaps be folded, e.g. by making it "MSR_{,VIRT_}SPEC_CTRL"?
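[Editorial note: the suggested folding could look like the following standalone sketch, with stand-in predicates in place of boot_cpu_has()/opt_*; this is illustrative only, not the actual Xen code:]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stand-ins for the boot_cpu_has()/opt_* predicates used in Xen. */
static int sc_msr_hvm = 1, sc_rsb_hvm = 1, md_clear = 0, eager_fpu = 0;

/*
 * Build the "Support for HVM VMs:" line with the two SC_MSR_HVM
 * dependent items folded into a single " MSR_{,VIRT_}SPEC_CTRL"
 * entry, as suggested above.
 */
static void fmt_hvm_support(char *buf, size_t n)
{
    snprintf(buf, n, "  Support for HVM VMs:%s%s%s%s%s\n",
             (sc_msr_hvm || sc_rsb_hvm || md_clear || eager_fpu)
                        ? ""                       : " None",
             sc_msr_hvm ? " MSR_{,VIRT_}SPEC_CTRL" : "",
             sc_rsb_hvm ? " RSB"                   : "",
             eager_fpu  ? " EAGER_FPU"             : "",
             md_clear   ? " MD_CLEAR"              : "");
}
```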

> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -265,7 +265,7 @@ XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S  IBRS provides same-mode protection
>  XEN_CPUFEATURE(NO_LMSL,       8*32+20) /*S  EFER.LMSLE no longer supported. */
>  XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
>  XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*S  MSR_SPEC_CTRL.SSBD available */
> -XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*   MSR_VIRT_SPEC_CTRL.SSBD */
> +XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!s MSR_VIRT_SPEC_CTRL.SSBD */

What is the ! intended to cover here? From guest perspective the
MSR acts entirely normally afaict.

Jan



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  2022-02-01 16:46 ` [PATCH 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
@ 2022-02-14 16:02   ` Jan Beulich
  2022-03-10 16:41     ` Roger Pau Monné
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2022-02-14 16:02 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 01.02.2022 17:46, Roger Pau Monne wrote:
> Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
> hardware has support for it. This requires adding logic in the
> vm{entry,exit} paths for SVM in order to context switch between the
> hypervisor value and the guest one. The added handlers for context
> switch will also be used for the legacy SSBD support.

So by "hardware" you mean virtual hardware here, when we run
virtualized ourselves? While the wording in AMD's whitepaper suggests
hardware could exist with both MSRs implemented, so far it was my
understanding that VIRT_SPEC_CTRL was rather left for hypervisors to
implement. Maybe I'm wrong with this, in which case some of the
further comments may also be wrong.

> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -687,6 +687,7 @@ void amd_init_lfence(struct cpuinfo_x86 *c)
>   */
>  void amd_init_ssbd(const struct cpuinfo_x86 *c)
>  {
> +	struct cpu_info *info = get_cpu_info();
>  	int bit = -1;
>  
>  	if (cpu_has_ssb_no)
> @@ -699,7 +700,7 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
>  
>  	if (cpu_has_virt_ssbd) {
>  		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
> -		return;
> +		goto out;
>  	}
>  
>  	switch (c->x86) {
> @@ -729,6 +730,10 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
>  
>  	if (bit < 0)
>  		printk_once(XENLOG_ERR "No SSBD controls available\n");
> +
> + out:
> +	info->last_spec_ctrl = info->xen_spec_ctrl = opt_ssbd ? SPEC_CTRL_SSBD
> +							      : 0;
>  }

Besides me being uncertain about the placement of these (preferably
the writes would be where the other similar writes are), this re-use
of the values suggests that you mean to prefer VIRT_SPEC_CTRL use
over that of SPEC_CTRL (see below).

Additionally - the value you store isn't necessarily the value you
wrote to the MSR. It only is if you came here via the "goto out".

> --- a/xen/arch/x86/hvm/svm/entry.S
> +++ b/xen/arch/x86/hvm/svm/entry.S
> @@ -71,7 +71,9 @@ __UNLIKELY_END(nsvm_hap)
>              mov    %al, CPUINFO_last_spec_ctrl(%rsp)
>  1:          /* No Spectre v1 concerns.  Execution will hit VMRUN imminently. */
>          .endm
> -        ALTERNATIVE "", svm_vmentry_spec_ctrl, X86_FEATURE_SC_MSR_HVM
> +        ALTERNATIVE_2 "", STR(call vmentry_virt_spec_ctrl), \

I'm afraid this violates the "ret" part of the warning a few lines up,
while ...

> +                          X86_FEATURE_VIRT_SC_MSR_HVM, \
> +                      svm_vmentry_spec_ctrl, X86_FEATURE_SC_MSR_HVM
>  
>          pop  %r15
>          pop  %r14
> @@ -111,7 +113,9 @@ __UNLIKELY_END(nsvm_hap)
>              wrmsr
>              mov    %al, CPUINFO_last_spec_ctrl(%rsp)
>          .endm
> -        ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
> +        ALTERNATIVE_2 "", STR(call vmexit_virt_spec_ctrl), \

... this violates ...

> +                          X86_FEATURE_VIRT_SC_MSR_HVM, \
> +                      svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
>          /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */

... the "ret" part of this warning.

Furthermore, opposite to what the change to amd_init_ssbd() suggests,
the ordering of the alternatives here means you prefer SPEC_CTRL over
VIRT_SPEC_CTRL; see the comment near the top of _apply_alternatives().
Unless I've missed logic guaranteeing that both of the keyed-to
features can't be active at the same time.

Jan




* Re: [PATCH 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-02-01 16:46 ` [PATCH 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD Roger Pau Monne
@ 2022-02-14 16:44   ` Jan Beulich
  2022-03-14 15:32     ` Roger Pau Monné
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2022-02-14 16:44 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 01.02.2022 17:46, Roger Pau Monne wrote:
> @@ -716,26 +702,117 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
>  		if (rdmsr_safe(MSR_AMD64_LS_CFG, val) ||
>  		    ({
>  			    val &= ~mask;
> -			    if (opt_ssbd)
> +			    if (enable)
>  				    val |= mask;
>  			    false;
>  		    }) ||
>  		    wrmsr_safe(MSR_AMD64_LS_CFG, val) ||
>  		    ({
>  			    rdmsrl(MSR_AMD64_LS_CFG, val);
> -			    (val & mask) != (opt_ssbd * mask);
> +			    (val & mask) != (enable * mask);
>  		    }))
>  			bit = -1;
>  	}
>  
> -	if (bit < 0)
> +	return bit >= 0;
> +}
> +
> +void amd_init_ssbd(const struct cpuinfo_x86 *c)
> +{
> +	struct cpu_info *info = get_cpu_info();
> +
> +	if (cpu_has_ssb_no)
> +		return;
> +
> +	if (cpu_has_amd_ssbd) {
> +		/* Handled by common MSR_SPEC_CTRL logic */
> +		return;
> +	}
> +
> +	if (cpu_has_virt_ssbd) {
> +		wrmsrl(MSR_VIRT_SPEC_CTRL, opt_ssbd ? SPEC_CTRL_SSBD : 0);
> +		goto out;
> +	}
> +
> +	if (!set_legacy_ssbd(c, opt_ssbd)) {
>  		printk_once(XENLOG_ERR "No SSBD controls available\n");
> +		return;
> +	}
> +
> +	if (!smp_processor_id())
> +		setup_force_cpu_cap(X86_FEATURE_LEGACY_SSBD);

I don't think you need a new feature flag here: You only ever use it
with boot_cpu_has() and there's no alternatives patching keyed to it,
so a single global flag will likely do.

>   out:
>  	info->last_spec_ctrl = info->xen_spec_ctrl = opt_ssbd ? SPEC_CTRL_SSBD
>  							      : 0;
>  }
>  
> +static struct ssbd_core {
> +    spinlock_t lock;
> +    unsigned int count;
> +} *ssbd_core;
> +static unsigned int __read_mostly ssbd_max_cores;

__ro_after_init?

> +bool __init amd_setup_legacy_ssbd(void)
> +{
> +	unsigned int i;
> +
> +	if (boot_cpu_data.x86 != 0x17 || boot_cpu_data.x86_num_siblings == 1)

Maybe better "<= 1", not least ...

> +		return true;
> +
> +	/*
> +	 * One could be forgiven for thinking that c->x86_max_cores is the
> +	 * correct value to use here.
> +	 *
> +	 * However, that value is derived from the current configuration, and
> +	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
> +	 * max_cpus from ApicIdCoreIdSize which will cover any sparseness.
> +	 */
> +	if (boot_cpu_data.extended_cpuid_level >= 0x80000008) {
> +		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
> +		ssbd_max_cores /= boot_cpu_data.x86_num_siblings;

... because of this division. I don't know whether we're also susceptible
to this, but I've seen Linux (on top of Xen) being confused enough about
the topology related CPUID data we expose that it ended up running with
the value set to zero (and then exploding e.g. on a similar use).
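[Editorial note: to make the concern concrete, here is a standalone sketch of the ApicIdCoreIdSize derivation with an explicit guard against a zero/one sibling count; MASK_EXTR is copied from Xen, while the helper name and the guard placement are invented for illustration:]

```c
#include <assert.h>

/* MASK_EXTR as defined in Xen: extract the field described by mask m. */
#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))

/*
 * Sketch of the derivation quoted above.  Guarding num_siblings
 * before the division avoids the misbehaviour described; returns 0
 * when no usable core count can be derived.
 */
static unsigned int derive_max_cores(unsigned int leaf_80000008_ecx,
                                     unsigned int num_siblings)
{
    unsigned int max_cores;

    if (num_siblings <= 1)
        return 0;

    /* ApicIdCoreIdSize lives in bits 15:12 of CPUID 0x80000008 ECX. */
    max_cores = 1u << MASK_EXTR(leaf_80000008_ecx, 0xf000u);
    return max_cores / num_siblings;
}
```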

> +	}
> +	if (!ssbd_max_cores)
> +		return false;
> +
> +	/* Max is two sockets for Fam17h hardware. */
> +	ssbd_core = xzalloc_array(struct ssbd_core, ssbd_max_cores * 2);
> +	if (!ssbd_core)
> +		return false;
> +
> +	for (i = 0; i < ssbd_max_cores * 2; i++) {
> +		spin_lock_init(&ssbd_core[i].lock);
> +		/* Record the current state. */
> +		ssbd_core[i].count = opt_ssbd ?
> +				     boot_cpu_data.x86_num_siblings : 0;
> +	}
> +
> +	return true;
> +}
> +
> +void amd_set_legacy_ssbd(bool enable)
> +{
> +	const struct cpuinfo_x86 *c = &current_cpu_data;
> +	struct ssbd_core *core;
> +	unsigned long flags;
> +
> +	if (c->x86 != 0x17 || c->x86_num_siblings == 1) {
> +		set_legacy_ssbd(c, enable);
> +		return;
> +	}
> +
> +	ASSERT(c->phys_proc_id < 2);
> +	ASSERT(c->cpu_core_id < ssbd_max_cores);
> +	core = &ssbd_core[c->phys_proc_id * ssbd_max_cores + c->cpu_core_id];
> +	spin_lock_irqsave(&core->lock, flags);

May I suggest a brief comment on the irqsave aspect here? Aiui when
called from vmexit_virt_spec_ctrl() while we're still in a GIF=0
section, IF is 1 and hence check_lock() would be unhappy (albeit in
a false positive way).

> +	core->count += enable ? 1 : -1;
> +	ASSERT(core->count <= c->x86_num_siblings);
> +	if ((enable  && core->count == 1) ||
> +	    (!enable && core->count == 0))

Maybe simply "if ( core->count == enable )"? Or do compilers not like
comparisons with booleans?
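[Editorial note: a standalone model of the refcounting in question, using the `count == enable` form suggested above; locking is omitted and all names are illustrative, not the patch's:]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Per-core state: the core-wide LS_CFG SSBD bit is only toggled when
 * the first sibling enables it or the last sibling disables it.
 */
struct core_state {
    unsigned int count;    /* siblings currently requesting SSBD */
    unsigned int siblings; /* threads per core */
    bool ls_cfg_ssbd;      /* modelled core-wide control bit */
};

static void set_core_ssbd(struct core_state *c, bool enable)
{
    c->count += enable ? 1 : -1u;
    assert(c->count <= c->siblings);
    /* Same as "(enable && count == 1) || (!enable && count == 0)". */
    if (c->count == (unsigned int)enable)
        c->ls_cfg_ssbd = enable;
}
```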

> --- a/xen/arch/x86/spec_ctrl.c
> +++ b/xen/arch/x86/spec_ctrl.c
> @@ -22,6 +22,7 @@
>  #include <xen/param.h>
>  #include <xen/warning.h>
>  
> +#include <asm/amd.h>
>  #include <asm/hvm/svm/svm.h>
>  #include <asm/microcode.h>
>  #include <asm/msr.h>
> @@ -1056,7 +1057,8 @@ void __init init_speculation_mitigations(void)
>              setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
>      }
>  
> -    if ( opt_msr_sc_hvm && cpu_has_virt_ssbd )
> +    if ( opt_msr_sc_hvm && (cpu_has_virt_ssbd ||
> +         (boot_cpu_has(X86_FEATURE_LEGACY_SSBD) && amd_setup_legacy_ssbd())) )

Nit: I think such expressions are better wrapped such that
indentation expresses the number of pending open parentheses.

Jan




* Re: [PATCH 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests
  2022-02-01 16:46 [PATCH 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
                   ` (2 preceding siblings ...)
  2022-02-01 16:46 ` [PATCH 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD Roger Pau Monne
@ 2022-02-14 20:46 ` Andrew Cooper
  3 siblings, 0 replies; 16+ messages in thread
From: Andrew Cooper @ 2022-02-14 20:46 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

On 01/02/2022 16:46, Roger Pau Monne wrote:
> Hello,
>
> The following series implements support for MSR_VIRT_SPEC_CTRL on
> different AMD CPU families.
>
> Note that the support is added backwards, starting with the newer CPUs
> that support MSR_SPEC_CTRL and moving to the older ones either using
> MSR_VIRT_SPEC_CTRL or the SSBD bit in LS_CFG.
>
> First patch is quite clean, as it uses the shadow SPEC_CTRL in order to
> set the SSBD bit and have it context switched by Xen using the existing
> logic recently added.
>
> The next two patches introduce a different way to context switch SSBD
> either depending on the underlying SSBD support, so it's either using
> VIRT_SPEC_CTRL or the LS_CFG MSR. They also kind of overload the usage of
> several spec_ctrl variables in the hypervisor in order to store the
> status of SSBD even when not using MSR_SPEC_CTRL itself. I've tried to
> document those in the commit messages, but it could be controversial.
>
> Thanks, Roger.

I suspect it would help reviewing things to state what the end result is
intended to be.

1) Xen should use the AMD-provided algorithm for engaging SSBD itself. 
This includes using MSR_VIRT_SPEC_CTRL if Xen is nested under another
hypervisor.  In the current code, this is implemented by amd_init_ssbd()
even if only limited to boot paths for simplicity.

2) On Fam15h thru Zen1, Xen should expose MSR_VIRT_SPEC_CTRL to guests
by default, to abstract away the model- and/or hypervisor-specific
differences in MSR_LS_CFG/MSR_VIRT_SPEC_CTRL.

3) On Zen2 and later, MSR_SPEC_CTRL exists and should be used in
preference.  However, for migration compatibility, Xen should be capable
of offering MSR_VIRT_SPEC_CTRL to guests (max, not default) implemented
in terms of MSR_SPEC_CTRL.

This way, a VM levelled to run on Zen1 and Zen2 sees MSR_VIRT_SPEC_CTRL
and can use it on both hosts, whereas a VM only intending to run on Zen2
gets MSR_SPEC_CTRL by default.

Obviously this means that a VM on Zen2 can opt in to MSR_VIRT_SPEC_CTRL
because of how max vs default works, and this is a legal configuration,
even if it's not one you'd expect to see outside of testing scenarios.
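[Editorial note: the max-vs-default interaction described above can be modelled with simple feature bitmaps; this is entirely illustrative — the real CPUID policy objects are far richer:]

```c
#include <assert.h>

/* Feature bits, purely illustrative. */
#define F_VIRT_SSBD (1u << 0)
#define F_SPEC_CTRL (1u << 1)

/* Per-host policies, per the description above. */
static const unsigned int zen1_max     = F_VIRT_SSBD;
static const unsigned int zen2_default = F_SPEC_CTRL;
static const unsigned int zen2_max     = F_SPEC_CTRL | F_VIRT_SSBD;

/*
 * Levelling a VM across hosts: it may only use features present in
 * every host's max policy.
 */
static unsigned int levelled(unsigned int max_a, unsigned int max_b)
{
    return max_a & max_b;
}
```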

~Andrew


* Re: [PATCH 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  2022-02-14 15:07   ` Jan Beulich
@ 2022-03-09 15:03     ` Roger Pau Monné
  2022-03-09 15:40       ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Roger Pau Monné @ 2022-03-09 15:03 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini,
	Wei Liu, xen-devel

On Mon, Feb 14, 2022 at 04:07:09PM +0100, Jan Beulich wrote:
> On 01.02.2022 17:46, Roger Pau Monne wrote:
> > Use the logic to set shadow SPEC_CTRL values in order to implement
> > support for VIRT_SPEC_CTRL (signaled by VIRT_SSBD CPUID flag) for HVM
> > guests. This includes using the spec_ctrl vCPU MSR variable to store
> > the guest set value of VIRT_SPEC_CTRL.SSBD.
> 
> This leverages the guest running on the OR of host and guest values,
> aiui. If so, this could do with spelling out.
> 
> > Note that VIRT_SSBD is only set in the HVM max CPUID policy, as the
> > default should be to expose SPEC_CTRL only and support VIRT_SPEC_CTRL
> > for migration compatibility.
> 
> I'm afraid I don't understand this last statement: How would this be
> about migration compatibility? No guest so far can use VIRT_SPEC_CTRL,
> and a future guest using it is unlikely to be able to cope with the
> MSR "disappearing" during migration.

Maybe I didn't express this correctly. What I meant to explain is that
on hardware having SPEC_CTRL VIRT_SPEC_CTRL will not be offered by
default to guests. VIRT_SPEC_CTRL will only be part of the max CPUID
policy so it can be enabled for compatibility purposes. Does this make
sense?

> > --- a/docs/misc/xen-command-line.pandoc
> > +++ b/docs/misc/xen-command-line.pandoc
> > @@ -2273,8 +2273,9 @@ to use.
> >  * `pv=` and `hvm=` offer control over all suboptions for PV and HVM guests
> >    respectively.
> >  * `msr-sc=` offers control over Xen's support for manipulating `MSR_SPEC_CTRL`
> > -  on entry and exit.  These blocks are necessary to virtualise support for
> > -  guests and if disabled, guests will be unable to use IBRS/STIBP/SSBD/etc.
> > +  and/or `MSR_VIRT_SPEC_CTRL` on entry and exit.  These blocks are necessary to
> 
> Why would Xen be manipulating an MSR it only brings into existence for its
> guests?

Well, that's not exactly true. Xen does use VIRT_SPEC_CTRL (see
amd_init_ssbd).

I'm unsure how to express support for VIRT_SPEC_CTRL, as it does rely
on SPEC_CTRL when available.

> > --- a/xen/arch/x86/cpuid.c
> > +++ b/xen/arch/x86/cpuid.c
> > @@ -543,6 +543,13 @@ static void __init calculate_hvm_max_policy(void)
> >          __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
> >          __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
> >      }
> > +    else
> > +        /*
> > +         * If SPEC_CTRL is available VIRT_SPEC_CTRL can also be implemented as
> > +         * it's a subset of the controls exposed in SPEC_CTRL (SSBD only).
> > +         * Expose in the max policy for compatibility migration.
> > +         */
> > +        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> 
> This means even Intel guests can use the feature then? I thought it was
> meanwhile deemed bad to offer such cross-vendor features?

No, we shouldn't expose to Intel.

> Additionally, is SPEC_CTRL (i.e. IBRS) availability enough? Don't you
> need AMD_SSBD as a prereq (which may want expressing in gen-cpuid.py)?

We need AMD_SSBD if we implement VIRT_SPEC_CTRL on top of SPEC_CTRL,
but we could also implement it on top of VIRT_SPEC_CTRL (if Xen runs
virtualized) or even using the legacy SSBD setting mechanisms found in
amd_init_ssbd, so I don't think VIRT_SSBD should explicitly depend on
AMD_SSBD in gen-cpuid.py.

> > --- a/xen/arch/x86/include/asm/msr.h
> > +++ b/xen/arch/x86/include/asm/msr.h
> > @@ -291,6 +291,7 @@ struct vcpu_msrs
> >  {
> >      /*
> >       * 0x00000048 - MSR_SPEC_CTRL
> > +     * 0xc001011f - MSR_VIRT_SPEC_CTRL
> >       *
> >       * For PV guests, this holds the guest kernel value.  It is accessed on
> >       * every entry/exit path.
> > @@ -301,7 +302,10 @@ struct vcpu_msrs
> >       * For SVM, the guest value lives in the VMCB, and hardware saves/restores
> >       * the host value automatically.  However, guests run with the OR of the
> >       * host and guest value, which allows Xen to set protections behind the
> > -     * guest's back.
> > +     * guest's back.  Use such functionality in order to implement support for
> > +     * VIRT_SPEC_CTRL as a shadow value of SPEC_CTRL and thus store the value
> > +     * of VIRT_SPEC_CTRL in this field, taking advantage of both MSRs having
> > +     * compatible layouts.
> 
> I guess "shadow value" means more like an alternative value, but
> (see above) this is about setting for now just one bit behind the
> guest's back.

Well, the guest sets the bit in VIRT_SPEC_CTRL and Xen sets it on
SPEC_CTRL in order for it to have effect. I can use 'alternative
value' if that's clearer.

> > --- a/xen/arch/x86/spec_ctrl.c
> > +++ b/xen/arch/x86/spec_ctrl.c
> > @@ -395,12 +395,13 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
> >       * mitigation support for guests.
> >       */
> >  #ifdef CONFIG_HVM
> > -    printk("  Support for HVM VMs:%s%s%s%s%s\n",
> > +    printk("  Support for HVM VMs:%s%s%s%s%s%s\n",
> >             (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
> >              boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
> >              boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
> >              opt_eager_fpu)                           ? ""               : " None",
> >             boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
> > +           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_VIRT_SPEC_CTRL" : "",
> >             boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
> >             opt_eager_fpu                             ? " EAGER_FPU"     : "",
> >             boot_cpu_has(X86_FEATURE_MD_CLEAR)        ? " MD_CLEAR"      : "");
> 
> The output is getting longish; can the two SC_MSR_HVM-dependent items
> perhaps be folded, e.g. by making it "MSR_{,VIRT_}SPEC_CTRL"?

OK, but further patches will add MSR_VIRT_SPEC_CTRL to hardware that
doesn't expose MSR_SPEC_CTRL to guests, at which point it could be
confusing?

> > --- a/xen/include/public/arch-x86/cpufeatureset.h
> > +++ b/xen/include/public/arch-x86/cpufeatureset.h
> > @@ -265,7 +265,7 @@ XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S  IBRS provides same-mode protection
> >  XEN_CPUFEATURE(NO_LMSL,       8*32+20) /*S  EFER.LMSLE no longer supported. */
> >  XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
> >  XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*S  MSR_SPEC_CTRL.SSBD available */
> > -XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*   MSR_VIRT_SPEC_CTRL.SSBD */
> > +XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!s MSR_VIRT_SPEC_CTRL.SSBD */
> 
> What is the ! intended to cover here? From guest perspective the
> MSR acts entirely normally afaict.

I've used the ! to note that VIRT_SSBD might be exposed on hardware
even when it's not available as part of the host featureset. It seemed
to me that using just 's' didn't reflect this properly.

According to my reading of the comment at the top '!' is not used to
signal that the feature might act differently, but just that it's
presence cannot be properly expressed with just the A, S, H flags,
which would be the case for VIRT_SSBD I think.

Thanks, Roger.



* Re: [PATCH 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  2022-03-09 15:03     ` Roger Pau Monné
@ 2022-03-09 15:40       ` Jan Beulich
  2022-03-09 16:31         ` Roger Pau Monné
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2022-03-09 15:40 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini,
	Wei Liu, xen-devel

On 09.03.2022 16:03, Roger Pau Monné wrote:
> On Mon, Feb 14, 2022 at 04:07:09PM +0100, Jan Beulich wrote:
>> On 01.02.2022 17:46, Roger Pau Monne wrote:
>>> Use the logic to set shadow SPEC_CTRL values in order to implement
>>> support for VIRT_SPEC_CTRL (signaled by VIRT_SSBD CPUID flag) for HVM
>>> guests. This includes using the spec_ctrl vCPU MSR variable to store
>>> the guest set value of VIRT_SPEC_CTRL.SSBD.
>>
>> This leverages the guest running on the OR of host and guest values,
>> aiui. If so, this could do with spelling out.
>>
>>> Note that VIRT_SSBD is only set in the HVM max CPUID policy, as the
>>> default should be to expose SPEC_CTRL only and support VIRT_SPEC_CTRL
>>> for migration compatibility.
>>
>> I'm afraid I don't understand this last statement: How would this be
>> about migration compatibility? No guest so far can use VIRT_SPEC_CTRL,
>> and a future guest using it is unlikely to be able to cope with the
>> MSR "disappearing" during migration.
> 
> Maybe I didn't express this correctly. What I meant to explain is that
> on hardware having SPEC_CTRL VIRT_SPEC_CTRL will not be offered by
> default to guests. VIRT_SPEC_CTRL will only be part of the max CPUID
> policy so it can be enabled for compatibility purposes. Does this make
> sense?

Yes. Can you re-word along these lines?

>>> --- a/docs/misc/xen-command-line.pandoc
>>> +++ b/docs/misc/xen-command-line.pandoc
>>> @@ -2273,8 +2273,9 @@ to use.
>>>  * `pv=` and `hvm=` offer control over all suboptions for PV and HVM guests
>>>    respectively.
>>>  * `msr-sc=` offers control over Xen's support for manipulating `MSR_SPEC_CTRL`
>>> -  on entry and exit.  These blocks are necessary to virtualise support for
>>> -  guests and if disabled, guests will be unable to use IBRS/STIBP/SSBD/etc.
>>> +  and/or `MSR_VIRT_SPEC_CTRL` on entry and exit.  These blocks are necessary to
>>
>> Why would Xen be manipulating an MSR it only brings into existence for its
>> guests?
> 
> Well, that's not exactly true. Xen does use VIRT_SPEC_CTRL (see
> amd_init_ssbd).
> 
> I'm unsure how to express support for VIRT_SPEC_CTRL, as it does rely
> on SPEC_CTRL when available.

I wonder whether the command line doc needs to go into this level of
detail.

>>> --- a/xen/arch/x86/cpuid.c
>>> +++ b/xen/arch/x86/cpuid.c
>>> @@ -543,6 +543,13 @@ static void __init calculate_hvm_max_policy(void)
>>>          __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
>>>          __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
>>>      }
>>> +    else
>>> +        /*
>>> +         * If SPEC_CTRL is available VIRT_SPEC_CTRL can also be implemented as
>>> +         * it's a subset of the controls exposed in SPEC_CTRL (SSBD only).
>>> +         * Expose in the max policy for compatibility migration.
>>> +         */
>>> +        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
>>
>> This means even Intel guests can use the feature then? I thought it was
>> meanwhile deemed bad to offer such cross-vendor features?
> 
> No, we shouldn't expose to Intel.
> 
>> Additionally, is SPEC_CTRL (i.e. IBRS) availability enough? Don't you
>> need AMD_SSBD as a prereq (which may want expressing in gen-cpuid.py)?
> 
> We need AMD_SSBD if we implement VIRT_SPEC_CTRL on top of SPEC_CTRL,
> but we could also implement it on top of VIRT_SPEC_CTRL (if Xen runs
> virtualized) or even using the legacy SSBD setting mechanisms found in
> amd_init_ssbd, so I don't think VIRT_SSBD should explicitly depend on
> AMD_SSBD in gen-cpuid.py.

Hmm, yes, good point. But when the prereqs cannot be expressed in
gen-cpuid.py, I guess they need to be encoded here.

>>> --- a/xen/arch/x86/include/asm/msr.h
>>> +++ b/xen/arch/x86/include/asm/msr.h
>>> @@ -291,6 +291,7 @@ struct vcpu_msrs
>>>  {
>>>      /*
>>>       * 0x00000048 - MSR_SPEC_CTRL
>>> +     * 0xc001011f - MSR_VIRT_SPEC_CTRL
>>>       *
>>>       * For PV guests, this holds the guest kernel value.  It is accessed on
>>>       * every entry/exit path.
>>> @@ -301,7 +302,10 @@ struct vcpu_msrs
>>>       * For SVM, the guest value lives in the VMCB, and hardware saves/restores
>>>       * the host value automatically.  However, guests run with the OR of the
>>>       * host and guest value, which allows Xen to set protections behind the
>>> -     * guest's back.
>>> +     * guest's back.  Use such functionality in order to implement support for
>>> +     * VIRT_SPEC_CTRL as a shadow value of SPEC_CTRL and thus store the value
>>> +     * of VIRT_SPEC_CTRL in this field, taking advantage of both MSRs having
>>> +     * compatible layouts.
>>
>> I guess "shadow value" means more like an alternative value, but
>> (see above) this is about setting for now just one bit behind the
>> guest's back.
> 
> Well, the guest sets the bit in VIRT_SPEC_CTRL and Xen sets it on
> SPEC_CTRL in order for it to have effect. I can use 'alternative
> value' if that's clearer.

Well, as I tried to express in my earlier reply, I view "shadow value"
to mean "alternative value", so replacing wouldn't help. The question
is whether it acts like the shadow values we know elsewhere (VMX's CR0
and CR4, for example). If it does, using the same term is of course
fine. But it didn't look to me as if it would, hence I'd prefer to
avoid ambiguity. But please realize that I may have misunderstood
things ...

>>> --- a/xen/arch/x86/spec_ctrl.c
>>> +++ b/xen/arch/x86/spec_ctrl.c
>>> @@ -395,12 +395,13 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
>>>       * mitigation support for guests.
>>>       */
>>>  #ifdef CONFIG_HVM
>>> -    printk("  Support for HVM VMs:%s%s%s%s%s\n",
>>> +    printk("  Support for HVM VMs:%s%s%s%s%s%s\n",
>>>             (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
>>>              boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
>>>              boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
>>>              opt_eager_fpu)                           ? ""               : " None",
>>>             boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
>>> +           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_VIRT_SPEC_CTRL" : "",
>>>             boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
>>>             opt_eager_fpu                             ? " EAGER_FPU"     : "",
>>>             boot_cpu_has(X86_FEATURE_MD_CLEAR)        ? " MD_CLEAR"      : "");
>>
>> The output is getting longish; can the two SC_MSR_HVM-dependent items
>> perhaps be folded, e.g. by making it "MSR_{,VIRT_}SPEC_CTRL"?
> 
> OK, but further patches will add MSR_VIRT_SPEC_CTRL to hardware that
> doesn't expose MSR_SPEC_CTRL to guests, at which point it could be
> confusing?

Yeah, I obviously hadn't seen adjustments done here by later patches.
When I saw those, I think I understood why you do things this way.

>>> --- a/xen/include/public/arch-x86/cpufeatureset.h
>>> +++ b/xen/include/public/arch-x86/cpufeatureset.h
>>> @@ -265,7 +265,7 @@ XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S  IBRS provides same-mode protection
>>>  XEN_CPUFEATURE(NO_LMSL,       8*32+20) /*S  EFER.LMSLE no longer supported. */
>>>  XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
>>>  XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*S  MSR_SPEC_CTRL.SSBD available */
>>> -XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*   MSR_VIRT_SPEC_CTRL.SSBD */
>>> +XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!s MSR_VIRT_SPEC_CTRL.SSBD */
>>
>> What is the ! intended to cover here? From guest perspective the
>> MSR acts entirely normally afaict.
> 
> I've used the ! to note that VIRT_SSBD might be exposed on hardware
> even when it's not available as part of the host featureset. It seemed
> to me that using just 's' didn't reflect this properly.

I wouldn't have assigned such meaning to !. In fact if we emulated
a feature completely, I think it could legitimately show up here
without !. But then again I may also not fully be aware of all of
Andrew's intentions ...

Jan

> According to my reading of the comment at the top '!' is not used to
> signal that the feature might act differently, but just that its
> presence cannot be properly expressed with just the A, S, H flags,
> which would be the case for VIRT_SSBD I think.
> 
> Thanks, Roger.
> 




* Re: [PATCH 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  2022-03-09 15:40       ` Jan Beulich
@ 2022-03-09 16:31         ` Roger Pau Monné
  2022-03-09 17:04           ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Roger Pau Monné @ 2022-03-09 16:31 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini,
	Wei Liu, xen-devel

On Wed, Mar 09, 2022 at 04:40:24PM +0100, Jan Beulich wrote:
> On 09.03.2022 16:03, Roger Pau Monné wrote:
> > On Mon, Feb 14, 2022 at 04:07:09PM +0100, Jan Beulich wrote:
> >> On 01.02.2022 17:46, Roger Pau Monne wrote:
> >>> Use the logic to set shadow SPEC_CTRL values in order to implement
> >>> support for VIRT_SPEC_CTRL (signaled by VIRT_SSBD CPUID flag) for HVM
> >>> guests. This includes using the spec_ctrl vCPU MSR variable to store
> >>> the guest set value of VIRT_SPEC_CTRL.SSBD.
> >>
> >> This leverages the guest running on the OR of host and guest values,
> >> aiui. If so, this could do with spelling out.
> >>
> >>> Note that VIRT_SSBD is only set in the HVM max CPUID policy, as the
> >>> default should be to expose SPEC_CTRL only and support VIRT_SPEC_CTRL
> >>> for migration compatibility.
> >>
> >> I'm afraid I don't understand this last statement: How would this be
> >> about migration compatibility? No guest so far can use VIRT_SPEC_CTRL,
> >> and a future guest using it is unlikely to be able to cope with the
> >> MSR "disappearing" during migration.
> > 
> > Maybe I didn't express this correctly. What I meant to explain is that
> > on hardware having SPEC_CTRL VIRT_SPEC_CTRL will not be offered by
> > default to guests. VIRT_SPEC_CTRL will only be part of the max CPUID
> > policy so it can be enabled for compatibility purposes. Does this make
> > sense?
> 
> Yes. Can you re-word along these lines?

Sure.

> >>> --- a/docs/misc/xen-command-line.pandoc
> >>> +++ b/docs/misc/xen-command-line.pandoc
> >>> @@ -2273,8 +2273,9 @@ to use.
> >>>  * `pv=` and `hvm=` offer control over all suboptions for PV and HVM guests
> >>>    respectively.
> >>>  * `msr-sc=` offers control over Xen's support for manipulating `MSR_SPEC_CTRL`
> >>> -  on entry and exit.  These blocks are necessary to virtualise support for
> >>> -  guests and if disabled, guests will be unable to use IBRS/STIBP/SSBD/etc.
> >>> +  and/or `MSR_VIRT_SPEC_CTRL` on entry and exit.  These blocks are necessary to
> >>
> >> Why would Xen be manipulating an MSR it only brings into existence for its
> >> guests?
> > 
> > Well, that's not exactly true. Xen does use VIRT_SPEC_CTRL (see
> > amd_init_ssbd).
> > 
> > I'm unsure how to express support for VIRT_SPEC_CTRL, as it does rely
> > on SPEC_CTRL when available.
> 
> I wonder whether the command line doc needs to go into this level of
> detail.

Right, so you would be fine with leaving the command line option
description alone.

> >>> --- a/xen/arch/x86/cpuid.c
> >>> +++ b/xen/arch/x86/cpuid.c
> >>> @@ -543,6 +543,13 @@ static void __init calculate_hvm_max_policy(void)
> >>>          __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
> >>>          __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
> >>>      }
> >>> +    else
> >>> +        /*
> >>> +         * If SPEC_CTRL is available VIRT_SPEC_CTRL can also be implemented as
> >>> +         * it's a subset of the controls exposed in SPEC_CTRL (SSBD only).
> >>> +         * Expose in the max policy for compatibility migration.
> >>> +         */
> >>> +        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> >>
> >> This means even Intel guests can use the feature then? I thought it was
> >> meanwhile deemed bad to offer such cross-vendor features?
> > 
> > No, we shouldn't expose to Intel.
> > 
> >> Additionally, is SPEC_CTRL (i.e. IBRS) availability enough? Don't you
> >> need AMD_SSBD as a prereq (which may want expressing in gen-cpuid.py)?
> > 
> > We need AMD_SSBD if we implement VIRT_SPEC_CTRL on top of SPEC_CTRL,
> > but we could also implement it on top of VIRT_SPEC_CTRL (if Xen runs
> > virtualized) or even using the legacy SSBD setting mechanisms found in
> > amd_init_ssbd, so I don't think VIRT_SSBD should explicitly depend on
> > AMD_SSBD in gen-cpuid.py.
> 
> Hmm, yes, good point. But when the prereqs cannot be expressed in
> gen-cpuid.py, I guess they need to be encoded here.

Yes, I've added a dependency on AMD_SSBD here, which was missing.

> >>> --- a/xen/arch/x86/include/asm/msr.h
> >>> +++ b/xen/arch/x86/include/asm/msr.h
> >>> @@ -291,6 +291,7 @@ struct vcpu_msrs
> >>>  {
> >>>      /*
> >>>       * 0x00000048 - MSR_SPEC_CTRL
> >>> +     * 0xc001011f - MSR_VIRT_SPEC_CTRL
> >>>       *
> >>>       * For PV guests, this holds the guest kernel value.  It is accessed on
> >>>       * every entry/exit path.
> >>> @@ -301,7 +302,10 @@ struct vcpu_msrs
> >>>       * For SVM, the guest value lives in the VMCB, and hardware saves/restores
> >>>       * the host value automatically.  However, guests run with the OR of the
> >>>       * host and guest value, which allows Xen to set protections behind the
> >>> -     * guest's back.
> >>> +     * guest's back.  Use such functionality in order to implement support for
> >>> +     * VIRT_SPEC_CTRL as a shadow value of SPEC_CTRL and thus store the value
> >>> +     * of VIRT_SPEC_CTRL in this field, taking advantage of both MSRs having
> >>> +     * compatible layouts.
> >>
> >> I guess "shadow value" means more like an alternative value, but
> >> (see above) this is about setting for now just one bit behind the
> >> guest's back.
> > 
> > Well, the guest sets the bit in VIRT_SPEC_CTRL and Xen sets it on
> > SPEC_CTRL in order for it to have effect. I can use 'alternative
> > value' if that's clearer.
> 
> Well, as I tried to express in my earlier reply, I view "shadow value"
to mean "alternative value", so replacing wouldn't help. The question
is whether it acts like the shadow values we know elsewhere (VMX's CR0
> and CR4, for example). If it does, using the same term is of course
> fine. But it didn't look to me as if it would, hence I'd prefer to
> avoid ambiguity. But please realize that I may have misunderstood
> things ...

No, you are OK to ask. When developing the series I went back and
forth myself deciding whether 'hijacking' the spec_ctrl field to
implement VIRT_SPEC_CTRL was OK.

If the host has AMD_SSBD: VIRT_SPEC_CTRL.SSBD will use the SPEC_CTRL.SSBD
bit in the spec_ctrl field, but it will be set behind the guest's back.
If the guest sets VIRT_SPEC_CTRL.SSBD but not SPEC_CTRL.SSBD, reads of
SPEC_CTRL.SSBD from guest context will return 0, but the bit will be
set.

I called it 'shadow' because the underlying SPEC_CTRL.SSBD bit will
get set, but reading SPEC_CTRL.SSBD could return 0 if the bit has been
set from VIRT_SPEC_CTRL.

Do you think that's a suitable use of 'shadow'?

> >>> --- a/xen/include/public/arch-x86/cpufeatureset.h
> >>> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> >>> @@ -265,7 +265,7 @@ XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S  IBRS provides same-mode protection
> >>>  XEN_CPUFEATURE(NO_LMSL,       8*32+20) /*S  EFER.LMSLE no longer supported. */
> >>>  XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
> >>>  XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*S  MSR_SPEC_CTRL.SSBD available */
> >>> -XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*   MSR_VIRT_SPEC_CTRL.SSBD */
> >>> +XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!s MSR_VIRT_SPEC_CTRL.SSBD */
> >>
> >> What is the ! intended to cover here? From guest perspective the
> >> MSR acts entirely normally afaict.
> > 
> I've used the ! to note that VIRT_SSBD might be exposed on hardware
> where it's not available as part of the host featureset. It did seem
> to me that using just 's' didn't reflect this properly.
> 
> I wouldn't have assigned such meaning to !. In fact if we emulated
> a feature completely, I think it could legitimately show up here
> without !. But then again I may also not fully be aware of all of
> Andrew's intentions ...

Not sure either. I've assumed '!' to mean that such a feature could
appear on guest policies even when not present on the host one, but I
might be wrong. I'm happy to use a different annotation here.

Thanks, Roger.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL
  2022-03-09 16:31         ` Roger Pau Monné
@ 2022-03-09 17:04           ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2022-03-09 17:04 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini,
	Wei Liu, xen-devel

On 09.03.2022 17:31, Roger Pau Monné wrote:
> On Wed, Mar 09, 2022 at 04:40:24PM +0100, Jan Beulich wrote:
>> On 09.03.2022 16:03, Roger Pau Monné wrote:
>>> On Mon, Feb 14, 2022 at 04:07:09PM +0100, Jan Beulich wrote:
>>>> On 01.02.2022 17:46, Roger Pau Monne wrote:
>>>>> --- a/docs/misc/xen-command-line.pandoc
>>>>> +++ b/docs/misc/xen-command-line.pandoc
>>>>> @@ -2273,8 +2273,9 @@ to use.
>>>>>  * `pv=` and `hvm=` offer control over all suboptions for PV and HVM guests
>>>>>    respectively.
>>>>>  * `msr-sc=` offers control over Xen's support for manipulating `MSR_SPEC_CTRL`
>>>>> -  on entry and exit.  These blocks are necessary to virtualise support for
>>>>> -  guests and if disabled, guests will be unable to use IBRS/STIBP/SSBD/etc.
>>>>> +  and/or `MSR_VIRT_SPEC_CTRL` on entry and exit.  These blocks are necessary to
>>>>
>>>> Why would Xen be manipulating an MSR it only brings into existence for its
>>>> guests?
>>>
>>> Well, that's not exactly true. Xen does use VIRT_SPEC_CTRL (see
>>> amd_init_ssbd).
>>>
>>> I'm unsure how to express support for VIRT_SPEC_CTRL, as it does rely
>>> on SPEC_CTRL when available.
>>
>> I wonder whether the command line doc needs to go into this level of
>> detail.
> 
> Right, so you would be fine with leaving the command line option
> description alone.

Yes.

>>>>> --- a/xen/arch/x86/include/asm/msr.h
>>>>> +++ b/xen/arch/x86/include/asm/msr.h
>>>>> @@ -291,6 +291,7 @@ struct vcpu_msrs
>>>>>  {
>>>>>      /*
>>>>>       * 0x00000048 - MSR_SPEC_CTRL
>>>>> +     * 0xc001011f - MSR_VIRT_SPEC_CTRL
>>>>>       *
>>>>>       * For PV guests, this holds the guest kernel value.  It is accessed on
>>>>>       * every entry/exit path.
>>>>> @@ -301,7 +302,10 @@ struct vcpu_msrs
>>>>>       * For SVM, the guest value lives in the VMCB, and hardware saves/restores
>>>>>       * the host value automatically.  However, guests run with the OR of the
>>>>>       * host and guest value, which allows Xen to set protections behind the
>>>>> -     * guest's back.
>>>>> +     * guest's back.  Use such functionality in order to implement support for
>>>>> +     * VIRT_SPEC_CTRL as a shadow value of SPEC_CTRL and thus store the value
>>>>> +     * of VIRT_SPEC_CTRL in this field, taking advantage of both MSRs having
>>>>> +     * compatible layouts.
>>>>
>>>> I guess "shadow value" means more like an alternative value, but
>>>> (see above) this is about setting for now just one bit behind the
>>>> guest's back.
>>>
>>> Well, the guest sets the bit in VIRT_SPEC_CTRL and Xen sets it on
>>> SPEC_CTRL in order for it to have effect. I can use 'alternative
>>> value' if that's clearer.
>>
>> Well, as I tried to express in my earlier reply, I view "shadow value"
>> to mean "alternative value", so replacing wouldn't help. The question
>> is whether it acts like the shadow values we know elsewhere (VMX's CR0
>> and CR4, for example). If it does, using the same term is of course
>> fine. But it didn't look to me as if it would, hence I'd prefer to
>> avoid ambiguity. But please realize that I may have misunderstood
>> things ...
> 
> No, you are OK to ask. When developing the series I went back and
> forth myself deciding whether 'hijacking' the spec_ctrl field to
> implement VIRT_SPEC_CTRL was OK.
> 
> If the host has AMD_SSBD: VIRT_SPEC_CTRL.SSBD will use the SPEC_CTRL.SSBD
> bit in the spec_ctrl field, but it will be set behind the guest's back.
> If the guest sets VIRT_SPEC_CTRL.SSBD but not SPEC_CTRL.SSBD, reads of
> SPEC_CTRL.SSBD from guest context will return 0, but the bit will be
> set.
> 
> I called it 'shadow' because the underlying SPEC_CTRL.SSBD bit will
> get set, but reading SPEC_CTRL.SSBD could return 0 if the bit has been
> set from VIRT_SPEC_CTRL.
> 
> Do you think that's a suitable use of 'shadow'?

Not sure, but since I don't have a good alternative suggestion, please
keep using "shadow".

Jan


* Re: [PATCH 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  2022-02-14 16:02   ` Jan Beulich
@ 2022-03-10 16:41     ` Roger Pau Monné
  2022-03-11  7:31       ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Roger Pau Monné @ 2022-03-10 16:41 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, xen-devel

On Mon, Feb 14, 2022 at 05:02:52PM +0100, Jan Beulich wrote:
> On 01.02.2022 17:46, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/hvm/svm/entry.S
> > +++ b/xen/arch/x86/hvm/svm/entry.S
> > @@ -71,7 +71,9 @@ __UNLIKELY_END(nsvm_hap)
> >              mov    %al, CPUINFO_last_spec_ctrl(%rsp)
> >  1:          /* No Spectre v1 concerns.  Execution will hit VMRUN imminently. */
> >          .endm
> > -        ALTERNATIVE "", svm_vmentry_spec_ctrl, X86_FEATURE_SC_MSR_HVM
> > +        ALTERNATIVE_2 "", STR(call vmentry_virt_spec_ctrl), \
> 
> I'm afraid this violates the "ret" part of the warning a few lines up,
> while ...
> 
> > +                          X86_FEATURE_VIRT_SC_MSR_HVM, \
> > +                      svm_vmentry_spec_ctrl, X86_FEATURE_SC_MSR_HVM
> >  
> >          pop  %r15
> >          pop  %r14
> > @@ -111,7 +113,9 @@ __UNLIKELY_END(nsvm_hap)
> >              wrmsr
> >              mov    %al, CPUINFO_last_spec_ctrl(%rsp)
> >          .endm
> > -        ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
> > +        ALTERNATIVE_2 "", STR(call vmexit_virt_spec_ctrl), \
> 
> ... this violates ...
> 
> > +                          X86_FEATURE_VIRT_SC_MSR_HVM, \
> > +                      svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
> >          /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
> 
> ... the "ret" part of this warning.

Hm, so while I could load VIRT_SPEC_CTRL easily from assembly, loading
of the legacy non-architectural setting of SSBD for Fam18h and earlier
is likely not doable from assembly.

Since those helpers would only set SSBD, isn't it fine to execute a
`ret` after either having set or clear SSBD?

AFAICT the requirement would be to either have loaded SPEC_CTRL first
(if present) in the VM exit path, or to set SSBD before setting
SPEC_CTRL in the VM entry path.

> Furthermore, opposite to what the change to amd_init_ssbd() suggests,
> the ordering of the alternatives here means you prefer SPEC_CTRL over
> VIRT_SPEC_CTRL; see the comment near the top of _apply_alternatives().
> Unless I've missed logic guaranteeing that both of the keyed to
> features can't be active at the same time.

Xen itself will only use a single one (either SPEC_CTRL.SSBD or
VIRT_SPEC_CTRL.SSBD) in order to implement support on behalf of the
guest. amd_init_ssbd already prefers to use SPEC_CTRL.SSBD over
VIRT_SPEC_CTRL.SSBD when both are available, so we aim to do the same
here.

I think part of the confusion stems from using info->{last_spec_ctrl,
xen_spec_ctrl} even when SPEC_CTRL MSR is not used by Xen, I need to
clarify this somehow, maybe by not using those fields in the first
place.

Thanks, Roger.


* Re: [PATCH 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests
  2022-03-10 16:41     ` Roger Pau Monné
@ 2022-03-11  7:31       ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2022-03-11  7:31 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 10.03.2022 17:41, Roger Pau Monné wrote:
> On Mon, Feb 14, 2022 at 05:02:52PM +0100, Jan Beulich wrote:
>> On 01.02.2022 17:46, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/hvm/svm/entry.S
>>> +++ b/xen/arch/x86/hvm/svm/entry.S
>>> @@ -71,7 +71,9 @@ __UNLIKELY_END(nsvm_hap)
>>>              mov    %al, CPUINFO_last_spec_ctrl(%rsp)
>>>  1:          /* No Spectre v1 concerns.  Execution will hit VMRUN imminently. */
>>>          .endm
>>> -        ALTERNATIVE "", svm_vmentry_spec_ctrl, X86_FEATURE_SC_MSR_HVM
>>> +        ALTERNATIVE_2 "", STR(call vmentry_virt_spec_ctrl), \
>>
>> I'm afraid this violates the "ret" part of the warning a few lines up,
>> while ...
>>
>>> +                          X86_FEATURE_VIRT_SC_MSR_HVM, \
>>> +                      svm_vmentry_spec_ctrl, X86_FEATURE_SC_MSR_HVM
>>>  
>>>          pop  %r15
>>>          pop  %r14
>>> @@ -111,7 +113,9 @@ __UNLIKELY_END(nsvm_hap)
>>>              wrmsr
>>>              mov    %al, CPUINFO_last_spec_ctrl(%rsp)
>>>          .endm
>>> -        ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
>>> +        ALTERNATIVE_2 "", STR(call vmexit_virt_spec_ctrl), \
>>
>> ... this violates ...
>>
>>> +                          X86_FEATURE_VIRT_SC_MSR_HVM, \
>>> +                      svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
>>>          /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
>>
>> ... the "ret" part of this warning.
> 
> Hm, so while I could load VIRT_SPEC_CTRL easily from assembly, loading
> of the legacy non-architectural setting of SSBD for Fam18h and earlier
> is likely not doable from assembly.
> 
> Since those helpers would only set SSBD, isn't it fine to execute a
> `ret` after either having set or clear SSBD?
> 
> AFAICT the requirement would be to either have loaded SPEC_CTRL first
> (if present) in the VM exit path, or to set SSBD before setting
> SPEC_CTRL in the VM entry path.

Yes, setting SSBD with SPEC_CTRL already / still set ought to be fine.

>> Furthermore, opposite to what the change to amd_init_ssbd() suggests,
>> the ordering of the alternatives here means you prefer SPEC_CTRL over
>> VIRT_SPEC_CTRL; see the comment near the top of _apply_alternatives().
>> Unless I've missed logic guaranteeing that both of the keyed to
>> features can't be active at the same time.
> 
> Xen itself will only use a single one (either SPEC_CTRL.SSBD or
> VIRT_SPEC_CTRL.SSBD) in order to implement support on behalf of the
> guest. amd_init_ssbd already prefers to use SPEC_CTRL.SSBD over
> VIRT_SPEC_CTRL.SSBD when both are available, so we aim to do the same
> here.

Hmm, I can't see the change to init_speculation_mitigations()
guaranteeing that at most one of the two would be enabled.

> I think part of the confusion stems from using info->{last_spec_ctrl,
> xen_spec_ctrl} even when SPEC_CTRL MSR is not used by Xen, I need to
> clarify this somehow, maybe by not using those fields in the first
> place.

I don't think this matters for this particular aspect of my reply.
It was possibly causing some confusion to me, but elsewhere.

Jan



* Re: [PATCH 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-02-14 16:44   ` Jan Beulich
@ 2022-03-14 15:32     ` Roger Pau Monné
  2022-03-14 15:52       ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Roger Pau Monné @ 2022-03-14 15:32 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, xen-devel

On Mon, Feb 14, 2022 at 05:44:01PM +0100, Jan Beulich wrote:
> On 01.02.2022 17:46, Roger Pau Monne wrote:
> > +	ASSERT(core->count <= c->x86_num_siblings);
> > +	if ((enable  && core->count == 1) ||
> > +	    (!enable && core->count == 0))
> 
> Maybe simply "if ( core->count == enable )"? Or do compilers not like
> comparisons with booleans?

I had it like that, but decided to switch to the current code just
before sending because I think it's clearer. I didn't get complaints
from compilers, but I felt it was kind of abusive to compare a boolean
with an integer.

If you wish I can restore to that form.

Thanks, Roger.


* Re: [PATCH 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD
  2022-03-14 15:32     ` Roger Pau Monné
@ 2022-03-14 15:52       ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2022-03-14 15:52 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Andrew Cooper, Wei Liu, xen-devel

On 14.03.2022 16:32, Roger Pau Monné wrote:
> On Mon, Feb 14, 2022 at 05:44:01PM +0100, Jan Beulich wrote:
>> On 01.02.2022 17:46, Roger Pau Monne wrote:
>>> +	ASSERT(core->count <= c->x86_num_siblings);
>>> +	if ((enable  && core->count == 1) ||
>>> +	    (!enable && core->count == 0))
>>
>> Maybe simply "if ( core->count == enable )"? Or do compilers not like
>> comparisons with booleans?
> 
> I had it like that, but decided to switch to the current code just
> before sending because I think it's clearer. I didn't get complaints
> from compilers, but I felt it was kind of abusive to compare a boolean
> with an integer.
> 
> If you wish I can restore to that form.

Well, if you don't like that alternative form, and since I don't like
the redundancy, how about

    if ( enable ? core->count == 1 : !core->count )

? It was actually via this transformation that I landed at what I
suggested.

Jan



end of thread, other threads:[~2022-03-14 15:52 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-01 16:46 [PATCH 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
2022-02-01 16:46 ` [PATCH 1/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL Roger Pau Monne
2022-02-14 15:07   ` Jan Beulich
2022-03-09 15:03     ` Roger Pau Monné
2022-03-09 15:40       ` Jan Beulich
2022-03-09 16:31         ` Roger Pau Monné
2022-03-09 17:04           ` Jan Beulich
2022-02-01 16:46 ` [PATCH 2/3] amd/msr: allow passthrough of VIRT_SPEC_CTRL for HVM guests Roger Pau Monne
2022-02-14 16:02   ` Jan Beulich
2022-03-10 16:41     ` Roger Pau Monné
2022-03-11  7:31       ` Jan Beulich
2022-02-01 16:46 ` [PATCH 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests using legacy SSBD Roger Pau Monne
2022-02-14 16:44   ` Jan Beulich
2022-03-14 15:32     ` Roger Pau Monné
2022-03-14 15:52       ` Jan Beulich
2022-02-14 20:46 ` [PATCH 0/3] amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests Andrew Cooper
