* [PATCH for-4.17 v2 0/3] amd/virt_ssbd: refactoring and fixes
@ 2022-10-28 11:49 Roger Pau Monne
  2022-10-28 11:49 ` [PATCH for-4.17 v2 1/3] hvm/msr: load VIRT_SPEC_CTRL Roger Pau Monne
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Roger Pau Monne @ 2022-10-28 11:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Henry.Wang, Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu,
	George Dunlap, Julien Grall, Stefano Stabellini

Hello,

The following series aims to remove the execution of C code with GIF=0
on the AMD vm{entry,exit} paths.  As a result, the context switching of
SSBD is done when context switching vCPUs, and hence Xen code is run
with the guest selection of SSBD.
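
In rough terms the change has the following shape (a simplified
sketch, not the actual code - see the individual patches for the
details):

    /* Before: entry.S calls into C while GIF==0, around VMRUN. */
    clgi
    call vmentry_virt_spec_ctrl    /* C code, exceptions unusable */
    vmrun
    call vmexit_virt_spec_ctrl
    stgi

    /* After: handled from the vCPU context switch hooks, GIF==1. */
    svm_ctxt_switch_to(v):   amd_set_ssbd(true)  if the guest set SSBD
    svm_ctxt_switch_from(v): amd_set_ssbd(false) if the guest set SSBD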

The first patch is a bugfix for the missing VIRT_SPEC_CTRL MSR loading,
while the second removes the loading of VIRT_SPEC_CTRL on
guest/hypervisor context switch.  The last patch is a cleanup that's
already reviewed.

I tested on Naples and Milan CPUs (and verified that migrating from
Naples to Milan correctly carries the VIRT_SSBD bit), but I haven't
tested on a platform that exposes VIRT_SSBD itself.  I think that path
is sufficiently similar to the legacy one.

I'm currently running a GitLab CI loop to check that everything is OK.

Roger Pau Monne (3):
  hvm/msr: load VIRT_SPEC_CTRL
  amd/virt_ssbd: set SSBD at vCPU context switch
  amd: remove VIRT_SC_MSR_HVM synthetic feature

 docs/misc/xen-command-line.pandoc      | 10 +++--
 xen/arch/x86/cpu/amd.c                 | 56 ++++++++++++++------------
 xen/arch/x86/cpuid.c                   |  9 +++--
 xen/arch/x86/hvm/hvm.c                 |  1 +
 xen/arch/x86/hvm/svm/entry.S           |  6 ---
 xen/arch/x86/hvm/svm/svm.c             | 49 ++++++++++------------
 xen/arch/x86/include/asm/amd.h         |  3 +-
 xen/arch/x86/include/asm/cpufeatures.h |  2 +-
 xen/arch/x86/msr.c                     |  7 ++++
 xen/arch/x86/spec_ctrl.c               |  8 ++--
 10 files changed, 78 insertions(+), 73 deletions(-)

-- 
2.37.3




* [PATCH for-4.17 v2 1/3] hvm/msr: load VIRT_SPEC_CTRL
  2022-10-28 11:49 [PATCH for-4.17 v2 0/3] amd/virt_ssbd: refactoring and fixes Roger Pau Monne
@ 2022-10-28 11:49 ` Roger Pau Monne
  2022-10-31 16:22   ` Jan Beulich
  2022-11-02 14:20   ` Andrew Cooper
  2022-10-28 11:49 ` [PATCH for-4.17 v2 2/3] amd/virt_ssbd: set SSBD at vCPU context switch Roger Pau Monne
  2022-10-28 11:49 ` [PATCH for-4.17 v2 3/3] amd: remove VIRT_SC_MSR_HVM synthetic feature Roger Pau Monne
  2 siblings, 2 replies; 17+ messages in thread
From: Roger Pau Monne @ 2022-10-28 11:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Henry.Wang, Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Add MSR_VIRT_SPEC_CTRL to the list of MSRs handled by
hvm_load_cpu_msrs(), or else it would be lost.

Fixes: 8ffd5496f4 ('amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
I'm confused as to why we have two different lists of MSRs to send and
load, one in msrs_to_send[] and the other open-coded in
hvm_load_cpu_msrs(), but given the release status this is not the time
to clean that up.
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/hvm/hvm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 44b432ec5a..15a9b34c59 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1498,6 +1498,7 @@ static int cf_check hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
         case MSR_INTEL_MISC_FEATURES_ENABLES:
         case MSR_IA32_BNDCFGS:
         case MSR_IA32_XSS:
+        case MSR_VIRT_SPEC_CTRL:
         case MSR_AMD64_DR0_ADDRESS_MASK:
         case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
             rc = guest_wrmsr(v, ctxt->msr[i].index, ctxt->msr[i].val);
-- 
2.37.3




* [PATCH for-4.17 v2 2/3] amd/virt_ssbd: set SSBD at vCPU context switch
  2022-10-28 11:49 [PATCH for-4.17 v2 0/3] amd/virt_ssbd: refactoring and fixes Roger Pau Monne
  2022-10-28 11:49 ` [PATCH for-4.17 v2 1/3] hvm/msr: load VIRT_SPEC_CTRL Roger Pau Monne
@ 2022-10-28 11:49 ` Roger Pau Monne
  2022-10-29  9:47   ` Roger Pau Monné
  2022-10-29 13:12   ` [PATCH for-4.17 v2.1 " Roger Pau Monne
  2022-10-28 11:49 ` [PATCH for-4.17 v2 3/3] amd: remove VIRT_SC_MSR_HVM synthetic feature Roger Pau Monne
  2 siblings, 2 replies; 17+ messages in thread
From: Roger Pau Monne @ 2022-10-28 11:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Henry.Wang, Roger Pau Monne, Andrew Cooper, George Dunlap,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

The current logic for AMD SSBD context switches it on every
vm{entry,exit} if the Xen and guest selections don't match.  This is
expensive when not using SPEC_CTRL, and hence should be avoided as
much as possible.

When SSBD is not being set from SPEC_CTRL on AMD, don't context switch
it at vm{entry,exit} and instead only context switch SSBD when
switching vCPUs.  This has the side effect of running Xen code with
the guest selection of SSBD; the documentation is updated to note this
behavior.  Also note that when `ssbd` is selected on the command line
the guest SSBD selection will not have an effect, and the hypervisor
will run with SSBD unconditionally enabled when not using SPEC_CTRL
itself.

This fixes an issue with running C code in a GIF=0 region, which is
problematic when using UBSAN or other instrumentation techniques.

As a result of no longer running the code to set SSBD in a GIF=0
region, the locking in amd_set_legacy_ssbd() can be done using normal
spinlocks, and some more checks can be added to ensure it works as
intended.

Finally, it's also worth noting that since the guest SSBD selection is
no longer set on vmentry, the VIRT_SPEC_CTRL MSR handling needs to
propagate the value to the hardware as part of handling the wrmsr.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Just check virt_spec_ctrl value != 0 on context switch.
 - Remove stray asm newline.
 - Use val in svm_set_reg().
 - Fix style in amd.c.
 - Do not clear ssbd
---
 docs/misc/xen-command-line.pandoc | 10 +++---
 xen/arch/x86/cpu/amd.c            | 55 +++++++++++++++++--------------
 xen/arch/x86/hvm/svm/entry.S      |  6 ----
 xen/arch/x86/hvm/svm/svm.c        | 49 ++++++++++++---------------
 xen/arch/x86/include/asm/amd.h    |  2 +-
 xen/arch/x86/msr.c                |  7 ++++
 6 files changed, 65 insertions(+), 64 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 0fbdcb574f..424b12cfb2 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2372,10 +2372,12 @@ By default, Xen will use STIBP when IBRS is in use (IBRS implies STIBP), and
 when hardware hints recommend using it as a blanket setting.
 
 On hardware supporting SSBD (Speculative Store Bypass Disable), the `ssbd=`
-option can be used to force or prevent Xen using the feature itself.  On AMD
-hardware, this is a global option applied at boot, and not virtualised for
-guest use.  On Intel hardware, the feature is virtualised for guests,
-independently of Xen's choice of setting.
+option can be used to force or prevent Xen using the feature itself.  The
+feature is virtualised for guests, independently of Xen's choice of setting.
+On AMD hardware, disabling Xen SSBD usage on the command line (`ssbd=0` which
+is the default value) can lead to Xen running with the guest SSBD selection
+depending on hardware support, on the same hardware setting `ssbd=1` will
+result in SSBD always being enabled, regardless of guest choice.
 
 On hardware supporting PSFD (Predictive Store Forwarding Disable), the `psfd=`
 option can be used to force or prevent Xen using the feature itself.  By
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 98c52d0686..05d72c6501 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -742,7 +742,7 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
 }
 
 static struct ssbd_ls_cfg {
-    bool locked;
+    spinlock_t lock;
     unsigned int count;
 } __cacheline_aligned *ssbd_ls_cfg;
 static unsigned int __ro_after_init ssbd_max_cores;
@@ -753,7 +753,7 @@ bool __init amd_setup_legacy_ssbd(void)
 	unsigned int i;
 
 	if ((boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
-	    boot_cpu_data.x86_num_siblings <= 1)
+	    boot_cpu_data.x86_num_siblings <= 1 || opt_ssbd)
 		return true;
 
 	/*
@@ -776,46 +776,51 @@ bool __init amd_setup_legacy_ssbd(void)
 	if (!ssbd_ls_cfg)
 		return false;
 
-	if (opt_ssbd)
-		for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
-			/* Set initial state, applies to any (hotplug) CPU. */
-			ssbd_ls_cfg[i].count = boot_cpu_data.x86_num_siblings;
+	for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
+		spin_lock_init(&ssbd_ls_cfg[i].lock);
 
 	return true;
 }
 
-/*
- * Executed from GIF==0 context: avoid using BUG/ASSERT or other functionality
- * that relies on exceptions as those are not expected to run in GIF==0
- * context.
- */
-void amd_set_legacy_ssbd(bool enable)
+static void core_set_legacy_ssbd(bool enable)
 {
 	const struct cpuinfo_x86 *c = &current_cpu_data;
 	struct ssbd_ls_cfg *status;
+	unsigned long flags;
 
 	if ((c->x86 != 0x17 && c->x86 != 0x18) || c->x86_num_siblings <= 1) {
-		set_legacy_ssbd(c, enable);
+		BUG_ON(!set_legacy_ssbd(c, enable));
 		return;
 	}
 
+	BUG_ON(c->phys_proc_id >= AMD_FAM17H_MAX_SOCKETS);
+	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
 	status = &ssbd_ls_cfg[c->phys_proc_id * ssbd_max_cores +
 	                      c->cpu_core_id];
 
-	/*
-	 * Open code a very simple spinlock: this function is used with GIF==0
-	 * and different IF values, so would trigger the checklock detector.
-	 * Instead of trying to workaround the detector, use a very simple lock
-	 * implementation: it's better to reduce the amount of code executed
-	 * with GIF==0.
-	 */
-	while (test_and_set_bool(status->locked))
-		cpu_relax();
+	spin_lock_irqsave(&status->lock, flags);
 	status->count += enable ? 1 : -1;
+	ASSERT(status->count <= c->x86_num_siblings);
 	if (enable ? status->count == 1 : !status->count)
-		set_legacy_ssbd(c, enable);
-	barrier();
-	write_atomic(&status->locked, false);
+		BUG_ON(!set_legacy_ssbd(c, enable));
+	spin_unlock_irqrestore(&status->lock, flags);
+}
+
+void amd_set_ssbd(bool enable)
+{
+	if (opt_ssbd)
+		/*
+		 * Ignore attempts to turn off SSBD, it's hardcoded on the
+		 * command line.
+		 */
+		return;
+
+	if (cpu_has_virt_ssbd)
+		wrmsr(MSR_VIRT_SPEC_CTRL, enable ? SPEC_CTRL_SSBD : 0, 0);
+	else if (amd_legacy_ssbd)
+		core_set_legacy_ssbd(enable);
+	else
+		ASSERT_UNREACHABLE();
 }
 
 /*
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index a26589aa9a..981cd82e7c 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -59,9 +59,6 @@ __UNLIKELY_END(nsvm_hap)
 
         clgi
 
-        ALTERNATIVE "", STR(call vmentry_virt_spec_ctrl), \
-                        X86_FEATURE_VIRT_SC_MSR_HVM
-
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
         /* SPEC_CTRL_EXIT_TO_SVM       Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         .macro svm_vmentry_spec_ctrl
@@ -131,9 +128,6 @@ __UNLIKELY_END(nsvm_hap)
         ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
-        ALTERNATIVE "", STR(call vmexit_virt_spec_ctrl), \
-                        X86_FEATURE_VIRT_SC_MSR_HVM
-
         /*
          * STGI is executed unconditionally, and is sufficiently serialising
          * to safely resolve any Spectre-v1 concerns in the above logic.
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 1aeaabcb13..b2f147c11b 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
 
     /* Resume use of ISTs now that the host TR is reinstated. */
     enable_each_ist(idt_tables[cpu]);
+
+    /*
+     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
+     * is already cleared by svm_vmexit_spec_ctrl.
+     */
+    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
+    {
+        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
+        amd_set_ssbd(false);
+    }
 }
 
 static void cf_check svm_ctxt_switch_to(struct vcpu *v)
@@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
 
     if ( cpu_has_msr_tsc_aux )
         wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
+
+    /* Load SSBD if set by the guest. */
+    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
+    {
+        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
+        amd_set_ssbd(true);
+    }
 }
 
 static void noreturn cf_check svm_do_resume(void)
@@ -2518,6 +2535,10 @@ static void cf_check svm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
         vmcb->spec_ctrl = val;
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        amd_set_ssbd(val & SPEC_CTRL_SSBD);
+        break;
+
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
                __func__, v, reg, val);
@@ -3116,34 +3137,6 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
     vmcb_set_vintr(vmcb, intr);
 }
 
-/* Called with GIF=0. */
-void vmexit_virt_spec_ctrl(void)
-{
-    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
-
-    if ( val == current->arch.msrs->virt_spec_ctrl.raw )
-        return;
-
-    if ( cpu_has_virt_ssbd )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
-    else
-        amd_set_legacy_ssbd(val);
-}
-
-/* Called with GIF=0. */
-void vmentry_virt_spec_ctrl(void)
-{
-    unsigned int val = current->arch.msrs->virt_spec_ctrl.raw;
-
-    if ( val == (opt_ssbd ? SPEC_CTRL_SSBD : 0) )
-        return;
-
-    if ( cpu_has_virt_ssbd )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
-    else
-        amd_set_legacy_ssbd(val);
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index 6a42f68542..81ed71710f 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -153,6 +153,6 @@ void amd_check_disable_c1e(unsigned int port, u8 value);
 
 extern bool amd_legacy_ssbd;
 bool amd_setup_legacy_ssbd(void);
-void amd_set_legacy_ssbd(bool enable);
+void amd_set_ssbd(bool enable);
 
 #endif /* __AMD_H__ */
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 95416995a5..a4c28879bc 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -697,7 +697,14 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
                 msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
         }
         else
+        {
             msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
+            /*
+             * Propagate the value to hardware, as it won't be context switched
+             * on vmentry.
+             */
+            goto set_reg;
+        }
         break;
 
     case MSR_AMD64_DE_CFG:
-- 
2.37.3




* [PATCH for-4.17 v2 3/3] amd: remove VIRT_SC_MSR_HVM synthetic feature
  2022-10-28 11:49 [PATCH for-4.17 v2 0/3] amd/virt_ssbd: refactoring and fixes Roger Pau Monne
  2022-10-28 11:49 ` [PATCH for-4.17 v2 1/3] hvm/msr: load VIRT_SPEC_CTRL Roger Pau Monne
  2022-10-28 11:49 ` [PATCH for-4.17 v2 2/3] amd/virt_ssbd: set SSBD at vCPU context switch Roger Pau Monne
@ 2022-10-28 11:49 ` Roger Pau Monne
  2022-11-02  8:45   ` Henry Wang
  2 siblings, 1 reply; 17+ messages in thread
From: Roger Pau Monne @ 2022-10-28 11:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Henry.Wang, Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Since the VIRT_SPEC_CTRL.SSBD selection is no longer context switched
on vm{entry,exit} there's no need to use a synthetic feature bit for
it anymore.

Remove the bit and instead use a global variable.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/cpu/amd.c                 | 1 +
 xen/arch/x86/cpuid.c                   | 9 +++++----
 xen/arch/x86/include/asm/amd.h         | 1 +
 xen/arch/x86/include/asm/cpufeatures.h | 2 +-
 xen/arch/x86/spec_ctrl.c               | 8 ++++----
 5 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 05d72c6501..11f8e1d359 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -49,6 +49,7 @@ boolean_param("allow_unsafe", opt_allow_unsafe);
 /* Signal whether the ACPI C1E quirk is required. */
 bool __read_mostly amd_acpi_c1e_quirk;
 bool __ro_after_init amd_legacy_ssbd;
+bool __ro_after_init amd_virt_spec_ctrl;
 
 static inline int rdmsr_amd_safe(unsigned int msr, unsigned int *lo,
 				 unsigned int *hi)
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 822f9ace10..acc2f606ce 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -3,6 +3,7 @@
 #include <xen/param.h>
 #include <xen/sched.h>
 #include <xen/nospec.h>
+#include <asm/amd.h>
 #include <asm/cpuid.h>
 #include <asm/hvm/hvm.h>
 #include <asm/hvm/nestedhvm.h>
@@ -543,9 +544,9 @@ static void __init calculate_hvm_max_policy(void)
 
     /*
      * VIRT_SSBD is exposed in the default policy as a result of
-     * VIRT_SC_MSR_HVM being set, it also needs exposing in the max policy.
+     * amd_virt_spec_ctrl being set, it also needs exposing in the max policy.
      */
-    if ( boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+    if ( amd_virt_spec_ctrl )
         __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     /*
@@ -606,9 +607,9 @@ static void __init calculate_hvm_def_policy(void)
 
     /*
      * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
-     * VIRT_SC_MSR_HVM is set.
+     * amd_virt_spec_ctrl is set.
      */
-    if ( boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+    if ( amd_virt_spec_ctrl )
         __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     sanitise_featureset(hvm_featureset);
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index 81ed71710f..5c100784dd 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -152,6 +152,7 @@ extern bool amd_acpi_c1e_quirk;
 void amd_check_disable_c1e(unsigned int port, u8 value);
 
 extern bool amd_legacy_ssbd;
+extern bool amd_virt_spec_ctrl;
 bool amd_setup_legacy_ssbd(void);
 void amd_set_ssbd(bool enable);
 
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index 3895de4faf..efd3a667ef 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -24,7 +24,7 @@ XEN_CPUFEATURE(APERFMPERF,        X86_SYNTH( 8)) /* APERFMPERF */
 XEN_CPUFEATURE(MFENCE_RDTSC,      X86_SYNTH( 9)) /* MFENCE synchronizes RDTSC */
 XEN_CPUFEATURE(XEN_SMEP,          X86_SYNTH(10)) /* SMEP gets used by Xen itself */
 XEN_CPUFEATURE(XEN_SMAP,          X86_SYNTH(11)) /* SMAP gets used by Xen itself */
-XEN_CPUFEATURE(VIRT_SC_MSR_HVM,   X86_SYNTH(12)) /* MSR_VIRT_SPEC_CTRL exposed to HVM */
+/* Bit 12 unused. */
 XEN_CPUFEATURE(IND_THUNK_LFENCE,  X86_SYNTH(13)) /* Use IND_THUNK_LFENCE */
 XEN_CPUFEATURE(IND_THUNK_JMP,     X86_SYNTH(14)) /* Use IND_THUNK_JMP */
 XEN_CPUFEATURE(SC_NO_BRANCH_HARDEN, X86_SYNTH(15)) /* (Disable) Conditional branch hardening */
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 4e53056624..0b94af6b86 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -514,12 +514,12 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_IBPB_ENTRY_HVM) ||
-            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) ||
+            amd_virt_spec_ctrl ||
             opt_eager_fpu || opt_md_clear_hvm)       ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)      ? " MSR_SPEC_CTRL" : "",
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
-            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM)) ? " MSR_VIRT_SPEC_CTRL"
-                                                       : "",
+            amd_virt_spec_ctrl)                      ? " MSR_VIRT_SPEC_CTRL"
+                                                     : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)      ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            opt_md_clear_hvm                          ? " MD_CLEAR"      : "",
@@ -1247,7 +1247,7 @@ void __init init_speculation_mitigations(void)
     /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
     if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd &&
          (cpu_has_virt_ssbd || (amd_legacy_ssbd && amd_setup_legacy_ssbd())) )
-        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
+        amd_virt_spec_ctrl = true;
 
     /* Figure out default_xen_spec_ctrl. */
     if ( has_spec_ctrl && ibrs )
-- 
2.37.3




* Re: [PATCH for-4.17 v2 2/3] amd/virt_ssbd: set SSBD at vCPU context switch
  2022-10-28 11:49 ` [PATCH for-4.17 v2 2/3] amd/virt_ssbd: set SSBD at vCPU context switch Roger Pau Monne
@ 2022-10-29  9:47   ` Roger Pau Monné
  2022-10-29 13:12   ` [PATCH for-4.17 v2.1 " Roger Pau Monne
  1 sibling, 0 replies; 17+ messages in thread
From: Roger Pau Monné @ 2022-10-29  9:47 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: xen-devel, Henry.Wang, Andrew Cooper, George Dunlap, Jan Beulich,
	Julien Grall, Stefano Stabellini, Wei Liu

On Fri, Oct 28, 2022 at 01:49:12PM +0200, Roger Pau Monne wrote:
> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> index 95416995a5..a4c28879bc 100644
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -697,7 +697,14 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>                  msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
>          }
>          else
> +        {
>              msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
> +            /*
> +             * Propagate the value to hardware, as it won't be context switched
> +             * on vmentry.
> +             */
> +            goto set_reg;

Doing this when v != curr is wrong; I will send an updated patch.

Roger.



* [PATCH for-4.17 v2.1 2/3] amd/virt_ssbd: set SSBD at vCPU context switch
  2022-10-28 11:49 ` [PATCH for-4.17 v2 2/3] amd/virt_ssbd: set SSBD at vCPU context switch Roger Pau Monne
  2022-10-29  9:47   ` Roger Pau Monné
@ 2022-10-29 13:12   ` Roger Pau Monne
  2022-11-02 11:49     ` Jan Beulich
  1 sibling, 1 reply; 17+ messages in thread
From: Roger Pau Monne @ 2022-10-29 13:12 UTC (permalink / raw)
  To: xen-devel
  Cc: Henry.Wang, Roger Pau Monne, Andrew Cooper, George Dunlap,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

The current logic for AMD SSBD context switches it on every
vm{entry,exit} if the Xen and guest selections don't match.  This is
expensive when not using SPEC_CTRL, and hence should be avoided as
much as possible.

When SSBD is not being set from SPEC_CTRL on AMD, don't context switch
it at vm{entry,exit} and instead only context switch SSBD when
switching vCPUs.  This has the side effect of running Xen code with
the guest selection of SSBD; the documentation is updated to note this
behavior.  Also note that when `ssbd` is selected on the command line
the guest SSBD selection will not have an effect, and the hypervisor
will run with SSBD unconditionally enabled when not using SPEC_CTRL
itself.

This fixes an issue with running C code in a GIF=0 region, which is
problematic when using UBSAN or other instrumentation techniques.

As a result of no longer running the code to set SSBD in a GIF=0
region, the locking in amd_set_legacy_ssbd() can be done using normal
spinlocks, and some more checks can be added to ensure it works as
intended.

Finally, it's also worth noting that since the guest SSBD selection is
no longer set on vmentry, the VIRT_SPEC_CTRL MSR handling needs to
propagate the value to the hardware as part of handling the wrmsr.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Fix calling set_reg unconditionally.

Changes since v1:
 - Just check virt_spec_ctrl value != 0 on context switch.
 - Remove stray asm newline.
 - Use val in svm_set_reg().
 - Fix style in amd.c.
 - Do not clear ssbd
---
 docs/misc/xen-command-line.pandoc | 10 +++---
 xen/arch/x86/cpu/amd.c            | 55 +++++++++++++++++--------------
 xen/arch/x86/hvm/svm/entry.S      |  6 ----
 xen/arch/x86/hvm/svm/svm.c        | 49 ++++++++++++---------------
 xen/arch/x86/include/asm/amd.h    |  2 +-
 xen/arch/x86/msr.c                |  8 +++++
 6 files changed, 66 insertions(+), 64 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 0fbdcb574f..424b12cfb2 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2372,10 +2372,12 @@ By default, Xen will use STIBP when IBRS is in use (IBRS implies STIBP), and
 when hardware hints recommend using it as a blanket setting.
 
 On hardware supporting SSBD (Speculative Store Bypass Disable), the `ssbd=`
-option can be used to force or prevent Xen using the feature itself.  On AMD
-hardware, this is a global option applied at boot, and not virtualised for
-guest use.  On Intel hardware, the feature is virtualised for guests,
-independently of Xen's choice of setting.
+option can be used to force or prevent Xen using the feature itself.  The
+feature is virtualised for guests, independently of Xen's choice of setting.
+On AMD hardware, disabling Xen SSBD usage on the command line (`ssbd=0` which
+is the default value) can lead to Xen running with the guest SSBD selection
+depending on hardware support, on the same hardware setting `ssbd=1` will
+result in SSBD always being enabled, regardless of guest choice.
 
 On hardware supporting PSFD (Predictive Store Forwarding Disable), the `psfd=`
 option can be used to force or prevent Xen using the feature itself.  By
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 98c52d0686..05d72c6501 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -742,7 +742,7 @@ void amd_init_ssbd(const struct cpuinfo_x86 *c)
 }
 
 static struct ssbd_ls_cfg {
-    bool locked;
+    spinlock_t lock;
     unsigned int count;
 } __cacheline_aligned *ssbd_ls_cfg;
 static unsigned int __ro_after_init ssbd_max_cores;
@@ -753,7 +753,7 @@ bool __init amd_setup_legacy_ssbd(void)
 	unsigned int i;
 
 	if ((boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
-	    boot_cpu_data.x86_num_siblings <= 1)
+	    boot_cpu_data.x86_num_siblings <= 1 || opt_ssbd)
 		return true;
 
 	/*
@@ -776,46 +776,51 @@ bool __init amd_setup_legacy_ssbd(void)
 	if (!ssbd_ls_cfg)
 		return false;
 
-	if (opt_ssbd)
-		for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
-			/* Set initial state, applies to any (hotplug) CPU. */
-			ssbd_ls_cfg[i].count = boot_cpu_data.x86_num_siblings;
+	for (i = 0; i < ssbd_max_cores * AMD_FAM17H_MAX_SOCKETS; i++)
+		spin_lock_init(&ssbd_ls_cfg[i].lock);
 
 	return true;
 }
 
-/*
- * Executed from GIF==0 context: avoid using BUG/ASSERT or other functionality
- * that relies on exceptions as those are not expected to run in GIF==0
- * context.
- */
-void amd_set_legacy_ssbd(bool enable)
+static void core_set_legacy_ssbd(bool enable)
 {
 	const struct cpuinfo_x86 *c = &current_cpu_data;
 	struct ssbd_ls_cfg *status;
+	unsigned long flags;
 
 	if ((c->x86 != 0x17 && c->x86 != 0x18) || c->x86_num_siblings <= 1) {
-		set_legacy_ssbd(c, enable);
+		BUG_ON(!set_legacy_ssbd(c, enable));
 		return;
 	}
 
+	BUG_ON(c->phys_proc_id >= AMD_FAM17H_MAX_SOCKETS);
+	BUG_ON(c->cpu_core_id >= ssbd_max_cores);
 	status = &ssbd_ls_cfg[c->phys_proc_id * ssbd_max_cores +
 	                      c->cpu_core_id];
 
-	/*
-	 * Open code a very simple spinlock: this function is used with GIF==0
-	 * and different IF values, so would trigger the checklock detector.
-	 * Instead of trying to workaround the detector, use a very simple lock
-	 * implementation: it's better to reduce the amount of code executed
-	 * with GIF==0.
-	 */
-	while (test_and_set_bool(status->locked))
-		cpu_relax();
+	spin_lock_irqsave(&status->lock, flags);
 	status->count += enable ? 1 : -1;
+	ASSERT(status->count <= c->x86_num_siblings);
 	if (enable ? status->count == 1 : !status->count)
-		set_legacy_ssbd(c, enable);
-	barrier();
-	write_atomic(&status->locked, false);
+		BUG_ON(!set_legacy_ssbd(c, enable));
+	spin_unlock_irqrestore(&status->lock, flags);
+}
+
+void amd_set_ssbd(bool enable)
+{
+	if (opt_ssbd)
+		/*
+		 * Ignore attempts to turn off SSBD, it's hardcoded on the
+		 * command line.
+		 */
+		return;
+
+	if (cpu_has_virt_ssbd)
+		wrmsr(MSR_VIRT_SPEC_CTRL, enable ? SPEC_CTRL_SSBD : 0, 0);
+	else if (amd_legacy_ssbd)
+		core_set_legacy_ssbd(enable);
+	else
+		ASSERT_UNREACHABLE();
 }
 
 /*
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index a26589aa9a..981cd82e7c 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -59,9 +59,6 @@ __UNLIKELY_END(nsvm_hap)
 
         clgi
 
-        ALTERNATIVE "", STR(call vmentry_virt_spec_ctrl), \
-                        X86_FEATURE_VIRT_SC_MSR_HVM
-
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
         /* SPEC_CTRL_EXIT_TO_SVM       Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         .macro svm_vmentry_spec_ctrl
@@ -131,9 +128,6 @@ __UNLIKELY_END(nsvm_hap)
         ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
-        ALTERNATIVE "", STR(call vmexit_virt_spec_ctrl), \
-                        X86_FEATURE_VIRT_SC_MSR_HVM
-
         /*
          * STGI is executed unconditionally, and is sufficiently serialising
          * to safely resolve any Spectre-v1 concerns in the above logic.
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 1aeaabcb13..b2f147c11b 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
 
     /* Resume use of ISTs now that the host TR is reinstated. */
     enable_each_ist(idt_tables[cpu]);
+
+    /*
+     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
+     * is already cleared by svm_vmexit_spec_ctrl.
+     */
+    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
+    {
+        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
+        amd_set_ssbd(false);
+    }
 }
 
 static void cf_check svm_ctxt_switch_to(struct vcpu *v)
@@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
 
     if ( cpu_has_msr_tsc_aux )
         wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
+
+    /* Load SSBD if set by the guest. */
+    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
+    {
+        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
+        amd_set_ssbd(true);
+    }
 }
 
 static void noreturn cf_check svm_do_resume(void)
@@ -2518,6 +2535,10 @@ static void cf_check svm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
         vmcb->spec_ctrl = val;
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        amd_set_ssbd(val & SPEC_CTRL_SSBD);
+        break;
+
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
                __func__, v, reg, val);
@@ -3116,34 +3137,6 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
     vmcb_set_vintr(vmcb, intr);
 }
 
-/* Called with GIF=0. */
-void vmexit_virt_spec_ctrl(void)
-{
-    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
-
-    if ( val == current->arch.msrs->virt_spec_ctrl.raw )
-        return;
-
-    if ( cpu_has_virt_ssbd )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
-    else
-        amd_set_legacy_ssbd(val);
-}
-
-/* Called with GIF=0. */
-void vmentry_virt_spec_ctrl(void)
-{
-    unsigned int val = current->arch.msrs->virt_spec_ctrl.raw;
-
-    if ( val == (opt_ssbd ? SPEC_CTRL_SSBD : 0) )
-        return;
-
-    if ( cpu_has_virt_ssbd )
-        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
-    else
-        amd_set_legacy_ssbd(val);
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index 6a42f68542..81ed71710f 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -153,6 +153,6 @@ void amd_check_disable_c1e(unsigned int port, u8 value);
 
 extern bool amd_legacy_ssbd;
 bool amd_setup_legacy_ssbd(void);
-void amd_set_legacy_ssbd(bool enable);
+void amd_set_ssbd(bool enable);
 
 #endif /* __AMD_H__ */
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 95416995a5..d15185cd48 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -697,7 +697,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
                 msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
         }
         else
+        {
             msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
+            if ( v == curr )
+                /*
+                 * Propagate the value to hardware, as it won't be context
+                 * switched on vmentry.
+                 */
+                goto set_reg;
+        }
         break;
 
     case MSR_AMD64_DE_CFG:
-- 
2.37.3




* Re: [PATCH for-4.17 v2 1/3] hvm/msr: load VIRT_SPEC_CTRL
  2022-10-28 11:49 ` [PATCH for-4.17 v2 1/3] hvm/msr: load VIRT_SPEC_CTRL Roger Pau Monne
@ 2022-10-31 16:22   ` Jan Beulich
  2022-11-02  8:46     ` Henry Wang
  2022-11-02 14:20   ` Andrew Cooper
  1 sibling, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2022-10-31 16:22 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Henry.Wang, Andrew Cooper, Wei Liu, xen-devel

On 28.10.2022 13:49, Roger Pau Monne wrote:
> Add MSR_VIRT_SPEC_CTRL to the list of MSRs handled by
> hvm_load_cpu_msrs(), or else it would be lost.
> 
> Fixes: 8ffd5496f4 ('amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> ---
> I'm confused as to why we have two different lists of MSRs to send and
> load, one in msrs_to_send[] and the other open-coded in
> hvm_load_cpu_msrs(), but given the release status this is not the time
> to clean that up.

I guess this is an optimization, as looking up the MSR in msrs_to_send[]
would be an inefficient inner loop, with the result only used in a
boolean manner (entry present or not present). I do think though that
both places should have a comment referencing each other, so both will
(hopefully) be updated together. The same looks to apply to
arch_do_domctl()'s MSR handling, where I've screwed up for XFD{,_ERR}.
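
(For illustration, the avoided alternative would look roughly like the
following - a hypothetical sketch, not code from the tree:

    unsigned int j;

    /* Search msrs_to_send[] for every MSR in the incoming context:
     * an O(n*m) inner loop whose result is only used as a yes/no. */
    for ( j = 0; j < ARRAY_SIZE(msrs_to_send); j++ )
        if ( ctxt->msr[i].index == msrs_to_send[j] )
            break;
    if ( j == ARRAY_SIZE(msrs_to_send) )
        return -ENXIO; /* not an MSR Xen would have sent */

The open-coded switch answers the same yes/no question directly.)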

I'm puzzled by the ctxt->msr[i]._rsvd checking in the default: case. It
was me who added this 8.5 years ago, but I can't see the value of that
check considering the check in the next to final loop in the function.
Hmm, wait - this had an error checking purpose until commit f61685a66903
("x86: remove defunct init/load/save_msr() hvm_funcs"). I think the check
should have been removed at that point. That'll be a post-4.17 patch, I
suppose ...

Jan



* RE: [PATCH for-4.17 v2 3/3] amd: remove VIRT_SC_MSR_HVM synthetic feature
  2022-10-28 11:49 ` [PATCH for-4.17 v2 3/3] amd: remove VIRT_SC_MSR_HVM synthetic feature Roger Pau Monne
@ 2022-11-02  8:45   ` Henry Wang
  0 siblings, 0 replies; 17+ messages in thread
From: Henry Wang @ 2022-11-02  8:45 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel; +Cc: Jan Beulich, Andrew Cooper, Wei Liu

Hi Roger,

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Subject: [PATCH for-4.17 v2 3/3] amd: remove VIRT_SC_MSR_HVM synthetic
> feature
> 
> Since the VIRT_SPEC_CTRL.SSBD selection is no longer context switched
> on vm{entry,exit} there's no need to use a synthetic feature bit for
> it anymore.
> 
> Remove the bit and instead use a global variable.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Sorry for the late response.

Release-acked-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


* RE: [PATCH for-4.17 v2 1/3] hvm/msr: load VIRT_SPEC_CTRL
  2022-10-31 16:22   ` Jan Beulich
@ 2022-11-02  8:46     ` Henry Wang
  0 siblings, 0 replies; 17+ messages in thread
From: Henry Wang @ 2022-11-02  8:46 UTC (permalink / raw)
  To: Jan Beulich, Roger Pau Monne; +Cc: Andrew Cooper, Wei Liu, xen-devel

Hi both,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH for-4.17 v2 1/3] hvm/msr: load VIRT_SPEC_CTRL
> 
> On 28.10.2022 13:49, Roger Pau Monne wrote:
> > Add MSR_VIRT_SPEC_CTRL to the list of MSRs handled by
> > hvm_load_cpu_msrs(), or else it would be lost.
> >
> > Fixes: 8ffd5496f4 ('amd/msr: implement VIRT_SPEC_CTRL for HVM guests
> on top of SPEC_CTRL')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Sorry for the late reply.

Release-acked-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry



* Re: [PATCH for-4.17 v2.1 2/3] amd/virt_ssbd: set SSBD at vCPU context switch
  2022-10-29 13:12   ` [PATCH for-4.17 v2.1 " Roger Pau Monne
@ 2022-11-02 11:49     ` Jan Beulich
  2022-11-02 17:38       ` Roger Pau Monné
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2022-11-02 11:49 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Henry.Wang, Andrew Cooper, George Dunlap, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 29.10.2022 15:12, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
>  
>      /* Resume use of ISTs now that the host TR is reinstated. */
>      enable_each_ist(idt_tables[cpu]);
> +
> +    /*
> +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
> +     * is already cleared by svm_vmexit_spec_ctrl.
> +     */
> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> +    {
> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> +        amd_set_ssbd(false);
> +    }
>  }

Aren't you potentially turning off SSBD here just to ...

> @@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
>  
>      if ( cpu_has_msr_tsc_aux )
>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
> +
> +    /* Load SSBD if set by the guest. */
> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> +    {
> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> +        amd_set_ssbd(true);
> +    }
>  }

... turn it on here again? IOW wouldn't switching better be isolated to
just svm_ctxt_switch_to(), doing nothing if already in the intended mode?
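
(Hypothetically something shaped like the below, with a made-up
per-pCPU tracker so the hook becomes a no-op when the mode already
matches:

    static DEFINE_PER_CPU(bool, ssbd_loaded); /* hypothetical */

    bool want = v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD;

    if ( want != this_cpu(ssbd_loaded) )
    {
        amd_set_ssbd(want);
        this_cpu(ssbd_loaded) = want;
    }
)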

> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -697,7 +697,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>                  msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
>          }
>          else
> +        {
>              msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
> +            if ( v == curr )
> +                /*
> +                 * Propagate the value to hardware, as it won't be context
> +                 * switched on vmentry.
> +                 */

I have to admit that I find "on vmentry" in the comment misleading: Reading
it I first thought you're still alluding to the old model. Plus I also find
the combination of "context switched" and "on vmentry" problematic, as we
generally mean something else when we say "context switch".

> +                goto set_reg;

It's not clear why you want to use hvm_set_reg() in the first place - the
comment says "propagate to hardware", which would mean wrmsrl() in the
usual case. Here it would mean a direct call to amd_set_ssbd() imo. That
would then also be in line with all other "v == curr" conditionals, none
of which apply to any "goto set_reg". ..._set_reg(), aiui, is meant only
for use in cases where vCPU state needs updating such that proper state
would be loaded later (e.g. during VM entry).
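
(I.e., hypothetically:

    if ( v == curr )
        /* Poke the hardware directly; nothing is deferred. */
        amd_set_ssbd(val & SPEC_CTRL_SSBD);

with nothing to do in the v != curr case, as the proper state is then
loaded when v gets scheduled.)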

Jan



* Re: [PATCH for-4.17 v2 1/3] hvm/msr: load VIRT_SPEC_CTRL
  2022-10-28 11:49 ` [PATCH for-4.17 v2 1/3] hvm/msr: load VIRT_SPEC_CTRL Roger Pau Monne
  2022-10-31 16:22   ` Jan Beulich
@ 2022-11-02 14:20   ` Andrew Cooper
  1 sibling, 0 replies; 17+ messages in thread
From: Andrew Cooper @ 2022-11-02 14:20 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel; +Cc: Henry.Wang, Jan Beulich, Wei Liu

On 28/10/2022 12:49, Roger Pau Monne wrote:
> Add MSR_VIRT_SPEC_CTRL to the list of MSRs handled by
> hvm_load_cpu_msrs(), or else it would be lost.
>
> Fixes: 8ffd5496f4 ('amd/msr: implement VIRT_SPEC_CTRL for HVM guests on top of SPEC_CTRL')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> I'm confused as to why we have two different lists of MSRs to send and
> load, one in msrs_to_send[] and the other open-coded in
> hvm_load_cpu_msrs(), but given the release status this is not the time
> to clean that up.

It's necessary (for now).

guest_wrmsr() started as only safe in current context.  The conversion
work (to make it safe in remote-but-paused context) is in progress.

e.g. guest_wrmsr()'s call into vmce_wrmsr() will malfunction in
non-current context.  There are probably others (although I think most
of the problem went away when we restructured the handlers).

The list is the subset of MSRs known to be safe for remote writes.  It
should be dropped when guest_wrmsr() is fully safe.
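
(A made-up example of the hazard - a handler which implicitly acts on
the current pCPU rather than on v:

    case MSR_EXAMPLE: /* hypothetical MSR, for illustration only */
        wrmsrl(MSR_EXAMPLE, val); /* hits the wrong CPU if v != current */
        break;

Such cases have to stay out of the remote-write list until fixed.)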

~Andrew


* Re: [PATCH for-4.17 v2.1 2/3] amd/virt_ssbd: set SSBD at vCPU context switch
  2022-11-02 11:49     ` Jan Beulich
@ 2022-11-02 17:38       ` Roger Pau Monné
  2022-11-03  8:09         ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Roger Pau Monné @ 2022-11-02 17:38 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Henry.Wang, Andrew Cooper, George Dunlap, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On Wed, Nov 02, 2022 at 12:49:17PM +0100, Jan Beulich wrote:
> On 29.10.2022 15:12, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/hvm/svm/svm.c
> > +++ b/xen/arch/x86/hvm/svm/svm.c
> > @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
> >  
> >      /* Resume use of ISTs now that the host TR is reinstated. */
> >      enable_each_ist(idt_tables[cpu]);
> > +
> > +    /*
> > +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
> > +     * is already cleared by svm_vmexit_spec_ctrl.
> > +     */
> > +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> > +    {
> > +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> > +        amd_set_ssbd(false);
> > +    }
> >  }
> 
> Aren't you potentially turning off SSBD here just to ...
> 
> > @@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
> >  
> >      if ( cpu_has_msr_tsc_aux )
> >          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
> > +
> > +    /* Load SSBD if set by the guest. */
> > +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> > +    {
> > +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> > +        amd_set_ssbd(true);
> > +    }
> >  }
> 
> ... turn it on here again? IOW wouldn't switching better be isolated to
> just svm_ctxt_switch_to(), doing nothing if already in the intended mode?

What if we switch from a HVM vCPU into a PV one?  AFAICT then
svm_ctxt_switch_to() won't get called and we would be running the PV
guest with the previous HVM domain SSBD selection.
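
(Sketch of the flow I'm worried about - roughly what
__context_switch() does, names from memory:

    prev->arch.ctxt_switch->from(prev); /* svm_ctxt_switch_from() */
    ...
    next->arch.ctxt_switch->to(next);   /* paravirt_ctxt_switch_to()
                                           for PV, so no SVM hook runs
                                           to undo the guest's SSBD */
)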


> > --- a/xen/arch/x86/msr.c
> > +++ b/xen/arch/x86/msr.c
> > @@ -697,7 +697,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
> >                  msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
> >          }
> >          else
> > +        {
> >              msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
> > +            if ( v == curr )
> > +                /*
> > +                 * Propagate the value to hardware, as it won't be context
> > +                 * switched on vmentry.
> > +                 */
> 
> I have to admit that I find "on vmentry" in the comment misleading: Reading
> it I first thought you're still alluding to the old model. Plus I also find
> the combination of "context switched" and "on vmentry" problematic, as we
> generally mean something else when we say "context switch".

I had a hard time wording this, because of the Xen/guest vs vCPU
context switches.

What about:

"Propagate the value to hardware, as it won't we set on guest resume
path."


> > +                goto set_reg;
> 
> It's not clear why you want to use hvm_set_reg() in the first place - the
> comment says "propagate to hardware", which would mean wrmsrl() in the
> usual case. Here it would mean a direct call to amd_set_ssbd() imo. That
> would then also be in line with all other "v == curr" conditionals, none
> of which apply to any "goto set_reg". ..._set_reg(), aiui, is meant only
> for use in cases where vCPU state needs updating such that proper state
> would be loaded later (e.g. during VM entry).

I thought it was better to hide those vendor specific calls in the
already existing vendor hooks (set_reg).  I don't mind calling
amd_set_ssbd() directly here if that's preferred, it seemed kind of a
layering violation when we have vendor specific hooks in place.

Thanks, Roger.



* Re: [PATCH for-4.17 v2.1 2/3] amd/virt_ssbd: set SSBD at vCPU context switch
  2022-11-02 17:38       ` Roger Pau Monné
@ 2022-11-03  8:09         ` Jan Beulich
  2022-11-03  8:52           ` Roger Pau Monné
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2022-11-03  8:09 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Henry.Wang, Andrew Cooper, George Dunlap, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 02.11.2022 18:38, Roger Pau Monné wrote:
> On Wed, Nov 02, 2022 at 12:49:17PM +0100, Jan Beulich wrote:
>> On 29.10.2022 15:12, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>> @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
>>>  
>>>      /* Resume use of ISTs now that the host TR is reinstated. */
>>>      enable_each_ist(idt_tables[cpu]);
>>> +
>>> +    /*
>>> +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
>>> +     * is already cleared by svm_vmexit_spec_ctrl.
>>> +     */
>>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
>>> +    {
>>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
>>> +        amd_set_ssbd(false);
>>> +    }
>>>  }
>>
>> Aren't you potentially turning off SSBD here just to ...
>>
>>> @@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
>>>  
>>>      if ( cpu_has_msr_tsc_aux )
>>>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
>>> +
>>> +    /* Load SSBD if set by the guest. */
>>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
>>> +    {
>>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
>>> +        amd_set_ssbd(true);
>>> +    }
>>>  }
>>
>> ... turn it on here again? IOW wouldn't switching better be isolated to
>> just svm_ctxt_switch_to(), doing nothing if already in the intended mode?
> 
> What if we switch from a HVM vCPU into a PV one?  AFAICT then
> svm_ctxt_switch_to() won't get called and we would be running the PV
> guest with the previous HVM domain SSBD selection.

Would that be a problem? Or in other words: What is the intended behavior
for PV? PV domains can control SSBD via SPEC_CTRL (only), so all we need
to guarantee is that we respect their choice there.

>>> --- a/xen/arch/x86/msr.c
>>> +++ b/xen/arch/x86/msr.c
>>> @@ -697,7 +697,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>>                  msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
>>>          }
>>>          else
>>> +        {
>>>              msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
>>> +            if ( v == curr )
>>> +                /*
>>> +                 * Propagate the value to hardware, as it won't be context
>>> +                 * switched on vmentry.
>>> +                 */
>>
>> I have to admit that I find "on vmentry" in the comment misleading: Reading
>> it I first thought you're still alluding to the old model. Plus I also find
>> the combination of "context switched" and "on vmentry" problematic, as we
>> generally mean something else when we say "context switch".
> 
> I had a hard time wording this, because of the Xen/guest vs vCPU
> context switches.
> 
> What about:
> 
> "Propagate the value to hardware, as it won't we set on guest resume
> path."

Sounds better, thanks (with s/we/be/).

>>> +                goto set_reg;
>>
>> It's not clear why you want to use hvm_set_reg() in the first place - the
>> comment says "propagate to hardware", which would mean wrmsrl() in the
>> usual case. Here it would mean a direct call to amd_set_ssbd() imo. That
>> would then also be in line with all other "v == curr" conditionals, none
>> of which apply to any "goto set_reg". ..._set_reg(), aiui, is meant only
>> for use in cases where vCPU state needs updating such that proper state
>> would be loaded later (e.g. during VM entry).
> 
> I thought it was better to hide those vendor specific calls in the
> already existing vendor hooks (set_reg).  I don't mind calling
> amd_set_ssbd() directly here if that's preferred, it seemed kind of a
> layering violation when we have vendor specific hooks in place.

Well, Andrew of course should correct me if I'm wrong, but my understanding
of the get/set-reg interface is as described. On which grounds I don't see
any layering violation here - doing the call right here is merely a more
involved flavor of wrmsrl().

Jan



* Re: [PATCH for-4.17 v2.1 2/3] amd/virt_ssbd: set SSBD at vCPU context switch
  2022-11-03  8:09         ` Jan Beulich
@ 2022-11-03  8:52           ` Roger Pau Monné
  2022-11-03  9:01             ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Roger Pau Monné @ 2022-11-03  8:52 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Henry.Wang, Andrew Cooper, George Dunlap, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On Thu, Nov 03, 2022 at 09:09:41AM +0100, Jan Beulich wrote:
> On 02.11.2022 18:38, Roger Pau Monné wrote:
> > On Wed, Nov 02, 2022 at 12:49:17PM +0100, Jan Beulich wrote:
> >> On 29.10.2022 15:12, Roger Pau Monne wrote:
> >>> --- a/xen/arch/x86/hvm/svm/svm.c
> >>> +++ b/xen/arch/x86/hvm/svm/svm.c
> >>> @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
> >>>  
> >>>      /* Resume use of ISTs now that the host TR is reinstated. */
> >>>      enable_each_ist(idt_tables[cpu]);
> >>> +
> >>> +    /*
> >>> +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
> >>> +     * is already cleared by svm_vmexit_spec_ctrl.
> >>> +     */
> >>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> >>> +    {
> >>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> >>> +        amd_set_ssbd(false);
> >>> +    }
> >>>  }
> >>
> >> Aren't you potentially turning off SSBD here just to ...
> >>
> >>> @@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
> >>>  
> >>>      if ( cpu_has_msr_tsc_aux )
> >>>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
> >>> +
> >>> +    /* Load SSBD if set by the guest. */
> >>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> >>> +    {
> >>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> >>> +        amd_set_ssbd(true);
> >>> +    }
> >>>  }
> >>
> >> ... turn it on here again? IOW wouldn't switching better be isolated to
> >> just svm_ctxt_switch_to(), doing nothing if already in the intended mode?
> > 
> > What if we switch from a HVM vCPU into a PV one?  AFAICT then
> > svm_ctxt_switch_to() won't get called and we would be running the PV
> > guest with the previous HVM domain SSBD selection.
> 
> Would that be a problem? Or in other words: What is the intended behavior
> for PV? PV domains can control SSBD via SPEC_CTRL (only), so all we need
> to guarantee is that we respect their choice there.

If the hardware only supports the non-architectural way (LS_CFG) or
VIRT_SPEC_CTRL to set SSBD, then PV guests won't be able to change the
setting inherited from a previously running HVM guest.  IMO it's fine
to run Xen code with the guest selection of SSBD, but carrying such a
selection (ie: SSBD set) across guest context switches would be too
big a penalty.
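
(Concretely, the sequence I want to avoid, as a hypothetical timeline:

    HVM vCPU A writes VIRT_SPEC_CTRL.SSBD=1  -> LS_CFG bit set
    context switch from A to PV vCPU B       -> bit left set
    B has no VIRT_SPEC_CTRL or LS_CFG access -> stuck with the
                                                performance hit)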

> >>> --- a/xen/arch/x86/msr.c
> >>> +++ b/xen/arch/x86/msr.c
> >>> @@ -697,7 +697,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
> >>>                  msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
> >>>          }
> >>>          else
> >>> +        {
> >>>              msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
> >>> +            if ( v == curr )
> >>> +                /*
> >>> +                 * Propagate the value to hardware, as it won't be context
> >>> +                 * switched on vmentry.
> >>> +                 */
> >>
> >> I have to admit that I find "on vmentry" in the comment misleading: Reading
> >> it I first thought you're still alluding to the old model. Plus I also find
> >> the combination of "context switched" and "on vmentry" problematic, as we
> >> generally mean something else when we say "context switch".
> > 
> > I had a hard time wording this, because of the Xen/guest vs vCPU
> > context switches.
> > 
> > What about:
> > 
> > "Propagate the value to hardware, as it won't we set on guest resume
> > path."
> 
> Sounds better, thanks (with s/we/be/).

Oh, yes, sorry.

> 
> >>> +                goto set_reg;
> >>
> >> It's not clear why you want to use hvm_set_reg() in the first place - the
> >> comment says "propagate to hardware", which would mean wrmsrl() in the
> >> usual case. Here it would mean a direct call to amd_set_ssbd() imo. That
> >> would then also be in line with all other "v == curr" conditionals, none
> >> of which apply to any "goto set_reg". ..._set_reg(), aiui, is meant only
> >> for use in cases where vCPU state needs updating such that proper state
> >> would be loaded later (e.g. during VM entry).
> > 
> > I thought it was better to hide those vendor specific calls in the
> > already existing vendor hooks (set_reg).  I don't mind calling
> > amd_set_ssbd() directly here if that's preferred, it seemed kind of a
> > layering violation when we have vendor specific hooks in place.
> 
> Well, Andrew of course should correct me if I'm wrong, but my understanding
> of the get/set-reg interface is as described. On which grounds I don't see
> any layering violation here - doing the call right here is merely a more
> involved flavor of wrmsrl().

OK, will change, but first we need an agreement on the SSBD context
switch comment.

Thanks, Roger.



* Re: [PATCH for-4.17 v2.1 2/3] amd/virt_ssbd: set SSBD at vCPU context switch
  2022-11-03  8:52           ` Roger Pau Monné
@ 2022-11-03  9:01             ` Jan Beulich
  2022-11-03 10:36               ` Roger Pau Monné
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2022-11-03  9:01 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Henry.Wang, Andrew Cooper, George Dunlap, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 03.11.2022 09:52, Roger Pau Monné wrote:
> On Thu, Nov 03, 2022 at 09:09:41AM +0100, Jan Beulich wrote:
>> On 02.11.2022 18:38, Roger Pau Monné wrote:
>>> On Wed, Nov 02, 2022 at 12:49:17PM +0100, Jan Beulich wrote:
>>>> On 29.10.2022 15:12, Roger Pau Monne wrote:
>>>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>>>> @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
>>>>>  
>>>>>      /* Resume use of ISTs now that the host TR is reinstated. */
>>>>>      enable_each_ist(idt_tables[cpu]);
>>>>> +
>>>>> +    /*
>>>>> +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
>>>>> +     * is already cleared by svm_vmexit_spec_ctrl.
>>>>> +     */
>>>>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
>>>>> +    {
>>>>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
>>>>> +        amd_set_ssbd(false);
>>>>> +    }
>>>>>  }
>>>>
>>>> Aren't you potentially turning off SSBD here just to ...
>>>>
>>>>> @@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
>>>>>  
>>>>>      if ( cpu_has_msr_tsc_aux )
>>>>>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
>>>>> +
>>>>> +    /* Load SSBD if set by the guest. */
>>>>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
>>>>> +    {
>>>>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
>>>>> +        amd_set_ssbd(true);
>>>>> +    }
>>>>>  }
>>>>
>>>> ... turn it on here again? IOW wouldn't switching better be isolated to
>>>> just svm_ctxt_switch_to(), doing nothing if already in the intended mode?
>>>
>>> What if we switch from a HVM vCPU into a PV one?  AFAICT then
>>> svm_ctxt_switch_to() won't get called and we would be running the PV
>>> guest with the previous HVM domain SSBD selection.
>>
>> Would that be a problem? Or in other words: What is the intended behavior
>> for PV? PV domains can control SSBD via SPEC_CTRL (only), so all we need
>> to guarantee is that we respect their choice there.
> 
> If the hardware only supports non-architectural way (LS_CFG) or
> > VIRT_SPEC_CTRL to set SSBD, then PV guests won't be able to change the
> > setting inherited from a previously running HVM guest. IMO it's fine
> > to run Xen code with the guest selection of SSBD, but carrying such a
> > selection (ie: SSBD set) across guest context switches would be too
> > big a penalty.

Hmm, perhaps. The question then is whether it would be better to turn
it off from paravirt_ctxt_switch_to() (which would take care of the
idle domain as well, if we want it off there rather than considering
the idle domain as "Xen context"). Or, yet another option, don't use
*_ctxt_switch_{from,to}() at all but invoke it directly from
__context_switch().
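
A rough sketch of the first alternative, purely for illustration
(assuming amd_set_ssbd() may be called from this context, and using a
hypothetical per-CPU flag, here called ssbd_active, to track the last
value written and avoid redundant updates):

    static void cf_check paravirt_ctxt_switch_to(struct vcpu *v)
    {
        ...
        /* Drop a leftover HVM guest SSBD selection so that PV (and
         * idle) vCPUs never inherit it. */
        if ( this_cpu(ssbd_active) )
            amd_set_ssbd(false);
    }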

Jan



* Re: [PATCH for-4.17 v2.1 2/3] amd/virt_ssbd: set SSBD at vCPU context switch
  2022-11-03  9:01             ` Jan Beulich
@ 2022-11-03 10:36               ` Roger Pau Monné
  2022-11-03 14:05                 ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Roger Pau Monné @ 2022-11-03 10:36 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Henry.Wang, Andrew Cooper, George Dunlap, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On Thu, Nov 03, 2022 at 10:01:49AM +0100, Jan Beulich wrote:
> On 03.11.2022 09:52, Roger Pau Monné wrote:
> > On Thu, Nov 03, 2022 at 09:09:41AM +0100, Jan Beulich wrote:
> >> On 02.11.2022 18:38, Roger Pau Monné wrote:
> >>> On Wed, Nov 02, 2022 at 12:49:17PM +0100, Jan Beulich wrote:
> >>>> On 29.10.2022 15:12, Roger Pau Monne wrote:
> >>>>> --- a/xen/arch/x86/hvm/svm/svm.c
> >>>>> +++ b/xen/arch/x86/hvm/svm/svm.c
> >>>>> @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
> >>>>>  
> >>>>>      /* Resume use of ISTs now that the host TR is reinstated. */
> >>>>>      enable_each_ist(idt_tables[cpu]);
> >>>>> +
> >>>>> +    /*
> >>>>> +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
> >>>>> +     * is already cleared by svm_vmexit_spec_ctrl.
> >>>>> +     */
> >>>>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> >>>>> +    {
> >>>>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> >>>>> +        amd_set_ssbd(false);
> >>>>> +    }
> >>>>>  }
> >>>>
> >>>> Aren't you potentially turning off SSBD here just to ...
> >>>>
> >>>>> @@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
> >>>>>  
> >>>>>      if ( cpu_has_msr_tsc_aux )
> >>>>>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
> >>>>> +
> >>>>> +    /* Load SSBD if set by the guest. */
> >>>>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
> >>>>> +    {
> >>>>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
> >>>>> +        amd_set_ssbd(true);
> >>>>> +    }
> >>>>>  }
> >>>>
> >>>> ... turn it on here again? IOW wouldn't switching better be isolated to
> >>>> just svm_ctxt_switch_to(), doing nothing if already in the intended mode?
> >>>
> >>> What if we switch from a HVM vCPU into a PV one?  AFAICT then
> >>> svm_ctxt_switch_to() won't get called and we would be running the PV
> >>> guest with the previous HVM domain SSBD selection.
> >>
> >> Would that be a problem? Or in other words: What is the intended behavior
> >> for PV? PV domains can control SSBD via SPEC_CTRL (only), so all we need
> >> to guarantee is that we respect their choice there.
> > 
> > If the hardware only supports non-architectural way (LS_CFG) or
> > VIRT_SPEC_CTRL to set SSBD, then PV guests won't be able to change the
> > setting inherited from a previously running HVM guest. IMO it's fine
> > to run Xen code with the guest selection of SSBD, but carrying such a
> > selection (ie: SSBD set) across guest context switches would be too
> > big a penalty.
> 
> Hmm, perhaps. The question then is whether it would be better to turn
> it off from paravirt_ctxt_switch_to() (which would take care of the
> idle domain as well, if we want it off there rather than considering
> the idle domain as "Xen context"). Or, yet another option, don't use
> *_ctxt_switch_{from,to}() at all but invoke it directly from
> __context_switch().

I consider it fine to run the idle domain with the guest SSBD
selection, or else switching to/from the idle domain could cause
toggling of SSBD, which is an unneeded penalty.

If there's a specific issue that needs dealing with I'm happy to
adjust; otherwise I think the proposed approach is an acceptable
compromise to avoid excessive toggling of SSBD when not using
SPEC_CTRL.

Thanks, Roger.



* Re: [PATCH for-4.17 v2.1 2/3] amd/virt_ssbd: set SSBD at vCPU context switch
  2022-11-03 10:36               ` Roger Pau Monné
@ 2022-11-03 14:05                 ` Jan Beulich
  0 siblings, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2022-11-03 14:05 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Henry.Wang, Andrew Cooper, George Dunlap, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 03.11.2022 11:36, Roger Pau Monné wrote:
> On Thu, Nov 03, 2022 at 10:01:49AM +0100, Jan Beulich wrote:
>> On 03.11.2022 09:52, Roger Pau Monné wrote:
>>> On Thu, Nov 03, 2022 at 09:09:41AM +0100, Jan Beulich wrote:
>>>> On 02.11.2022 18:38, Roger Pau Monné wrote:
>>>>> On Wed, Nov 02, 2022 at 12:49:17PM +0100, Jan Beulich wrote:
>>>>>> On 29.10.2022 15:12, Roger Pau Monne wrote:
>>>>>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>>>>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>>>>>> @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
>>>>>>>  
>>>>>>>      /* Resume use of ISTs now that the host TR is reinstated. */
>>>>>>>      enable_each_ist(idt_tables[cpu]);
>>>>>>> +
>>>>>>> +    /*
>>>>>>> +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
>>>>>>> +     * is already cleared by svm_vmexit_spec_ctrl.
>>>>>>> +     */
>>>>>>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
>>>>>>> +    {
>>>>>>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
>>>>>>> +        amd_set_ssbd(false);
>>>>>>> +    }
>>>>>>>  }
>>>>>>
>>>>>> Aren't you potentially turning off SSBD here just to ...
>>>>>>
>>>>>>> @@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
>>>>>>>  
>>>>>>>      if ( cpu_has_msr_tsc_aux )
>>>>>>>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
>>>>>>> +
>>>>>>> +    /* Load SSBD if set by the guest. */
>>>>>>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
>>>>>>> +    {
>>>>>>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
>>>>>>> +        amd_set_ssbd(true);
>>>>>>> +    }
>>>>>>>  }
>>>>>>
>>>>>> ... turn it on here again? IOW wouldn't switching better be isolated to
>>>>>> just svm_ctxt_switch_to(), doing nothing if already in the intended mode?
>>>>>
>>>>> What if we switch from a HVM vCPU into a PV one?  AFAICT then
>>>>> svm_ctxt_switch_to() won't get called and we would be running the PV
>>>>> guest with the previous HVM domain SSBD selection.
>>>>
>>>> Would that be a problem? Or in other words: What is the intended behavior
>>>> for PV? PV domains can control SSBD via SPEC_CTRL (only), so all we need
>>>> to guarantee is that we respect their choice there.
>>>
>>> If the hardware only supports non-architectural way (LS_CFG) or
>>> VIRT_SPEC_CTRL to set SSBD, then PV guests won't be able to change the
>>> setting inherited from a previously running HVM guest. IMO it's fine
>>> to run Xen code with the guest selection of SSBD, but carrying such a
>>> selection (ie: SSBD set) across guest context switches would be too
>>> big a penalty.
>>
>> Hmm, perhaps. The question then is whether it would be better to turn
>> it off from paravirt_ctxt_switch_to() (which would take care of the
>> idle domain as well, if we want it off there rather than considering
>> the idle domain as "Xen context"). Or, yet another option, don't use
>> *_ctxt_switch_{from,to}() at all but invoke it directly from
>> __context_switch().
> 
> I consider it fine to run the idle domain with the guest SSBD
> selection, or else switching to/from the idle domain could cause
> toggling of SSBD, which is an unneeded penalty.
> 
> If there's a specific issue that needs dealing with I'm happy to
> adjust; otherwise I think the proposed approach is an acceptable
> compromise to avoid excessive toggling of SSBD when not using
> SPEC_CTRL.

Well, perhaps. What I was suggesting would further limit the toggling,
but I'm not going to insist on you going that route.

Jan


