* [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support
@ 2018-12-03 16:18 Andrew Cooper
  2018-12-03 16:18 ` [PATCH 1/9] x86/spec-ctrl: Drop the bti= command line option Andrew Cooper
                   ` (9 more replies)
  0 siblings, 10 replies; 51+ messages in thread
From: Andrew Cooper @ 2018-12-03 16:18 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Wei Liu, Brian Woods, Jan Beulich, Roger Pau Monné

This is a lingering TODO item from XSA-263.  It adds support for AMD's
MSR_VIRT_SPEC_CTRL interface, and changes Xen's "boot time global" SSBD
setting into a per-vcpu setting.

This can be found on:
  git://xenbits.xen.org/people/andrewcoop/xen.git xen-virt-spec-ctrl-v1

The start of the series is some cleanup.  It then teaches Xen to recognise the
available interfaces (including MSR_VIRT_SPEC_CTRL from a hypervisor), then
how to safely context switch the per-core LS_CFG on Fam17h, and finally to
expose support to guests.

I've got some further MSR work coming to fix the default-leakiness of MSRs in
this range, because a guest becomes unsafe to migrate as soon as it reads any
of the pipeline control MSRs.

Andrew Cooper (9):
  x86/spec-ctrl: Drop the bti= command line option
  x86/cpuid: Drop the synthetic X86_FEATURE_XEN_IBPB
  x86/cpuid: Extend the cpuid= command line option to support all named features
  x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support
  x86/amd: Probe for legacy SSBD interfaces on boot
  x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h
  x86/amd: Support context switching legacy SSBD interface
  x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests
  x86/amd: Offer MSR_VIRT_SPEC_CTRL to guests

 docs/misc/xen-command-line.markdown         |  47 ++----
 tools/libxl/libxl_cpuid.c                   |   5 +
 tools/misc/xen-cpuid.c                      |   4 +
 xen/arch/x86/cpu/amd.c                      | 239 +++++++++++++++++++++++++---
 xen/arch/x86/cpuid.c                        | 101 +++++++++---
 xen/arch/x86/domain.c                       |   2 +
 xen/arch/x86/domctl.c                       |   1 +
 xen/arch/x86/hvm/hvm.c                      |   1 +
 xen/arch/x86/msr.c                          |  17 ++
 xen/arch/x86/smpboot.c                      |   3 +
 xen/arch/x86/spec_ctrl.c                    |  67 +-------
 xen/include/asm-x86/cpufeature.h            |   6 +
 xen/include/asm-x86/cpufeatures.h           |   2 +-
 xen/include/asm-x86/msr-index.h             |   3 +
 xen/include/asm-x86/msr.h                   |   9 ++
 xen/include/asm-x86/processor.h             |   2 +
 xen/include/public/arch-x86/cpufeatureset.h |   5 +
 xen/tools/gen-cpuid.py                      |  22 ++-
 18 files changed, 393 insertions(+), 143 deletions(-)

-- 
2.1.4



* [PATCH 1/9] x86/spec-ctrl: Drop the bti= command line option
  2018-12-03 16:18 [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Andrew Cooper
@ 2018-12-03 16:18 ` Andrew Cooper
  2018-12-04 16:19   ` Jan Beulich
  2018-12-03 16:18 ` [PATCH 2/9] x86/cpuid: Drop the synthetic X86_FEATURE_XEN_IBPB Andrew Cooper
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-03 16:18 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

bti= was introduced with the original Spectre fixes (Jan 2018), but by the
time Speculative Store Bypass came along (May 2018), it was superseded by the
more generic spec-ctrl=.

Since then, we've had LazyFPU (June 2018) and L1TF (August 2018), which means
no one will be using the option.  Remove it entirely - anyone who happens to
be using it accidentally might now spot Xen complaining about an option it
doesn't understand.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 docs/misc/xen-command-line.markdown | 34 ----------------------
 xen/arch/x86/spec_ctrl.c            | 57 -------------------------------------
 2 files changed, 91 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 9028bcd..764f33a 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -250,40 +250,6 @@ and not running softirqs. Reduce this if softirqs are not being run frequently
 enough. Setting this to a high value may cause boot failure, particularly if
 the NMI watchdog is also enabled.
 
-### bti (x86)
-> `= List of [ <bool>, thunk=retpoline|lfence|jmp, ibrs=<bool>, ibpb=<bool>, rsb=<bool>, rsb_{vmexit,native}=<bool> ]`
-
-**WARNING: This command line option is deprecated, and superseded by
-_spec-ctrl=_ - using both options in combination is undefined.**
-
-Branch Target Injection controls.  By default, Xen will pick the most
-appropriate BTI mitigations based on compiled in support, loaded microcode,
-and hardware details.
-
-**WARNING: Any use of this option may interfere with heuristics.  Use with
-extreme care.**
-
-A (negative) boolean value can be specified to turn off all mitigations.
-(Use of a positive boolean value is invalid.)
-
-If Xen was compiled with INDIRECT\_THUNK support, `thunk=` can be used to
-select which of the thunks gets patched into the `__x86_indirect_thunk_%reg`
-locations.  The default thunk is `retpoline` (generally preferred for Intel
-hardware), with the alternatives being `jmp` (a `jmp *%reg` gadget, minimal
-overhead), and `lfence` (an `lfence; jmp *%reg` gadget, preferred for AMD).
-
-On hardware supporting IBRS, the `ibrs=` option can be used to force or
-prevent Xen using the feature itself.  If Xen is not using IBRS itself,
-functionality is still set up so IBRS can be virtualised for guests.
-
-On hardware supporting IBPB, the `ibpb=` option can be used to prevent Xen
-from issuing Branch Prediction Barriers on vcpu context switches.
-
-The `rsb=`, `rsb_vmexit=` and `rsb_native=` options can be used to control
-when the RSB gets overwritten.  The former control all RSB overwriting, while
-the latter two can be used to fine tune overwriting on from HVM context, and
-an entry from a native (PV or Xen) context.
-
 ### clocksource (x86)
 > `= pit | hpet | acpi | tsc`
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index b5e77bd..a36bcef 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -59,63 +59,6 @@ paddr_t __read_mostly l1tf_addr_mask, __read_mostly l1tf_safe_maddr;
 static bool __initdata cpu_has_bug_l1tf;
 static unsigned int __initdata l1d_maxphysaddr;
 
-static int __init parse_bti(const char *s)
-{
-    const char *ss;
-    int val, rc = 0;
-
-    do {
-        ss = strchr(s, ',');
-        if ( !ss )
-            ss = strchr(s, '\0');
-
-        val = parse_bool(s, ss);
-        if ( !val )
-        {
-            opt_thunk = THUNK_JMP;
-            opt_ibrs = 0;
-            opt_ibpb = false;
-            opt_rsb_pv = false;
-            opt_rsb_hvm = false;
-        }
-        else if ( val > 0 )
-            rc = -EINVAL;
-        else if ( !strncmp(s, "thunk=", 6) )
-        {
-            s += 6;
-
-            if ( !strncmp(s, "retpoline", ss - s) )
-                opt_thunk = THUNK_RETPOLINE;
-            else if ( !strncmp(s, "lfence", ss - s) )
-                opt_thunk = THUNK_LFENCE;
-            else if ( !strncmp(s, "jmp", ss - s) )
-                opt_thunk = THUNK_JMP;
-            else
-                rc = -EINVAL;
-        }
-        else if ( (val = parse_boolean("ibrs", s, ss)) >= 0 )
-            opt_ibrs = val;
-        else if ( (val = parse_boolean("ibpb", s, ss)) >= 0 )
-            opt_ibpb = val;
-        else if ( (val = parse_boolean("rsb_native", s, ss)) >= 0 )
-            opt_rsb_pv = val;
-        else if ( (val = parse_boolean("rsb_vmexit", s, ss)) >= 0 )
-            opt_rsb_hvm = val;
-        else if ( (val = parse_boolean("rsb", s, ss)) >= 0 )
-        {
-            opt_rsb_pv = val;
-            opt_rsb_hvm = val;
-        }
-        else
-            rc = -EINVAL;
-
-        s = ss + 1;
-    } while ( *ss );
-
-    return rc;
-}
-custom_param("bti", parse_bti);
-
 static int __init parse_spec_ctrl(const char *s)
 {
     const char *ss;
-- 
2.1.4



* [PATCH 2/9] x86/cpuid: Drop the synthetic X86_FEATURE_XEN_IBPB
  2018-12-03 16:18 [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Andrew Cooper
  2018-12-03 16:18 ` [PATCH 1/9] x86/spec-ctrl: Drop the bti= command line option Andrew Cooper
@ 2018-12-03 16:18 ` Andrew Cooper
  2018-12-04 16:21   ` Jan Beulich
  2018-12-03 16:18 ` [PATCH 3/9] x86/cpuid: Extend the cpuid= command line option to support all named features Andrew Cooper
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-03 16:18 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

This appears to be a vestigial remnant of an old version of the
XSA-254/Spectre series, and has never been used.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/include/asm-x86/cpufeatures.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index f2a1fa1..0c06274 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -25,7 +25,6 @@ XEN_CPUFEATURE(XEN_SMAP,        (FSCAPINTS+0)*32+11) /* SMAP gets used by Xen it
 XEN_CPUFEATURE(LFENCE_DISPATCH, (FSCAPINTS+0)*32+12) /* lfence set as Dispatch Serialising */
 XEN_CPUFEATURE(IND_THUNK_LFENCE,(FSCAPINTS+0)*32+13) /* Use IND_THUNK_LFENCE */
 XEN_CPUFEATURE(IND_THUNK_JMP,   (FSCAPINTS+0)*32+14) /* Use IND_THUNK_JMP */
-XEN_CPUFEATURE(XEN_IBPB,        (FSCAPINTS+0)*32+15) /* IBRSB || IBPB */
 XEN_CPUFEATURE(SC_MSR_PV,       (FSCAPINTS+0)*32+16) /* MSR_SPEC_CTRL used by Xen for PV */
 XEN_CPUFEATURE(SC_MSR_HVM,      (FSCAPINTS+0)*32+17) /* MSR_SPEC_CTRL used by Xen for HVM */
 XEN_CPUFEATURE(SC_RSB_PV,       (FSCAPINTS+0)*32+18) /* RSB overwrite needed for PV */
-- 
2.1.4



* [PATCH 3/9] x86/cpuid: Extend the cpuid= command line option to support all named features
  2018-12-03 16:18 [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Andrew Cooper
  2018-12-03 16:18 ` [PATCH 1/9] x86/spec-ctrl: Drop the bti= command line option Andrew Cooper
  2018-12-03 16:18 ` [PATCH 2/9] x86/cpuid: Drop the synthetic X86_FEATURE_XEN_IBPB Andrew Cooper
@ 2018-12-03 16:18 ` Andrew Cooper
  2018-12-04 16:28   ` Jan Beulich
  2018-12-06 12:52   ` Wei Liu
  2018-12-03 16:18 ` [PATCH 4/9] x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support Andrew Cooper
                   ` (6 subsequent siblings)
  9 siblings, 2 replies; 51+ messages in thread
From: Andrew Cooper @ 2018-12-03 16:18 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

For gen-cpuid.py, fix a comment describing self.names, and generate the
reverse mapping in self.values.  Write out INIT_FEATURE_NAMES which maps a
string name to a bit position.

For parse_xen_cpuid(), introduce a slightly fuzzy strcmp() to accept changes
in punctuation, and perform a binary search over INIT_FEATURE_NAMES.
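
As an illustration only (not part of the patch), the generated table is a
sorted array of name/bit pairs, with names lowercased and '_' replaced by '-'
so that the binary search and the fuzzy comparison line up; bit positions
follow the usual word*32+bit scheme from cpufeatureset.h:

  /* Illustrative excerpt of the kind of table gen-cpuid.py emits. */
  #define INIT_FEATURE_NAMES { \
      { "ibpb",      268U }, \
      { "ibrsb",     314U }, \
      { "l1d-flush", 316U }, \
      { "ssbd",      319U }, \
      { "stibp",     315U }, \
  }

With the fuzzy comparator, command lines such as `cpuid=no-stibp` and
`cpuid=stibp=0,l1d-flush=0` both resolve to the same feature bits.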

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

Slightly RFC, because I'm not entirely certain if this is a good idea or not.
---
 xen/arch/x86/cpuid.c   | 91 ++++++++++++++++++++++++++++++++++++--------------
 xen/tools/gen-cpuid.py | 22 ++++++++++--
 2 files changed, 86 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 0591a7d..eb86a86 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -18,9 +18,34 @@ static const uint32_t hvm_shadow_featuremask[] = INIT_HVM_SHADOW_FEATURES;
 static const uint32_t hvm_hap_featuremask[] = INIT_HVM_HAP_FEATURES;
 static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
 
+/*
+ * Works like strcmp(), but customised specifically for this use case.  'name'
+ * is a NUL-terminated string.  's' is considered to match 'name' if the NUL
+ * terminator of 'name' matches punctuation in 's'.
+ */
+static int __init cpuid_name_cmp(const char *s, const char *name)
+{
+    int res;
+
+    /* Basic strcmp(). */
+    for ( ; *name != '\0'; ++name, ++s )
+        if ( (res = (*s - *name)) != 0 )
+            break;
+
+    /* If a failure, but only because of '=' or ',', override to success. */
+    if ( res && *name == '\0' && (*s == '=' || *s == ',') )
+        res = 0;
+
+    return res;
+}
+
 static int __init parse_xen_cpuid(const char *s)
 {
-    const char *ss;
+    static const struct feature {
+        const char *name;
+        unsigned int bit;
+    } features[] __initconst = INIT_FEATURE_NAMES, *lhs, *mid, *rhs;
+    const char *ss, *feat;
     int val, rc = 0;
 
     do {
@@ -28,32 +53,48 @@ static int __init parse_xen_cpuid(const char *s)
         if ( !ss )
             ss = strchr(s, '\0');
 
-        if ( (val = parse_boolean("ibpb", s, ss)) >= 0 )
-        {
-            if ( !val )
-                setup_clear_cpu_cap(X86_FEATURE_IBPB);
-        }
-        else if ( (val = parse_boolean("ibrsb", s, ss)) >= 0 )
-        {
-            if ( !val )
-                setup_clear_cpu_cap(X86_FEATURE_IBRSB);
-        }
-        else if ( (val = parse_boolean("stibp", s, ss)) >= 0 )
-        {
-            if ( !val )
-                setup_clear_cpu_cap(X86_FEATURE_STIBP);
-        }
-        else if ( (val = parse_boolean("l1d-flush", s, ss)) >= 0 )
-        {
-            if ( !val )
-                setup_clear_cpu_cap(X86_FEATURE_L1D_FLUSH);
-        }
-        else if ( (val = parse_boolean("ssbd", s, ss)) >= 0 )
+        /* Skip the 'no-' prefix for name comparisons. */
+        feat = s;
+        if ( strncmp(s, "no-", 3) == 0 )
+            feat += 3;
+
+        /* (Re)initialise lhs and rhs for binary search. */
+        lhs = features;
+        rhs = features + ARRAY_SIZE(features);
+
+        while ( lhs < rhs )
         {
-            if ( !val )
-                setup_clear_cpu_cap(X86_FEATURE_SSBD);
+            int res;
+
+            mid = lhs + (rhs - lhs) / 2;
+            res = cpuid_name_cmp(feat, mid->name);
+
+            if ( res < 0 )
+            {
+                rhs = mid;
+                continue;
+            }
+            if ( res > 0 )
+            {
+                lhs = mid + 1;
+                continue;
+            }
+
+            if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
+            {
+                if ( !val )
+                    setup_clear_cpu_cap(mid->bit);
+                mid = NULL;
+            }
+
+            break;
         }
-        else
+
+        /*
+         * Mid being non-NULL means that the name search failed, or that
+         * parse_boolean() failed.
+         */
+        if ( mid )
             rc = -EINVAL;
 
         s = ss + 1;
diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index 27569bd..002de6f 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -19,7 +19,8 @@ class State(object):
         self.output = open_file_or_fd(output, "w", 2)
 
         # State parsed from input
-        self.names = {} # Name => value mapping
+        self.names = {}  # Value => Name mapping
+        self.values = {} # Name => Value mapping
         self.raw_special = set()
         self.raw_pv = set()
         self.raw_hvm_shadow = set()
@@ -76,8 +77,9 @@ def parse_definitions(state):
             this_name = name
         setattr(this, this_name, val)
 
-        # Construct a reverse mapping of value to name
+        # Construct forward and reverse mappings between name and value
         state.names[val] = name
+        state.values[name.lower().replace("_", "-")] = val
 
         for a in attr:
 
@@ -393,6 +395,22 @@ def write_results(state):
     state.output.write(
 """}
 
+#define INIT_FEATURE_NAMES { \\
+""")
+
+    try:
+        _tmp = state.values.iteritems()
+    except AttributeError:
+        _tmp = state.values.items()
+
+    for name, bit in sorted(_tmp):
+        state.output.write(
+            '    { "%s", %sU },\\\n' % (name, bit)
+            )
+
+    state.output.write(
+"""}
+
 """)
 
     for idx, text in enumerate(state.bitfields):
-- 
2.1.4



* [PATCH 4/9] x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support
  2018-12-03 16:18 [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Andrew Cooper
                   ` (2 preceding siblings ...)
  2018-12-03 16:18 ` [PATCH 3/9] x86/cpuid: Extend the cpuid= command line option to support all named features Andrew Cooper
@ 2018-12-03 16:18 ` Andrew Cooper
  2018-12-04 16:06   ` Woods, Brian
  2018-12-05 16:39   ` Jan Beulich
  2018-12-03 16:18 ` [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot Andrew Cooper
                   ` (5 subsequent siblings)
  9 siblings, 2 replies; 51+ messages in thread
From: Andrew Cooper @ 2018-12-03 16:18 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Wei Liu, Brian Woods, Jan Beulich, Roger Pau Monné

At the time of writing, the spec is available from:

  https://developer.amd.com/wp-content/resources/124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf

Future hardware (Zen v2) is expected to have hardware MSR_SPEC_CTRL support,
including SPEC_CTRL.SSBD, with the expectation that this will be directly
passed through to guests for performance.

On currently released hardware, the only mechanism available is the legacy
LS_CFG option, and this is very expensive to use.  Furthermore, emulating
MSR_SPEC_CTRL via interception is prohibitively expensive, as certain OSes use
the write-discard flexibility to simplify their entry/exit logic.

As an alternative, MSR_VIRT_SPEC_CTRL is specified as an architectural control
(with semantics equivalent to MSR_SPEC_CTRL) which is provided by the
hypervisor.  This abstracts away the model-specific details of the LS_CFG
mechanism, which allows migration safety to be retained.
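
As a sketch of how a guest consumes the interface (helper names assumed;
wrmsrl() stands in for whichever MSR-write helper the guest kernel uses):

  /* Minimal sketch of a guest asserting SSBD via the paravirtual MSR,
   * after having checked CPUID 0x80000008.EBX[25] (VIRT_SC_SSBD). */
  #define MSR_VIRT_SPEC_CTRL  0xc001011f
  #define SPEC_CTRL_SSBD      (1u << 2)  /* same bit layout as MSR_SPEC_CTRL */

  static void guest_set_ssbd(bool on)
  {
      /* Unknown bits are write-discard, so writing just SSBD is fine. */
      wrmsrl(MSR_VIRT_SPEC_CTRL, on ? SPEC_CTRL_SSBD : 0);
  }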

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Brian Woods <brian.woods@amd.com>
---
 tools/libxl/libxl_cpuid.c                   | 5 +++++
 tools/misc/xen-cpuid.c                      | 4 ++++
 xen/arch/x86/spec_ctrl.c                    | 7 ++++++-
 xen/include/asm-x86/msr-index.h             | 3 +++
 xen/include/public/arch-x86/cpufeatureset.h | 5 +++++
 5 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_cpuid.c b/tools/libxl/libxl_cpuid.c
index 52e16c2..51eb41c 100644
--- a/tools/libxl/libxl_cpuid.c
+++ b/tools/libxl/libxl_cpuid.c
@@ -245,6 +245,11 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
         {"invtsc",       0x80000007, NA, CPUID_REG_EDX,  8,  1},
 
         {"ibpb",         0x80000008, NA, CPUID_REG_EBX, 12,  1},
+        {"amd_ibrs",     0x80000008, NA, CPUID_REG_EBX, 14,  1},
+        {"amd_stibp",    0x80000008, NA, CPUID_REG_EBX, 15,  1},
+        {"amd_ssbd",     0x80000008, NA, CPUID_REG_EBX, 24,  1},
+        {"virt_sc_ssbd", 0x80000008, NA, CPUID_REG_EBX, 25,  1},
+        {"amd_ssb_no",   0x80000008, NA, CPUID_REG_EBX, 26,  1},
         {"nc",           0x80000008, NA, CPUID_REG_ECX,  0,  8},
         {"apicidsize",   0x80000008, NA, CPUID_REG_ECX, 12,  4},
 
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index 6e7ca8b..efb789e 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -140,6 +140,10 @@ static const char *str_e8b[32] =
     [ 0] = "clzero",
 
     [12] = "ibpb",
+    [14] = "amd_ibrs",      [15] = "amd_stibp",
+
+    [24] = "amd_ssbd",      [25] = "virt_sc_ssbd",
+    [26] = "amd_ssb_no",
 };
 
 static const char *str_7d0[32] =
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index a36bcef..af92866 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -224,12 +224,17 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
     printk("Speculative mitigation facilities:\n");
 
     /* Hardware features which pertain to speculative mitigations. */
-    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB)) ? " IBRS/IBPB" : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_STIBP)) ? " STIBP"     : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_L1D_FLUSH)) ? " L1D_FLUSH" : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_SSBD))  ? " SSBD"      : "",
            (e8b  & cpufeat_mask(X86_FEATURE_IBPB))  ? " IBPB"      : "",
+           (e8b  & cpufeat_mask(X86_FEATURE_AMD_IBRS)) ? " IBRS"   : "",
+           (e8b  & cpufeat_mask(X86_FEATURE_AMD_STIBP)) ? " AMD_STIBP" : "",
+           (e8b  & cpufeat_mask(X86_FEATURE_AMD_SSBD)) ? " AMD_SSBD" : "",
+           (e8b  & cpufeat_mask(X86_FEATURE_VIRT_SC_SSBD)) ? " VIRT_SSBD" : "",
+           (e8b  & cpufeat_mask(X86_FEATURE_AMD_SSB_NO)) ? " SSB_NO" : "",
            (caps & ARCH_CAPABILITIES_IBRS_ALL)      ? " IBRS_ALL"  : "",
            (caps & ARCH_CAPABILITIES_RDCL_NO)       ? " RDCL_NO"   : "",
            (caps & ARCH_CAPS_RSBA)                  ? " RSBA"      : "",
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 3faed27..56346d7 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -202,6 +202,9 @@
 #define MSR_K8_VM_CR			0xc0010114
 #define MSR_K8_VM_HSAVE_PA		0xc0010117
 
+/* Bit layout expected to match MSR_SPEC_CTRL */
+#define MSR_VIRT_SPEC_CTRL		0xc001011f
+
 #define MSR_AMD_FAM15H_EVNTSEL0		0xc0010200
 #define MSR_AMD_FAM15H_PERFCTR0		0xc0010201
 #define MSR_AMD_FAM15H_EVNTSEL1		0xc0010202
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 6c82816..8f80195 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -238,6 +238,11 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /*   APERF/MPERF Read Only interface */
 /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
 XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
 XEN_CPUFEATURE(IBPB,          8*32+12) /*A  IBPB support only (no IBRS, used by AMD) */
+XEN_CPUFEATURE(AMD_IBRS,      8*32+14) /*   MSR_SPEC_CTRL.IBRS available */
+XEN_CPUFEATURE(AMD_STIBP,     8*32+15) /*   MSR_SPEC_CTRL.STIBP available */
+XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*   MSR_SPEC_CTRL.SSBD available */
+XEN_CPUFEATURE(VIRT_SC_SSBD,  8*32+25) /*   MSR_VIRT_SPEC_CTRL.SSBD available. */
+XEN_CPUFEATURE(AMD_SSB_NO,    8*32+26) /*   Hardware not vulnerable to SSB */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0.edx, word 9 */
 XEN_CPUFEATURE(AVX512_4VNNIW, 9*32+ 2) /*A  AVX512 Neural Network Instructions */
-- 
2.1.4



* [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot
  2018-12-03 16:18 [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Andrew Cooper
                   ` (3 preceding siblings ...)
  2018-12-03 16:18 ` [PATCH 4/9] x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support Andrew Cooper
@ 2018-12-03 16:18 ` Andrew Cooper
  2018-12-04 16:15   ` Woods, Brian
                     ` (2 more replies)
  2018-12-03 16:18 ` [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h Andrew Cooper
                   ` (4 subsequent siblings)
  9 siblings, 3 replies; 51+ messages in thread
From: Andrew Cooper @ 2018-12-03 16:18 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Wei Liu, Brian Woods, Jan Beulich, Roger Pau Monné

Introduce a new synthetic LEGACY_SSBD feature and set it if we find
VIRT_SPEC_CTRL offered by our hypervisor, or if we find a working bit in an
LS_CFG register.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Brian Woods <brian.woods@amd.com>
---
 xen/arch/x86/cpu/amd.c            | 59 +++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/spec_ctrl.c          |  3 +-
 xen/include/asm-x86/cpufeature.h  |  6 ++++
 xen/include/asm-x86/cpufeatures.h |  1 +
 4 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index c790416..897c060 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -362,6 +362,62 @@ static void __init noinline amd_init_levelling(void)
 		ctxt_switch_masking = amd_ctxt_switch_masking;
 }
 
+/* Cached once on boot. */
+static uint64_t __read_mostly ls_cfg_base, __read_mostly ls_cfg_ssbd_mask;
+
+static void __init noinline amd_probe_legacy_ssbd(void)
+{
+	uint64_t new;
+
+	/*
+	 * Search for mechanisms of controlling Memory Disambiguation.
+	 *
+	 * If the CPU reports that it is fixed, there is nothing to do.  If we
+	 * have an architectural MSR_SPEC_CTRL.SSBD control, leave everything
+	 * to the common code.
+	 */
+	if (cpu_has_amd_ssb_no || cpu_has_amd_ssbd)
+		return;
+
+	/* Use MSR_VIRT_SPEC_CTRL if our hypervisor offers it. */
+	if (cpu_has_virt_sc_ssbd) {
+		setup_force_cpu_cap(X86_FEATURE_LEGACY_SSBD);
+		return;
+	}
+
+	/* Probe for LS_CFG settings. */
+	switch (boot_cpu_data.x86) {
+	default: return; /* No known LS_CFG settings. */
+	case 0x15: ls_cfg_ssbd_mask = 1ull << 54; break;
+	case 0x16: ls_cfg_ssbd_mask = 1ull << 33; break;
+	case 0x17: ls_cfg_ssbd_mask = 1ull << 10; break;
+	}
+
+	/*
+	 * MSR_AMD64_LS_CFG isn't architectural, and may not be virtualised
+	 * fully.  Check that we can actually flip the bit before concluding
+	 * that LS_CFG is available for use.
+	 */
+	if (rdmsr_safe(MSR_AMD64_LS_CFG, ls_cfg_base) ||
+	    wrmsr_safe(MSR_AMD64_LS_CFG, ls_cfg_base ^ ls_cfg_ssbd_mask))
+		return;
+
+	rdmsrl(MSR_AMD64_LS_CFG, new);
+	if (new != (ls_cfg_base ^ ls_cfg_ssbd_mask))
+		return;
+
+	/*
+	 * Leave ls_cfg_base with the bit clear.  This is Xen's overall
+	 * default, and it simplifies the context switch logic.
+	 */
+	ls_cfg_base &= ~ls_cfg_ssbd_mask;
+	if ((new != ls_cfg_base) && wrmsr_safe(MSR_AMD64_LS_CFG, ls_cfg_base))
+		return;
+
+	/* LS_CFG appears to work fully.  Let's choose to use it. */
+	setup_force_cpu_cap(X86_FEATURE_LEGACY_SSBD);
+}
+
 /*
  * Check for the presence of an AMD erratum. Arguments are defined in amd.h 
  * for each known erratum. Return 1 if erratum is found.
@@ -603,6 +659,9 @@ static void init_amd(struct cpuinfo_x86 *c)
 				  c->x86_capability);
 	}
 
+	if (c == &boot_cpu_data)
+		amd_probe_legacy_ssbd();
+
 	/*
 	 * If the user has explicitly chosen to disable Memory Disambiguation
 	 * to mitigiate Speculative Store Bypass, poke the appropriate MSR.
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index af92866..40a71e2 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -260,7 +260,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            thunk == THUNK_JMP       ? "JMP" : "?",
            !boot_cpu_has(X86_FEATURE_IBRSB)          ? "No" :
            (default_xen_spec_ctrl & SPEC_CTRL_IBRS)  ? "IBRS+" :  "IBRS-",
-           !boot_cpu_has(X86_FEATURE_SSBD)           ? "" :
+           !boot_cpu_has(X86_FEATURE_SSBD)           ?
+           cpu_has_legacy_ssbd                       ? " LEGACY_SSBD" : "" :
            (default_xen_spec_ctrl & SPEC_CTRL_SSBD)  ? " SSBD+" : " SSBD-",
            opt_ibpb                                  ? " IBPB"  : "",
            opt_l1d_flush                             ? " L1D_FLUSH" : "");
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index c2b0f6a..2923003 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -110,11 +110,17 @@
 /* CPUID level 0x80000007.edx */
 #define cpu_has_itsc            boot_cpu_has(X86_FEATURE_ITSC)
 
+/* CPUID level 0x80000008.ebx */
+#define cpu_has_amd_ssbd        boot_cpu_has(X86_FEATURE_AMD_SSBD)
+#define cpu_has_virt_sc_ssbd    boot_cpu_has(X86_FEATURE_VIRT_SC_SSBD)
+#define cpu_has_amd_ssb_no      boot_cpu_has(X86_FEATURE_AMD_SSB_NO)
+
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
 #define cpu_has_cpuid_faulting  boot_cpu_has(X86_FEATURE_CPUID_FAULTING)
 #define cpu_has_aperfmperf      boot_cpu_has(X86_FEATURE_APERFMPERF)
 #define cpu_has_lfence_dispatch boot_cpu_has(X86_FEATURE_LFENCE_DISPATCH)
+#define cpu_has_legacy_ssbd     boot_cpu_has(X86_FEATURE_LEGACY_SSBD)
 #define cpu_has_xen_lbr         boot_cpu_has(X86_FEATURE_XEN_LBR)
 
 enum _cache_type {
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index 0c06274..2090613 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -25,6 +25,7 @@ XEN_CPUFEATURE(XEN_SMAP,        (FSCAPINTS+0)*32+11) /* SMAP gets used by Xen it
 XEN_CPUFEATURE(LFENCE_DISPATCH, (FSCAPINTS+0)*32+12) /* lfence set as Dispatch Serialising */
 XEN_CPUFEATURE(IND_THUNK_LFENCE,(FSCAPINTS+0)*32+13) /* Use IND_THUNK_LFENCE */
 XEN_CPUFEATURE(IND_THUNK_JMP,   (FSCAPINTS+0)*32+14) /* Use IND_THUNK_JMP */
+XEN_CPUFEATURE(LEGACY_SSBD,     (FSCAPINTS+0)*32+15) /* LS_CFG or VIRT_SPEC_CTRL available for SSBD */
 XEN_CPUFEATURE(SC_MSR_PV,       (FSCAPINTS+0)*32+16) /* MSR_SPEC_CTRL used by Xen for PV */
 XEN_CPUFEATURE(SC_MSR_HVM,      (FSCAPINTS+0)*32+17) /* MSR_SPEC_CTRL used by Xen for HVM */
 XEN_CPUFEATURE(SC_RSB_PV,       (FSCAPINTS+0)*32+18) /* RSB overwrite needed for PV */
-- 
2.1.4



* [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h
  2018-12-03 16:18 [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Andrew Cooper
                   ` (4 preceding siblings ...)
  2018-12-03 16:18 ` [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot Andrew Cooper
@ 2018-12-03 16:18 ` Andrew Cooper
  2018-12-04 16:38   ` Woods, Brian
  2018-12-05 16:57   ` Jan Beulich
  2018-12-03 16:18 ` [PATCH 7/9] x86/amd: Support context switching legacy SSBD interface Andrew Cooper
                   ` (3 subsequent siblings)
  9 siblings, 2 replies; 51+ messages in thread
From: Andrew Cooper @ 2018-12-03 16:18 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Brian Woods, Wei Liu, Jan Beulich, Roger Pau Monné

The need for per-core resources is a property of Fam17h hardware.  The
mechanism for calculating and allocating space is all a bit horrible, but is
the best that can be done at this point.  See the code comments for details.
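
A worked example of the sizing calculation (all values assumed for
illustration) is below; it mirrors the arithmetic in amd_init_legacy_ssbd():

  #include <stdio.h>

  int main(void)
  {
      /* CPUID 0x80000008.ECX[15:12] (ApicIdCoreIdSize), assumed value. */
      unsigned int apicid_coreid_size = 4;
      unsigned int num_siblings = 2;   /* SMT enabled on Fam17h */
      unsigned int max_sockets = 4;    /* worst case catered for by the patch */

      unsigned int ssbd_max_cores = (1u << apicid_coreid_size) / num_siblings;

      /* 4 sockets x 8 cores worth of { spinlock, disable_count } state. */
      printf("cores per socket: %u, total entries: %u\n",
             ssbd_max_cores, max_sockets * ssbd_max_cores);

      return 0;
  }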

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Parts based on an earlier patch by Brian Woods.
Signed-off-by: Brian Woods <brian.woods@amd.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Brian Woods <brian.woods@amd.com>
---
 xen/arch/x86/cpu/amd.c | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 897c060..ea10dbd 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -419,6 +419,97 @@ static void __init noinline amd_probe_legacy_ssbd(void)
 }
 
 /*
+ * This is all a gross hack, but Xen really doesn't have flexible-enough
+ * per-cpu infrastructure to do it properly.  For Zen(v1) with SMT active,
+ * MSR_AMD64_LS_CFG is per-core rather than per-thread, so we need a per-core
+ * spinlock to synchronise updates of the MSR.
+ *
+ * We can't use per-cpu state because taking one CPU offline would free state
+ * under the feet of another.  Ideally, we'd allocate memory on the AP boot
+ * path, but by the time the sibling information is calculated sufficiently
+ * for us to locate the per-core state, it's too late to fail the AP boot.
+ *
+ * We also can't afford to end up in a heterogeneous scenario with some CPUs
+ * unable to safely use LS_CFG.
+ *
+ * Therefore, we have to allocate for the worst-case scenario, which is
+ * believed to be 4 sockets.  Any allocation failure causes us to turn LS_CFG
+ * off, as this is fractionally better than failing to boot.
+ */
+static struct ssbd_ls_cfg {
+	spinlock_t lock;
+	unsigned int disable_count;
+} *ssbd_ls_cfg[4];
+static unsigned int ssbd_max_cores;
+
+static int __init amd_init_legacy_ssbd(void)
+{
+	const struct cpuinfo_x86 *c = &boot_cpu_data;
+	unsigned int socket, core;
+
+	/* No legacy SSBD interface?  Nothing to do. */
+	if (!cpu_has_legacy_ssbd)
+		return 0;
+
+	/*
+	 * No setup required for:
+	 *  - Using MSR_VIRT_SPEC_CTRL
+	 *  - Pre-Fam17h hardware
+	 *  - Fam17h with SMT disabled
+	 */
+	if (cpu_has_virt_sc_ssbd || c->x86 < 0x17 || c->x86_num_siblings == 1)
+		return 0;
+
+	/*
+	 * One could be forgiven for thinking that c->x86_max_cores is the
+	 * correct value to use here.
+	 *
+	 * However, that value is derived from the current configuration, and
+	 * c->cpu_core_id is sparse on all but the top end CPUs.  Derive
+	 * max_cores from ApicIdCoreIdSize, which will cover any sparseness.
+	 */
+	if (c->extended_cpuid_level >= 0x80000008) {
+		ssbd_max_cores = 1u << MASK_EXTR(cpuid_ecx(0x80000008), 0xf000);
+		ssbd_max_cores /= c->x86_num_siblings;
+	}
+
+	if (ssbd_max_cores == 0) {
+		printk(XENLOG_WARNING
+		       "Failed to calculate max_cores. Disabling LEGACY_SSBD\n");
+		setup_clear_cpu_cap(X86_FEATURE_LEGACY_SSBD);
+
+		return 0;
+	}
+
+	 * Allocate the worst-case cores-per-socket worth of state, as
+	 * Allocate the worse case cores-per-socket worth of state, as
+	 * described by hardware.
+	 */
+	for (socket = 0; socket < ARRAY_SIZE(ssbd_ls_cfg); ++socket) {
+		ssbd_ls_cfg[socket] =
+			xzalloc_array(struct ssbd_ls_cfg, ssbd_max_cores);
+
+		if (!ssbd_ls_cfg[socket]) {
+			/*
+			 * Memory allocation failure.  Backtrack and pretend
+			 * that the LEGACY_SSBD interface isn't available.
+			 */
+			printk(XENLOG_WARNING
+			       "Out of memory.  Disabling LEGACY_SSBD\n");
+			setup_clear_cpu_cap(X86_FEATURE_LEGACY_SSBD);
+
+			return 0;
+		}
+
+		for (core = 0; core < ssbd_max_cores; ++core)
+			spin_lock_init(&ssbd_ls_cfg[socket][core].lock);
+	}
+
+	return 0;
+}
+presmp_initcall(amd_init_legacy_ssbd);
+
+/*
  * Check for the presence of an AMD erratum. Arguments are defined in amd.h 
  * for each known erratum. Return 1 if erratum is found.
  */
-- 
2.1.4



* [PATCH 7/9] x86/amd: Support context switching legacy SSBD interface
  2018-12-03 16:18 [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Andrew Cooper
                   ` (5 preceding siblings ...)
  2018-12-03 16:18 ` [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h Andrew Cooper
@ 2018-12-03 16:18 ` Andrew Cooper
  2018-12-04 20:27   ` Woods, Brian
  2018-12-06 10:51   ` Jan Beulich
  2018-12-03 16:18 ` [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests Andrew Cooper
                   ` (2 subsequent siblings)
  9 siblings, 2 replies; 51+ messages in thread
From: Andrew Cooper @ 2018-12-03 16:18 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Wei Liu, Brian Woods, Jan Beulich, Roger Pau Monné

It is critical that MSR_AMD64_LS_CFG is never modified outside of this
function, to avoid trampling on sibling settings.

For now, pass in NULL from the boot paths and just set Xen's default.  Later
patches will plumb in guest choices.  This now supersedes the older code which
wrote to MSR_AMD64_LS_CFG once during boot.
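
To illustrate the per-core reference counting the function implements, a
hypothetical sequence on one Fam17h core with two SMT threads T0 and T1:

  /*
   * T0 switches to a vcpu wanting SSBD:    count 0 -> 1, LS_CFG written (set)
   * T1 switches to a vcpu wanting SSBD:    count 1 -> 2, LS_CFG untouched
   * T0 switches to a vcpu not wanting it:  count 2 -> 1, LS_CFG untouched
   * T1 switches to a vcpu not wanting it:  count 1 -> 0, LS_CFG written (clear)
   *
   * The MSR only changes on the 0 <-> 1 transitions, under the per-core lock,
   * which is why it must never be written outside this function.
   */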

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Brian Woods <brian.woods@amd.com>
---
 xen/arch/x86/cpu/amd.c          | 89 ++++++++++++++++++++++++++++++++---------
 xen/arch/x86/smpboot.c          |  3 ++
 xen/include/asm-x86/processor.h |  2 +
 3 files changed, 75 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index ea10dbd..3a8ead9 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -442,6 +442,74 @@ static struct ssbd_ls_cfg {
 } *ssbd_ls_cfg[4];
 static unsigned int ssbd_max_cores;
 
+/*
+ * Must only be called when LEGACY_SSBD is in use.  Called with NULL to
+ * switch back to Xen's default value.
+ */
+void amd_ctxt_switch_legacy_ssbd(const struct vcpu *next)
+{
+	static DEFINE_PER_CPU(bool, ssbd);
+	bool *this_ssbd = &this_cpu(ssbd);
+	bool disable = opt_ssbd;
+	struct cpuinfo_x86 *c = &current_cpu_data;
+	unsigned int socket = c->phys_proc_id, core = c->cpu_core_id;
+	struct ssbd_ls_cfg *cfg;
+	uint64_t val;
+
+	ASSERT(cpu_has_legacy_ssbd);
+
+	/*
+	 * Update hardware lazily, as these MSRs are expensive.  However, on
+	 * the boot paths which pass NULL, force a write to set a consistent
+	 * initial state.
+	 */
+	if (*this_ssbd == disable && next)
+		return;
+
+	if (cpu_has_virt_sc_ssbd) {
+		wrmsrl(MSR_VIRT_SPEC_CTRL,
+		       disable ? SPEC_CTRL_SSBD : 0);
+		goto done;
+	}
+
+	val = ls_cfg_base | (disable ? ls_cfg_ssbd_mask : 0);
+
+	if (c->x86 < 0x17 || c->x86_num_siblings == 1) {
+		/* No threads to be concerned with. */
+		wrmsrl(MSR_AMD64_LS_CFG, val);
+		goto done;
+	}
+
+	/* Check that we won't overflow the worst-case allocation. */
+	BUG_ON(socket >= ARRAY_SIZE(ssbd_ls_cfg));
+	BUG_ON(core   >= ssbd_max_cores);
+
+	cfg = &ssbd_ls_cfg[socket][core];
+
+	if (disable) {
+		spin_lock(&cfg->lock);
+
+		/* First sibling to disable updates hardware. */
+		if (!cfg->disable_count)
+			wrmsrl(MSR_AMD64_LS_CFG, val);
+		cfg->disable_count++;
+
+		spin_unlock(&cfg->lock);
+	} else {
+		spin_lock(&cfg->lock);
+
+		/* Last sibling to enable updates hardware. */
+		cfg->disable_count--;
+		if (!cfg->disable_count)
+			wrmsrl(MSR_AMD64_LS_CFG, val);
+
+		spin_unlock(&cfg->lock);
+	}
+
+ done:
+	*this_ssbd = disable;
+}
+
 static int __init amd_init_legacy_ssbd(void)
 {
 	const struct cpuinfo_x86 *c = &boot_cpu_data;
@@ -505,6 +573,8 @@ static int __init amd_init_legacy_ssbd(void)
 			spin_lock_init(&ssbd_ls_cfg[socket][core].lock);
 	}
 
+	amd_ctxt_switch_legacy_ssbd(NULL);
+
 	return 0;
 }
 presmp_initcall(amd_init_legacy_ssbd);
@@ -753,25 +823,6 @@ static void init_amd(struct cpuinfo_x86 *c)
 	if (c == &boot_cpu_data)
 		amd_probe_legacy_ssbd();
 
-	/*
-	 * If the user has explicitly chosen to disable Memory Disambiguation
-	 * to mitigiate Speculative Store Bypass, poke the appropriate MSR.
-	 */
-	if (opt_ssbd) {
-		int bit = -1;
-
-		switch (c->x86) {
-		case 0x15: bit = 54; break;
-		case 0x16: bit = 33; break;
-		case 0x17: bit = 10; break;
-		}
-
-		if (bit >= 0 && !rdmsr_safe(MSR_AMD64_LS_CFG, value)) {
-			value |= 1ull << bit;
-			wrmsr_safe(MSR_AMD64_LS_CFG, value);
-		}
-	}
-
 	/* MFENCE stops RDTSC speculation */
 	if (!cpu_has_lfence_dispatch)
 		__set_bit(X86_FEATURE_MFENCE_RDTSC, c->x86_capability);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 567cece..7d54201 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -376,6 +376,9 @@ void start_secondary(void *unused)
     if ( boot_cpu_has(X86_FEATURE_IBRSB) )
         wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
 
+    if ( cpu_has_legacy_ssbd )
+        amd_ctxt_switch_legacy_ssbd(NULL);
+
     if ( xen_guest )
         hypervisor_ap_setup();
 
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index df01ae3..e8d29a7 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -151,6 +151,8 @@ extern bool probe_cpuid_faulting(void);
 extern void ctxt_switch_levelling(const struct vcpu *next);
 extern void (*ctxt_switch_masking)(const struct vcpu *next);
 
+extern void amd_ctxt_switch_legacy_ssbd(const struct vcpu *next);
+
 extern bool_t opt_cpu_info;
 extern u32 cpuid_ext_features;
 extern u64 trampoline_misc_enable_off;
-- 
2.1.4



* [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests
  2018-12-03 16:18 [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Andrew Cooper
                   ` (6 preceding siblings ...)
  2018-12-03 16:18 ` [PATCH 7/9] x86/amd: Support context switching legacy SSBD interface Andrew Cooper
@ 2018-12-03 16:18 ` Andrew Cooper
  2018-12-04 21:35   ` Woods, Brian
  2018-12-06 10:55   ` Jan Beulich
  2018-12-03 16:18 ` [PATCH 9/9] x86/amd: Offer MSR_VIRT_SPEC_CTRL to guests Andrew Cooper
  2018-12-03 16:24 ` [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Jan Beulich
  9 siblings, 2 replies; 51+ messages in thread
From: Andrew Cooper @ 2018-12-03 16:18 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Wei Liu, Brian Woods, Jan Beulich, Roger Pau Monné

The semantics of MSR_VIRT_SPEC_CTRL are that unknown bits are write-discard
and read as zero.  Only VIRT_SPEC_CTRL.SSBD is defined at the moment.

To facilitate making this per-guest, the legacy SSBD state needs context
switching between vcpus.  amd_ctxt_switch_legacy_ssbd() is updated to take the
vcpu's setting into account.  Furthermore, the guest's chosen value needs
preserving across migration.

This marks a subtle change in how `ssbd=` behaves.  If Xen wishes SSBD to be
asserted, it remains set in hardware all the time.  In the default case of Xen
wishing SSBD not to be asserted, the value set in hardware is the guest's
choice.
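
A summary of the resulting behaviour on legacy LS_CFG hardware (scenarios
assumed for illustration; the effective value is the OR of the two inputs):

  /*
   *  Xen `ssbd=`     guest VIRT_SPEC_CTRL.SSBD     SSBD asserted in hardware?
   *  -----------     -------------------------     --------------------------
   *  forced on       any                           yes, always
   *  default/off     0                             no
   *  default/off     1                             yes (guest's choice)
   */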

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Brian Woods <brian.woods@amd.com>
---
 docs/misc/xen-command-line.markdown | 13 +++++++++----
 xen/arch/x86/cpu/amd.c              |  4 +++-
 xen/arch/x86/domain.c               |  2 ++
 xen/arch/x86/domctl.c               |  1 +
 xen/arch/x86/hvm/hvm.c              |  1 +
 xen/arch/x86/msr.c                  | 17 +++++++++++++++++
 xen/include/asm-x86/msr.h           |  9 +++++++++
 7 files changed, 42 insertions(+), 5 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 764f33a..696744e 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1904,10 +1904,15 @@ option can be used to force (the default) or prevent Xen from issuing branch
 prediction barriers on vcpu context switches.
 
 On hardware supporting SSBD (Speculative Store Bypass Disable), the `ssbd=`
-option can be used to force or prevent Xen using the feature itself.  On AMD
-hardware, this is a global option applied at boot, and not virtualised for
-guest use.  On Intel hardware, the feature is virtualised for guests,
-independently of Xen's choice of setting.
+option can be used to force or prevent Xen using the feature itself.
+
+* On hardware supporting SSBD in MSR\_SPEC\_CTRL, Xen maintains distinct guest
+  and host state, and will virtualise SSBD for guests.
+
+* On some AMD hardware where only legacy LS\_CFG is available, Xen offers the
+  MSR\_VIRT\_SPEC\_CTRL interface to guests, but is unable to maintain
+  distinct guest and host state.  The value set in hardware is the logical OR
+  of the Xen and guest settings.
 
 On all hardware, the `eager-fpu=` option can be used to force or prevent Xen
 from using fully eager FPU context switches.  This is currently implemented as
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 3a8ead9..c766497 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -450,7 +450,9 @@ void amd_ctxt_switch_legacy_ssbd(const struct vcpu *next)
 {
 	static DEFINE_PER_CPU(bool, ssbd);
 	bool *this_ssbd = &this_cpu(ssbd);
-	bool disable = opt_ssbd;
+	bool disable = opt_ssbd ?:
+            (next && !is_idle_vcpu(next) &&
+             (next->arch.msrs->virt_spec_ctrl & SPEC_CTRL_SSBD));
 	struct cpuinfo_x86 *c = &current_cpu_data;
 	unsigned int socket = c->phys_proc_id, core = c->cpu_core_id;
 	struct ssbd_ls_cfg *cfg;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index b4d5948..d5df67e 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1801,6 +1801,8 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
             load_segments(next);
 
         ctxt_switch_levelling(next);
+        if ( cpu_has_legacy_ssbd )
+            amd_ctxt_switch_legacy_ssbd(next);
 
         if ( opt_ibpb && !is_idle_domain(nextd) )
         {
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index aa8ad19..cc9f8cd 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1274,6 +1274,7 @@ long arch_do_domctl(
         static const uint32_t msrs_to_send[] = {
             MSR_SPEC_CTRL,
             MSR_INTEL_MISC_FEATURES_ENABLES,
+            MSR_VIRT_SPEC_CTRL,
             MSR_AMD64_DR0_ADDRESS_MASK,
             MSR_AMD64_DR1_ADDRESS_MASK,
             MSR_AMD64_DR2_ADDRESS_MASK,
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e2e4204..a2c3533 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1305,6 +1305,7 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
 static const uint32_t msrs_to_send[] = {
     MSR_SPEC_CTRL,
     MSR_INTEL_MISC_FEATURES_ENABLES,
+    MSR_VIRT_SPEC_CTRL,
     MSR_AMD64_DR0_ADDRESS_MASK,
     MSR_AMD64_DR1_ADDRESS_MASK,
     MSR_AMD64_DR2_ADDRESS_MASK,
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 76cb6ef..84e97aa 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -183,6 +183,13 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
                                    ARRAY_SIZE(msrs->dr_mask))];
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        if ( !cp->extd.virt_sc_ssbd )
+            goto gp_fault;
+
+        *val = msrs->virt_spec_ctrl;
+        break;
+
     default:
         return X86EMUL_UNHANDLEABLE;
     }
@@ -323,6 +330,16 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             wrmsrl(msr, val);
         break;
 
+    case MSR_VIRT_SPEC_CTRL:
+        if ( !cp->extd.virt_sc_ssbd )
+            goto gp_fault;
+
+        msrs->virt_spec_ctrl = (val & SPEC_CTRL_SSBD);
+
+        if ( v == curr )
+            amd_ctxt_switch_legacy_ssbd(curr);
+        break;
+
     default:
         return X86EMUL_UNHANDLEABLE;
     }
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 05d905b..51d1bed 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -289,6 +289,15 @@ struct vcpu_msrs
     } misc_features_enables;
 
     /*
+     * 0xc001011f - MSR_VIRT_SPEC_CTRL
+     *
+     * For the subset of bits implemented, functionality shared with
+     * MSR_SPEC_CTRL, but the MSR is expected to be intercepted.  For
+     * compatibility, unsupported bits are write-discard/read-as-zero.
+     */
+    uint32_t virt_spec_ctrl;
+
+    /*
      * 0xc00110{27,19-1b} MSR_AMD64_DR{0-3}_ADDRESS_MASK
      *
      * Loaded into hardware for guests which have active %dr7 settings.
-- 
2.1.4



* [PATCH 9/9] x86/amd: Offer MSR_VIRT_SPEC_CTRL to guests
  2018-12-03 16:18 [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Andrew Cooper
                   ` (7 preceding siblings ...)
  2018-12-03 16:18 ` [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests Andrew Cooper
@ 2018-12-03 16:18 ` Andrew Cooper
  2018-12-06 10:57   ` Jan Beulich
  2018-12-03 16:24 ` [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Jan Beulich
  9 siblings, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-03 16:18 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Wei Liu, Brian Woods, Jan Beulich, Roger Pau Monné

With all other infrastructure now in place, offer X86_FEATURE_VIRT_SC_SSBD to
guests in cases where Xen thinks it has a working LEGACY_SSBD interface.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Brian Woods <brian.woods@amd.com>
---
 xen/arch/x86/cpuid.c                        | 10 ++++++++++
 xen/include/public/arch-x86/cpufeatureset.h |  2 +-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index eb86a86..4ff1ea2 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -370,6 +370,16 @@ static void __init guest_common_feature_adjustments(uint32_t *fs)
      */
     if ( host_cpuid_policy.feat.ibrsb )
         __set_bit(X86_FEATURE_IBPB, fs);
+
+    /*
+     * In practice, we can offer VIRT_SC_SSBD on any hardware with legacy_ssbd
+     * or msr_spec_ctrl, but until we've got a proper split between default
+     * and max policies, avoid offering it in cases where the guest shouldn't
+     * be using it.
+     */
+    __clear_bit(X86_FEATURE_VIRT_SC_SSBD, fs);
+    if ( cpu_has_legacy_ssbd )
+        __set_bit(X86_FEATURE_VIRT_SC_SSBD, fs);
 }
 
 static void __init calculate_pv_max_policy(void)
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 8f80195..eb298cd 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -241,7 +241,7 @@ XEN_CPUFEATURE(IBPB,          8*32+12) /*A  IBPB support only (no IBRS, used by
 XEN_CPUFEATURE(AMD_IBRS,      8*32+14) /*   MSR_SPEC_CTRL.IBRS available */
 XEN_CPUFEATURE(AMD_STIBP,     8*32+15) /*   MSR_SPEC_CTRL.STIBP available */
 XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*   MSR_SPEC_CTRL.SSBD available */
-XEN_CPUFEATURE(VIRT_SC_SSBD,  8*32+25) /*   MSR_VIRT_SPEC_CTRL.SSBD available. */
+XEN_CPUFEATURE(VIRT_SC_SSBD,  8*32+25) /*A  MSR_VIRT_SPEC_CTRL.SSBD available. */
 XEN_CPUFEATURE(AMD_SSB_NO,    8*32+26) /*   Hardware not vulnerable to SSB */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0.edx, word 9 */
-- 
2.1.4



* Re: [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support
  2018-12-03 16:18 [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Andrew Cooper
                   ` (8 preceding siblings ...)
  2018-12-03 16:18 ` [PATCH 9/9] x86/amd: Offer MSR_VIRT_SPEC_CTRL to guests Andrew Cooper
@ 2018-12-03 16:24 ` Jan Beulich
  2018-12-03 16:31   ` Andrew Cooper
  9 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2018-12-03 16:24 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> This is a lingering TODO item from XSA-263.  It adds support AMD's
> MSR_VIRT_SPEC_CTRL interface, and changes Xen's "boot time global" SSBD
> setting into a per-vcpu setting.
> 
> This can be found on:
>   git://xenbits.xen.org/people/andrewcoop/xen.git xen-virt-spec-ctrl-v1
> 
> The start of the series is some cleanup.  It then teaches Xen to recognise the
> available interfaces (including MSR_VIRT_SPEC_CTRL from a hypervisor), then
> how to safely context switch the per-core LS_CFG on Fam17h, an finally to
> expose support to guests.
> 
> I've got some further MSR work coming because we have to fix the
> default-leakiness of MSRs in this range, because a guest becomes unsafe to
> migrate as soon as it reads any of the pipeline control MSRs.

I've seen you mention this elsewhere, but I'm still unclear about
the "why" part here.

Jan




* Re: [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support
  2018-12-03 16:24 ` [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Jan Beulich
@ 2018-12-03 16:31   ` Andrew Cooper
  2018-12-04  9:45     ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-03 16:31 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

On 03/12/2018 16:24, Jan Beulich wrote:
>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
>> This is a lingering TODO item from XSA-263.  It adds support AMD's
>> MSR_VIRT_SPEC_CTRL interface, and changes Xen's "boot time global" SSBD
>> setting into a per-vcpu setting.
>>
>> This can be found on:
>>   git://xenbits.xen.org/people/andrewcoop/xen.git xen-virt-spec-ctrl-v1
>>
>> The start of the series is some cleanup.  It then teaches Xen to recognise the
>> available interfaces (including MSR_VIRT_SPEC_CTRL from a hypervisor), then
>> how to safely context switch the per-core LS_CFG on Fam17h, an finally to
>> expose support to guests.
>>
>> I've got some further MSR work coming because we have to fix the
>> default-leakiness of MSRs in this range, because a guest becomes unsafe to
>> migrate as soon as it reads any of the pipeline control MSRs.
> I've seen you mention this elsewhere, but I'm still unclear about
> the "why" part here.

Because the existence (or not) of these MSRs is model specific, the details
read are non-architectural, not always the same on minor variations of the
same platform, and definitely not the same across different models.

I'm led to believe that it is only blind luck that the LFENCE serialising
bit in DE_CFG is in the same position on all currently applicable
hardware, and there are no plans to make this architectural.

~Andrew


* Re: [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support
  2018-12-03 16:31   ` Andrew Cooper
@ 2018-12-04  9:45     ` Jan Beulich
  2018-12-04 11:26       ` Andrew Cooper
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2018-12-04  9:45 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 03.12.18 at 17:31, <andrew.cooper3@citrix.com> wrote:
> On 03/12/2018 16:24, Jan Beulich wrote:
>>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
>>> This is a lingering TODO item from XSA-263.  It adds support AMD's
>>> MSR_VIRT_SPEC_CTRL interface, and changes Xen's "boot time global" SSBD
>>> setting into a per-vcpu setting.
>>>
>>> This can be found on:
>>>   git://xenbits.xen.org/people/andrewcoop/xen.git xen-virt-spec-ctrl-v1
>>>
>>> The start of the series is some cleanup.  It then teaches Xen to recognise 
> the
>>> available interfaces (including MSR_VIRT_SPEC_CTRL from a hypervisor), then
>>> how to safely context switch the per-core LS_CFG on Fam17h, and finally to
>>> expose support to guests.
>>>
>>> I've got some further MSR work coming because we have to fix the
>>> default-leakiness of MSRs in this range, because a guest becomes unsafe to
>>> migrate as soon as it reads any of the pipeline control MSRs.
>> I've seen you mention this elsewhere, but I'm still unclear about
>> the "why" part here.
> 
> Because the existence (or not) is model-specific, the details read are
> non-architectural, not always the same on minor variations of the same
> platform, and definitely not the same across different models.

But migration then is only one (albeit perhaps the major) aspect.
CPUID overrides altering e.g. the model number would similarly
be a problem if we were to imply from reads of these MSRs that
"problematic" conclusions would be drawn by the guest.

Otoh I can't see how migration to an identical CPU model system
would become unsafe.

Nor can I see how hiding these MSRs from guests would improve
the situation in this regard: Guests may still draw unwanted
conclusions from not being able to read these MSRs, or reading
all zeros.

Jan




* Re: [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support
  2018-12-04  9:45     ` Jan Beulich
@ 2018-12-04 11:26       ` Andrew Cooper
  2018-12-04 12:45         ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-04 11:26 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

On 04/12/2018 09:45, Jan Beulich wrote:
>>>> On 03.12.18 at 17:31, <andrew.cooper3@citrix.com> wrote:
>> On 03/12/2018 16:24, Jan Beulich wrote:
>>>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
>>>> This is a lingering TODO item from XSA-263.  It adds support for AMD's
>>>> MSR_VIRT_SPEC_CTRL interface, and changes Xen's "boot time global" SSBD
>>>> setting into a per-vcpu setting.
>>>>
>>>> This can be found on:
>>>>   git://xenbits.xen.org/people/andrewcoop/xen.git xen-virt-spec-ctrl-v1
>>>>
>>>> The start of the series is some cleanup.  It then teaches Xen to recognise 
>> the
>>>> available interfaces (including MSR_VIRT_SPEC_CTRL from a hypervisor), then
>>>> how to safely context switch the per-core LS_CFG on Fam17h, and finally to
>>>> expose support to guests.
>>>>
>>>> I've got some further MSR work coming because we have to fix the
>>>> default-leakiness of MSRs in this range, because a guest becomes unsafe to
>>>> migrate as soon as it reads any of the pipeline control MSRs.
>>> I've seen you mention this elsewhere, but I'm still unclear about
>>> the "why" part here.
>> Because the existence (or not) is model-specific, the details read are
>> non-architectural, not always the same on minor variations of the same
>> platform, and definitely not the same across different models.
> But migration then is only one (albeit perhaps the major) aspect.
> CPUID overrides altering e.g. the model number would similarly
> be a problem if we were to imply from reads of these MSRs that
> "problematic" conclusions would be drawn by the guest.
>
> Otoh I can't see how migration to an identical CPU model system
> would become unsafe.

Identical systems with identical firmware and configuration - yes, we'd
expect them to have identical values in these configuration registers.

>
> Nor can I see how hiding these MSRs from guests would improve
> the situation in this regard: Guests may still draw unwanted
> conclusions from not being able to read these MSRs, or reading
> all zeros.

I can't help but feel that the observations you've made answer the
question very succinctly.

Of course we can't prevent the guest drawing conclusions from the
absence/presence of the information.  What we can (and must) ensure is
that the information that is available (i.e. a #GP fault) does not have
any details which are specific to the processor that the VM happened to
boot on.

~Andrew


* Re: [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support
  2018-12-04 11:26       ` Andrew Cooper
@ 2018-12-04 12:45         ` Jan Beulich
  2018-12-04 13:41           ` Andrew Cooper
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2018-12-04 12:45 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 04.12.18 at 12:26, <andrew.cooper3@citrix.com> wrote:
> On 04/12/2018 09:45, Jan Beulich wrote:
>> Nor can I see how hiding these MSRs from guests would improve
>> the situation in this regard: Guests may still draw unwanted
>> conclusions from not being able to read these MSRs, or reading
>> all zeros.
> 
> I can't help but feel that the observations you've made answer the
> question very succinctly.
> 
> Of course we can't prevent the guest drawing conclusions from the
> absence/presence of the information.  What we can (and must) ensure is
> that the information that is available (i.e. a #GP fault) does not have
> any details which are specific to the processor that the VM happened to
> boot on.

But that's the issue: Even #GP on such an MSR access conveys
information. An OS may legitimately assume
- no #GP based on the family/model/stepping values
- old hardware if #GP is observed upon reading (which in turn
  may mean it works in a sub-optimal way)
- brokenness if no #GP but an all zero value, but if the BKGD
  documents certain bits to be set (perhaps by the BIOS)
- whatever else

What I'm trying to express is: We simply can't get this right
unless we _fully_ emulate family/model/stepping specific
behavior (according to the values seen by the guest), with
or without migration.

Jan




* Re: [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support
  2018-12-04 12:45         ` Jan Beulich
@ 2018-12-04 13:41           ` Andrew Cooper
  2018-12-04 14:07             ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-04 13:41 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

On 04/12/2018 12:45, Jan Beulich wrote:
>>>> On 04.12.18 at 12:26, <andrew.cooper3@citrix.com> wrote:
>> On 04/12/2018 09:45, Jan Beulich wrote:
>>> Nor can I see how hiding these MSRs from guests would improve
>>> the situation in this regard: Guests may still draw unwanted
>>> conclusions from not being able to read these MSRs, or reading
>>> all zeros.
>> I can't help but feel that the observations you've made answer the
>> question very succinctly.
>>
>> Of course we can't prevent the guest drawing conclusions from the
>> absence/presence of the information.  What we can (and must) ensure is
>> that the information that is available (i.e. a #GP fault) does not have
>> any details which are specific to the processor that the VM happened to
>> boot on.
> But that's the issue: Even #GP on such an MSR access conveys
> information. An OS may legitimately assume
> - no #GP based on the family/model/stepping values
> - old hardware if #GP is observed upon reading (which in turn
>   may mean it works in a sub-optimal way)
> - brokenness if no #GP but an all zero value, but if the BKGD
>   documents certain bits to be set (perhaps by the BIOS)
> - whatever else
>
> What I'm trying to express is: We simply can't get this right
> unless we _fully_ emulate family/model/stepping specific
> behavior (according to the values seen by the guest), with
> or without migration.

These registers are completely undocumented.  The only reason we know
what two of the bits mean is because AMD had to publish them to allow
OSes to work around the speculative sidechannels.

This is *also* the reason that AMD published a spec for how virtual
machines should apply mitigations, giving them an architectural
interface to specifically avoid them trying to access these CFG
registers.  All OSes which know about DE_CFG will use MSR_VIRT_SPEC_CTRL
in preference, because they've followed AMD's instructions when putting
together SSBD mitigations.

Irrespective of the specifics in this case, Xen's behaviour for unknown
MSRs is reprehensible.  They should not be default readable (there is
still a case where windows will BSOD on migrate caused by this), and
they certainly shouldn't be write-silent-discard because it breaks code
which (correctly) uses wrmsr_safe() to probe for the availability of the
MSR.
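
To be concrete about the probing point, the pattern which
write-silent-discard breaks is roughly the following (a sketch, not any
particular guest's code):

    /*
     * Probe for an MSR by attempting a write and treating "no #GP" as
     * "the MSR exists".  If unknown MSR writes are silently discarded,
     * the write appears to succeed even though nothing backs the
     * register, and the feature gets mis-detected.
     */
    static bool msr_probe_by_write(unsigned int msr, uint64_t val)
    {
        return wrmsr_safe(msr, val) == 0;
    }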

~Andrew


* Re: [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support
  2018-12-04 13:41           ` Andrew Cooper
@ 2018-12-04 14:07             ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2018-12-04 14:07 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 04.12.18 at 14:41, <andrew.cooper3@citrix.com> wrote:
> On 04/12/2018 12:45, Jan Beulich wrote:
>>>>> On 04.12.18 at 12:26, <andrew.cooper3@citrix.com> wrote:
>>> On 04/12/2018 09:45, Jan Beulich wrote:
>>>> Nor can I see how hiding these MSRs from guests would improve
>>>> the situation in this regard: Guests may still draw unwanted
>>>> conclusions from not being able to read these MSRs, or reading
>>>> all zeros.
>>> I can't help but feel that the observations you've made answer the
>>> question very succinctly.
>>>
>>> Of course we can't prevent the guest drawing conclusions from the
>>> absence/presence of the information.  What we can (and must) ensure is
>>> that the information that is available (i.e. a #GP fault) does not have
>>> any details which are specific to the processor that the VM happened to
>>> boot on.
>> But that's the issue: Even #GP on such an MSR access conveys
>> information. An OS may legitimately assume
>> - no #GP based on the family/model/stepping values
>> - old hardware if #GP is observed upon reading (which in turn
>>   may mean it works in a sub-optimal way)
>> - brokenness if no #GP but an all zero value, but if the BKGD
>>   documents certain bits to be set (perhaps by the BIOS)
>> - whatever else
>>
>> What I'm trying to express is: We simply can't get this right
>> unless we _fully_ emulate family/model/stepping specific
>> behavior (according to the values seen by the guest), with
>> or without migration.
> 
> These registers are completely undocumented.  The only reason we know
> what two of the bits mean is because AMD had to publish them to allow
> OSes to work around the speculative sidechannels.
> 
> This is *also* the reason that AMD published a spec for how virtual
> machines should apply mitigations, giving them an architectural
> interface to specifically avoid them trying to access these CFG
> registers.  All OSes which know about DE_CFG will use MSR_VIRT_SPEC_CTRL
> in preference, because they've followed AMD's instructions when putting
> together SSBD mitigations.

Seeing what this series is doing, "will use" would need to become
"will eventually use" if you want to not exclude Xen itself from the
picture. Are you sure no-one else has taken a shortcut as a first
mitigation step?

> Irrespective of the specifics in this case, Xen's behaviour for unknown
> MSRs is reprehensible.  They should not be default readable (there is
> still a case where windows will BSOD on migrate caused by this), and
> they certainly shouldn't be write-silent-discard because it breaks code
> which (correctly) uses wrmsr_safe() to probe for the availability of the
> MSR.

I'm not putting this aspect under question; we're in agreement
here. What we appear to disagree about is the (apparent) claim
of yours that all is going to be well as soon as we stop this bad
but inherited behavior.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 4/9] x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support
  2018-12-03 16:18 ` [PATCH 4/9] x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support Andrew Cooper
@ 2018-12-04 16:06   ` Woods, Brian
  2018-12-05 16:39   ` Jan Beulich
  1 sibling, 0 replies; 51+ messages in thread
From: Woods, Brian @ 2018-12-04 16:06 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Woods, Brian, Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Mon, Dec 03, 2018 at 04:18:17PM +0000, Andy Cooper wrote:
> At the time of writing, the spec is available from:
> 
>   https://developer.amd.com/wp-content/resources/124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
> 
> Future hardware (Zen v2) is expected to have hardware MSR_SPEC_CTRL support,
> including SPEC_CTRL.SSBD, and with the expectation that this will be directly
> passed through to guests for performance.
> 
> On currently released hardware, the only mechanism available is the legacy
> LS_CFG option, and this is very expensive to use.  Furthermore, emulating
> MSR_SPEC_CTRL via interception is prohibitively expensive, as certain OSes use
> the write-discard flexibility to simplify their entry/exit logic.
> 
> As an alternative, MSR_VIRT_SPEC_CTRL is specified as an architectural control
> (with semantics equivalent to MSR_SPEC_CTRL) which is provided by the
> hypervisor.  This abstracts away the model-specific details of the LS_CFG
> mechanism, which allows migration safety to be retained.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Brian Woods <brian.woods@amd.com>

-- 
Brian Woods


* Re: [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot
  2018-12-03 16:18 ` [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot Andrew Cooper
@ 2018-12-04 16:15   ` Woods, Brian
  2018-12-05 16:50   ` Jan Beulich
  2018-12-06 10:59   ` Jan Beulich
  2 siblings, 0 replies; 51+ messages in thread
From: Woods, Brian @ 2018-12-04 16:15 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Woods, Brian, Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Mon, Dec 03, 2018 at 04:18:18PM +0000, Andy Cooper wrote:
> Introduce a new synthetic LEGACY_SSBD feature and set it if we find
> VIRT_SPEC_CTRL offered by our hypervisor, or if we find a working bit in an
> LS_CFG register.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Brian Woods <brian.woods@amd.com>

-- 
Brian Woods


* Re: [PATCH 1/9] x86/spec-ctrl: Drop the bti= command line option
  2018-12-03 16:18 ` [PATCH 1/9] x86/spec-ctrl: Drop the bti= command line option Andrew Cooper
@ 2018-12-04 16:19   ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2018-12-04 16:19 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> bti= was introduced with the original Spectre fixes (Jan 2018), but by the
> time Speculative Store Bypass came along (May 2018), it was superseded by the
> more generic spec-ctrl=.
> 
> Since then, we've had LazyFPU (June 2018) and L1TF (August 2018), which means
> no one will be using the option.  Remove it entirely - anyone who happens to
> accidentally be using it might now spot Xen complaining about an option it
> doesn't understand.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 2/9] x86/cpuid: Drop the synthetic X86_FEATURE_XEN_IBPB
  2018-12-03 16:18 ` [PATCH 2/9] x86/cpuid: Drop the synthetic X86_FEATURE_XEN_IBPB Andrew Cooper
@ 2018-12-04 16:21   ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2018-12-04 16:21 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> This appears to be a vestigial remnant of an old version of the
> XSA-254/Spectre series, and has never been used.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 3/9] x86/cpuid: Extend the cpuid= command line option to support all named features
  2018-12-03 16:18 ` [PATCH 3/9] x86/cpuid: Extend the cpuid= command line option to support all named features Andrew Cooper
@ 2018-12-04 16:28   ` Jan Beulich
  2018-12-06 12:52   ` Wei Liu
  1 sibling, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2018-12-04 16:28 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> For gen-cpuid.py, fix a comment describing self.names, and generate the
> reverse mapping in self.values.  Write out INIT_FEATURE_NAMES which maps a
> string name to a bit position.
> 
> For parse_cpuid(), introduce a slightly fuzzy strcmp() to accept changes in
> punctuation, and perform a binary search over INIT_FEATURE_NAMES.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> 
> Slightly RFC, because I'm not entirely certain if this is a good idea or 
> not.

It is a good idea, but precautions are needed against attempts
to disable features we rely on (specifically ones inferred from us
running in 64-bit mode), or that guests would equally rely on. I'm
therefore unsure whether the mere fact of a feature being "named"
is a suitable criterion to allow its disabling from the command line.
Perhaps a second form of annotation in the public header is
needed? Or some minimal feature set to enforce?

Jan



* Re: [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h
  2018-12-03 16:18 ` [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h Andrew Cooper
@ 2018-12-04 16:38   ` Woods, Brian
  2018-12-05 16:57   ` Jan Beulich
  1 sibling, 0 replies; 51+ messages in thread
From: Woods, Brian @ 2018-12-04 16:38 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Wei Liu, Roger Pau Monné, Woods, Brian, Jan Beulich, Xen-devel

On Mon, Dec 03, 2018 at 04:18:19PM +0000, Andy Cooper wrote:
> The need for per-core resources is a property of Fam17h hardware.  The
> mechanism for calculating / allocating space is all a bit horrible, but is the
> best which can be done at this point.  See the code comments for details.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Parts based on an earlier patch by Brian
> Signed-off-by: Signed-off-by: Brian Woods <brian.woods@amd.com>

Needs to be:
Signed-off-by: Brian Woods <brian.woods@amd.com> 

Otherwise,
Reviewed-by: Brian Woods <brian.woods@amd.com>

-- 
Brian Woods


* Re: [PATCH 7/9] x86/amd: Support context switching legacy SSBD interface
  2018-12-03 16:18 ` [PATCH 7/9] x86/amd: Support context switching legacy SSBD interface Andrew Cooper
@ 2018-12-04 20:27   ` Woods, Brian
  2018-12-06 10:51   ` Jan Beulich
  1 sibling, 0 replies; 51+ messages in thread
From: Woods, Brian @ 2018-12-04 20:27 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Woods, Brian, Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Mon, Dec 03, 2018 at 04:18:20PM +0000, Andy Cooper wrote:
> It is critical that MSR_AMD64_LS_CFG is never modified outside of this
> function, to avoid trampling on sibling settings.
> 
> For now, pass in NULL from the boot paths and just set Xen's default.  Later
> patches will plumb in guest choices.  This now supersedes the older code which
> wrote to MSR_AMD64_LS_CFG once during boot.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Brian Woods <brian.woods@amd.com>

-- 
Brian Woods


* Re: [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests
  2018-12-03 16:18 ` [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests Andrew Cooper
@ 2018-12-04 21:35   ` Woods, Brian
  2018-12-05  8:41     ` Jan Beulich
  2018-12-06 10:55   ` Jan Beulich
  1 sibling, 1 reply; 51+ messages in thread
From: Woods, Brian @ 2018-12-04 21:35 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Woods, Brian, Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Mon, Dec 03, 2018 at 04:18:21PM +0000, Andy Cooper wrote:
> The semantics of MSR_VIRT_SPEC_CTRL are that unknown bits are write-discard
> and read as zero.  Only VIRT_SPEC_CTRL.SSBD is defined at the moment.
> 
> To facilitate making this per-guest, the legacy SSBD state needs context
> switching between vcpus.  amd_ctxt_switch_legacy_ssbd() is updated to take the
> vcpus setting into account.  Furthermore, the guests chosen value needs
> preserving across migrate.
> 
> This marks a subtle change in how `ssbd=` behaves.  If Xen wishes SSBD to be
> asserted, it remains set in hardware all the time.  In the default case of Xen
> wishing SSBD not to be asserted, the value set in hardware is the guest's
> choice.

Ok, we talked about this some over IRC, but I thought it would be
better to get some more eyes on this on the mailing list.

From what some engineers have said over here, it takes roughly 400
clock cycles for enabling SSBD via LS_CFG.  It isn't cheap, no, but if
the average VCPU time is 30ms, that's roughly .66%* overhead worst case
(non-turbo'd on the slowest freq processor at max speed [2GHz]).

The other thing I don't get is why advertise virtualized SSBD when the
guest setting it does nothing?  If ssbd_opt=true is set, as the code is
now, why even advertise it to the guest?  I'd suggest either allowing
the guest to turn it off or not advertise it at all (when ssbd_opt =
true).

* Twrmsr (in ms) = (400/2000000)*1000,
  percent = (Twrmsr/30ms) * 100  = .66(repeating)%

-- 
Brian Woods


* Re: [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests
  2018-12-04 21:35   ` Woods, Brian
@ 2018-12-05  8:41     ` Jan Beulich
  2018-12-05 19:09       ` Andrew Cooper
  2018-12-06 19:41       ` Woods, Brian
  0 siblings, 2 replies; 51+ messages in thread
From: Jan Beulich @ 2018-12-05  8:41 UTC (permalink / raw)
  To: Brian Woods, Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 04.12.18 at 22:35, <Brian.Woods@amd.com> wrote:
> The other thing I don't get is why advertise virtualized SSBD when the
> guest setting it does nothing?  If ssbd_opt=true is set, as the code is
> now, why even advertise it to the guest?  I'd suggest either allowing
> the guest to turn it off or not advertise it at all (when ssbd_opt =
> true).

I think it's better to advertise the feature nevertheless: Otherwise
the guest might either try some other way of mitigating the
(believed) vulnerability, or it may report in its logs that it's vulnerable
(without mitigation) when it really isn't.

Jan




* Re: [PATCH 4/9] x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support
  2018-12-03 16:18 ` [PATCH 4/9] x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support Andrew Cooper
  2018-12-04 16:06   ` Woods, Brian
@ 2018-12-05 16:39   ` Jan Beulich
  2018-12-05 17:50     ` Andrew Cooper
  1 sibling, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2018-12-05 16:39 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> At the time of writing, the spec is available from:
> 
>   https://developer.amd.com/wp-content/resources/124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
> 
> Future hardware (Zen v2) is expected to have hardware MSR_SPEC_CTRL support,
> including SPEC_CTRL.SSBD, and with the expectation that this will be directly
> passed through to guests for performance.
> 
> On currently released hardware, the only mechanism available is the legacy
> LS_CFG option, and this is very expensive to use.  Furthermore, emulating
> MSR_SPEC_CTRL via interception is prohibitively expensive, as certain OSes use
> the write-discard flexibility to simplify their entry/exit logic.

With this, ...

> As an alternative, MSR_VIRT_SPEC_CTRL is specified as an architectural control
> (with semantics equivalent to MSR_SPEC_CTRL) which is provided by the
> hypervisor.  This abstracts away the model-specific details of the LS_CFG
> mechanism, which allows migration safety to be retained.

... how is this any less expensive, when it necessarily requires
interception? At least the way things are worded, I'm getting
the impression that you consider this less expensive.

> --- a/tools/libxl/libxl_cpuid.c
> +++ b/tools/libxl/libxl_cpuid.c
> @@ -245,6 +245,11 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
>          {"invtsc",       0x80000007, NA, CPUID_REG_EDX,  8,  1},
>  
>          {"ibpb",         0x80000008, NA, CPUID_REG_EBX, 12,  1},
> +        {"amd_ibrs",     0x80000008, NA, CPUID_REG_EBX, 14,  1},

While of the following two the names indeed clash with Intel's, the
above one doesn't. Any reason you still gave it an amd_ prefix?

> +        {"amd_stibp",    0x80000008, NA, CPUID_REG_EBX, 15,  1},
> +        {"amd_ssbd",     0x80000008, NA, CPUID_REG_EBX, 24,  1},
> +        {"virt_sc_ssbd", 0x80000008, NA, CPUID_REG_EBX, 25,  1},
> +        {"amd_ssb_no",   0x80000008, NA, CPUID_REG_EBX, 26,  1},

Since you're at it, why not also introduce names for bits 16-18
at this occasion?

> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -140,6 +140,10 @@ static const char *str_e8b[32] =
>      [ 0] = "clzero",
>  
>      [12] = "ibpb",
> +    [14] = "amd_ibrs",      [15] = "amd_stibp",
> +
> +    [24] = "amd_ssbd",      [25] = "virt_sc_ssbd",
> +    [26] = "amd_ssb_no",

Same comments here.

> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -238,6 +238,11 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /*   APERF/MPERF Read Only interface */
>  /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
>  XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
>  XEN_CPUFEATURE(IBPB,          8*32+12) /*A  IBPB support only (no IBRS, used by AMD) */
> +XEN_CPUFEATURE(AMD_IBRS,      8*32+14) /*   MSR_SPEC_CTRL.IBRS available */
> +XEN_CPUFEATURE(AMD_STIBP,     8*32+15) /*   MSR_SPEC_CTRL.STIBP available */
> +XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*   MSR_SPEC_CTRL.SSBD available */
> +XEN_CPUFEATURE(VIRT_SC_SSBD,  8*32+25) /*   MSR_VIRT_SPEC_CTRL.SSBD available. */
> +XEN_CPUFEATURE(AMD_SSB_NO,    8*32+26) /*   Hardware not vulnerable to SSB */

And here.

Jan




* Re: [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot
  2018-12-03 16:18 ` [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot Andrew Cooper
  2018-12-04 16:15   ` Woods, Brian
@ 2018-12-05 16:50   ` Jan Beulich
  2018-12-05 17:09     ` Andrew Cooper
  2018-12-06 10:59   ` Jan Beulich
  2 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2018-12-05 16:50 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> +static void __init noinline amd_probe_legacy_ssbd(void)
> +{
> +	uint64_t new;
> +
> +	/*
> +	 * Search for mechanisms of controlling Memory Disambiguation.
> +	 *
> +	 * If the CPU reports that it is fixed, there is nothing to do.  If we
> +	 * have an architectural MSR_SPEC_CTRL.SSBD control, leave everything
> +	 * to the common code.
> +	 */
> +	if (cpu_has_amd_ssb_no || cpu_has_amd_ssbd)
> +		return;
> +
> +	/* Use MSR_VIRT_SPEC_CTRL if our hypervisor offers it. */
> +	if (cpu_has_virt_sc_ssbd) {
> +		setup_force_cpu_cap(X86_FEATURE_LEGACY_SSBD);

The only issue I have is with this double meaning of the new
synthetic feature here and ...

> +		return;
> +	}
> +
> +	/* Probe for LS_CFG settings. */
> +	switch (boot_cpu_data.x86) {
> +	default: return; /* No known LS_CFG settings. */
> +	case 0x15: ls_cfg_ssbd_mask = 1ull << 54; break;
> +	case 0x16: ls_cfg_ssbd_mask = 1ull << 33; break;
> +	case 0x17: ls_cfg_ssbd_mask = 1ull << 10; break;
> +	}
> +
> +	/*
> +	 * MSR_AMD64_LS_CFG isn't architectural, and may not be virtualised
> +	 * fully.  Check that we can actually flip the bit before concluding
> +	 * that LS_CFG is available for use.
> +	 */
> +	if (rdmsr_safe(MSR_AMD64_LS_CFG, ls_cfg_base) ||
> +	    wrmsr_safe(MSR_AMD64_LS_CFG, ls_cfg_base ^ ls_cfg_ssbd_mask))
> +		return;
> +
> +	rdmsrl(MSR_AMD64_LS_CFG, new);
> +	if (new != (ls_cfg_base ^ ls_cfg_ssbd_mask))
> +		return;
> +
> +	/*
> +	 * Leave ls_cfg_base with the bit clear.  This is Xen's overall
> +	 * default, and it simplifies the context switch logic.
> +	 */
> +	ls_cfg_base &= ~ls_cfg_ssbd_mask;
> +	if ((new != ls_cfg_base) && wrmsr_safe(MSR_AMD64_LS_CFG, ls_cfg_base))
> +		return;
> +
> +	/* LS_CFG appears to work fully.  Lets choose to use it. */
> +	setup_force_cpu_cap(X86_FEATURE_LEGACY_SSBD);

... here. Granted you explicitly say so ...

> --- a/xen/include/asm-x86/cpufeatures.h
> +++ b/xen/include/asm-x86/cpufeatures.h
> @@ -25,6 +25,7 @@ XEN_CPUFEATURE(XEN_SMAP,        (FSCAPINTS+0)*32+11) /* SMAP gets used by Xen it
>  XEN_CPUFEATURE(LFENCE_DISPATCH, (FSCAPINTS+0)*32+12) /* lfence set as Dispatch Serialising */
>  XEN_CPUFEATURE(IND_THUNK_LFENCE,(FSCAPINTS+0)*32+13) /* Use IND_THUNK_LFENCE */
>  XEN_CPUFEATURE(IND_THUNK_JMP,   (FSCAPINTS+0)*32+14) /* Use IND_THUNK_JMP */
> +XEN_CPUFEATURE(LEGACY_SSBD,     (FSCAPINTS+0)*32+15) /* LS_CFG or VIRT_SPEC_CTRL available for SSBD */

... here, but I still will need to see how this gets used before
giving my ack here. Additionally I can see "legacy" as a suitable
name for the LS_CFG approach, but does this also fit the
VIRT_SPEC_CTRL one?

One other question: There's now redundant code in init_amd()
handling opt_ssbd. Would it not fit here to remove/replace that?

Jan




* Re: [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h
  2018-12-03 16:18 ` [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h Andrew Cooper
  2018-12-04 16:38   ` Woods, Brian
@ 2018-12-05 16:57   ` Jan Beulich
  2018-12-05 17:05     ` Andrew Cooper
  1 sibling, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2018-12-05 16:57 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -419,6 +419,97 @@ static void __init noinline amd_probe_legacy_ssbd(void)
>  }
>  
>  /*
> + * This is all a gross hack, but Xen really doesn't have flexible-enough
> + * per-cpu infrastructure to do it properly.  For Zen(v1) with SMT active,
> + * MSR_AMD64_LS_CFG is per-core rather than per-thread, so we need a per-core
> + * spinlock to synchronise updates of the MSR.
> + *
> + * We can't use per-cpu state because taking one CPU offline would free state
> + * under the feet of another.  Ideally, we'd allocate memory on the AP boot
> + * path, but by the time the sibling information is calculated sufficiently
> + * for us to locate the per-core state, it's too late to fail the AP boot.
> + *
> + * We also can't afford to end up in a heterogeneous scenario with some CPUs
> + * unable to safely use LS_CFG.
> + *
> + * Therefore, we have to allocate for the worse-case scenario, which is
> + * believed to be 4 sockets.  Any allocation failure cause us to turn LS_CFG
> + * off, as this is fractionally better than failing to boot.
> + */
> +static struct ssbd_ls_cfg {
> +	spinlock_t lock;
> +	unsigned int disable_count;
> +} *ssbd_ls_cfg[4];

Same question as to Brian for his original code: Instead of the
hard-coding of 4, can't you use nr_sockets here?
smp_prepare_cpus() runs before pre-SMP initcalls after all.

Jan




* Re: [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h
  2018-12-05 16:57   ` Jan Beulich
@ 2018-12-05 17:05     ` Andrew Cooper
  2018-12-06  8:54       ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-05 17:05 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

On 05/12/2018 16:57, Jan Beulich wrote:
>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
>> --- a/xen/arch/x86/cpu/amd.c
>> +++ b/xen/arch/x86/cpu/amd.c
>> @@ -419,6 +419,97 @@ static void __init noinline amd_probe_legacy_ssbd(void)
>>  }
>>  
>>  /*
>> + * This is all a gross hack, but Xen really doesn't have flexible-enough
>> + * per-cpu infrastructure to do it properly.  For Zen(v1) with SMT active,
>> + * MSR_AMD64_LS_CFG is per-core rather than per-thread, so we need a per-core
>> + * spinlock to synchronise updates of the MSR.
>> + *
>> + * We can't use per-cpu state because taking one CPU offline would free state
>> + * under the feet of another.  Ideally, we'd allocate memory on the AP boot
>> + * path, but by the time the sibling information is calculated sufficiently
>> + * for us to locate the per-core state, it's too late to fail the AP boot.
>> + *
>> + * We also can't afford to end up in a heterogeneous scenario with some CPUs
>> + * unable to safely use LS_CFG.
>> + *
>> + * Therefore, we have to allocate for the worse-case scenario, which is
>> + * believed to be 4 sockets.  Any allocation failure cause us to turn LS_CFG
>> + * off, as this is fractionally better than failing to boot.
>> + */
>> +static struct ssbd_ls_cfg {
>> +	spinlock_t lock;
>> +	unsigned int disable_count;
>> +} *ssbd_ls_cfg[4];
> Same question as to Brian for his original code: Instead of the
> hard-coding of 4, can't you use nr_sockets here?
> smp_prepare_cpus() runs before pre-SMP initcalls after all.

nr_sockets has zero connection with reality as far as I can tell.

On this particular box it reports 6 when the correct answer is 2.  I've
got some Intel boxes where nr_sockets reports 15 and the correct answer
is 4.

~Andrew


* Re: [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot
  2018-12-05 16:50   ` Jan Beulich
@ 2018-12-05 17:09     ` Andrew Cooper
  2018-12-06  8:53       ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-05 17:09 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

On 05/12/2018 16:50, Jan Beulich wrote:
>
>> --- a/xen/include/asm-x86/cpufeatures.h
>> +++ b/xen/include/asm-x86/cpufeatures.h
>> @@ -25,6 +25,7 @@ XEN_CPUFEATURE(XEN_SMAP,        (FSCAPINTS+0)*32+11) /* SMAP gets used by Xen it
>>  XEN_CPUFEATURE(LFENCE_DISPATCH, (FSCAPINTS+0)*32+12) /* lfence set as Dispatch Serialising */
>>  XEN_CPUFEATURE(IND_THUNK_LFENCE,(FSCAPINTS+0)*32+13) /* Use IND_THUNK_LFENCE */
>>  XEN_CPUFEATURE(IND_THUNK_JMP,   (FSCAPINTS+0)*32+14) /* Use IND_THUNK_JMP */
>> +XEN_CPUFEATURE(LEGACY_SSBD,     (FSCAPINTS+0)*32+15) /* LS_CFG or VIRT_SPEC_CTRL available for SSBD */
> ... here, but I still will need to see how this gets used before
> giving my ack here. Additionally I can see "legacy" as a suitable
> name for the LS_CFG approach, but does this also fit the
> VIRT_SPEC_CTRL one?

In practice, VIRT_SPEC_CTRL means "your hypervisor is using LS_CFG on
your behalf".

As to the joint meaning, that's because it is the most appropriate (i.e.
simple) way to structure the code.

>
> One other question: There's now redundant code in init_amd()
> handling opt_ssbd. Would it not fit here to remove/replace that?

The code in init_amd() only becomes redundant in patch 7, and that is
where it is removed.

~Andrew


* Re: [PATCH 4/9] x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support
  2018-12-05 16:39   ` Jan Beulich
@ 2018-12-05 17:50     ` Andrew Cooper
  2018-12-06  8:49       ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-05 17:50 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

On 05/12/2018 16:39, Jan Beulich wrote:
>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
>> At the time of writing, the spec is available from:
>>
>>   https://developer.amd.com/wp-content/resources/124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
>>
>> Future hardware (Zen v2) is expected to have hardware MSR_SPEC_CTRL support,
>> including SPEC_CTRL.SSBD, and with the expectation that this will be directly
>> passed through to guests for performance.
>>
>> On currently released hardware, the only mechanism available is the legacy
>> LS_CFG option, and this is very expensive to use.  Furthermore, emulating
>> MSR_SPEC_CTRL via interception is prohibitively expensive, as certain OSes use
>> the write-discard flexibility to simplify their entry/exit logic.
> With this, ...
>
>> As an alternative, MSR_VIRT_SPEC_CTRL is specified as an architectural control
>> (with semantics equivilent to MSR_SPEC_CTRL) which is provided by the
>> hypervisor.  This abstracts away the model-specific details of the LS_CFG
>> mechanism, which allows migration safety to be retained.
> ... how is this any less expensive, when it necessarily requires
> interception?

For an individual update?  No difference.

For the case where a certain OS takes the presence of MSR_SPEC_CTRL to
mean that it needs to write MSR_SPEC_CTRL.IBRS wherever it would choose
to if IBRS was actually available, the perf difference is in the number
of writes which occur and get intercepted.

>  At least the way things are worded, I'm getting
> the impression that you consider this less expensive.
>
>> --- a/tools/libxl/libxl_cpuid.c
>> +++ b/tools/libxl/libxl_cpuid.c
>> @@ -245,6 +245,11 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
>>          {"invtsc",       0x80000007, NA, CPUID_REG_EDX,  8,  1},
>>  
>>          {"ibpb",         0x80000008, NA, CPUID_REG_EBX, 12,  1},
>> +        {"amd_ibrs",     0x80000008, NA, CPUID_REG_EBX, 14,  1},
> While of the following two the names indeed clash with Intel's, the
> above one doesn't. Any reason you still gave it an amd_ prefix?

I really really wish we didn't have duplicate bits, or that I'd started
with blanket Intel and AMD prefixes.

I deliberately chose "IBRSB" for the Intel bit due to its double
meaning, but using just IBRS here on its own is liable to get confused
with the Intel bit.

>
>> +        {"amd_stibp",    0x80000008, NA, CPUID_REG_EBX, 15,  1},
>> +        {"amd_ssbd",     0x80000008, NA, CPUID_REG_EBX, 24,  1},
>> +        {"virt_sc_ssbd", 0x80000008, NA, CPUID_REG_EBX, 25,  1},
>> +        {"amd_ssb_no",   0x80000008, NA, CPUID_REG_EBX, 26,  1},
> Since you're at it, why not also introduce names for bits 16-18
> at this occasion?

I haven't previously filled in names for the sake of it.

The reason that ibrs/stibp/ssbd are here is because they're related and
I've also got a few follow-on patches to support MSR_VIRT_SPEC_CTRL on
Rome hardware via MSR_SPEC_CTRL, but I need an SDP and some
experimentation time before I'd be happy posting them.

But to address your question, I can't locate those bits at all.  Not
even in the NDA docs or Linux source.

~Andrew


* Re: [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests
  2018-12-05  8:41     ` Jan Beulich
@ 2018-12-05 19:09       ` Andrew Cooper
  2018-12-06  8:59         ` Jan Beulich
  2018-12-06 19:41       ` Woods, Brian
  1 sibling, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-05 19:09 UTC (permalink / raw)
  To: Jan Beulich, Brian Woods; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

On 05/12/2018 08:41, Jan Beulich wrote:
>>>> On 04.12.18 at 22:35, <Brian.Woods@amd.com> wrote:
>> The other thing I don't get is why advertise virtualized SSBD when the
>> guest setting it does nothing?  If ssbd_opt=true is set, as the code is
>> now, why even advertise it to the guest?  I'd suggest either allowing
>> the guest to turn it off or not advertise it at all (when ssbd_opt =
>> true).
> I think it's better to advertise the feature nevertheless: Otherwise
> the guest might either try some other way of mitigating the
> (believed) vulnerability, or it may report in its logs that it's vulnerable
> (without mitigation) when it really isn't.

opt_ssbd=true is there for the truly paranoid, and no one uses it in
practice.  It is a substantial performance hit for a corner case which
doesn't really manifest in compiled code (Furthermore, it is a corner
case which is far harder to hit on AMD hardware, as memory operands with
the same base pointer are guaranteed not to speculatively pass).  JIT'ed
environments are the ones at risk, and that is why there is a prctl()
for fine grained control.

The ~100% common case is opt_ssbd=0 from the host administrator's point
of view, spec_store_bypass_disable=prctl (seriously - when did cmdline
options turn into essays?) from the guest kernel's point of view, and
certain processes specifically opting into SSBD via prctl().
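
(For reference, the per-process opt-in is roughly the following - a sketch
of the Linux prctl() interface, not something introduced by this series:)

    #include <sys/prctl.h>
    #include <linux/prctl.h>

    /* A JIT or other at-risk process asking for SSBD for itself only. */
    static int opt_into_ssbd(void)
    {
        return prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                     PR_SPEC_DISABLE, 0, 0);
    }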

However, we need MSR_VIRT_SPEC_CTRL to function correctly even when it
doesn't have an effect in hardware.

One contrived scenario is when a customer has opt_ssbd=0 on one server,
and opt_ssbd=1 on the other, and wants VMs to migrate between the two.

A more realistic scenario is when a customer has purchased some Rome
hardware, and wants to upgrade their systems.  They will want to migrate
their VMs onto newer hardware, which will involve keeping the
guest-visible MSR_VIRT_SPEC_CTRL working as before.

Hopefully I'll have default-vs-max CPUID policies working by that point,
at which point the default policy on Rome should have MSR_SPEC_CTRL, and the
max policy have both MSR_SPEC_CTRL and MSR_VIRT_SPEC_CTRL, so we can
safely migrate in VMs which are using it, but not offer it by default.

~Andrew


* Re: [PATCH 4/9] x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support
  2018-12-05 17:50     ` Andrew Cooper
@ 2018-12-06  8:49       ` Jan Beulich
  2018-12-06 18:35         ` Andrew Cooper
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2018-12-06  8:49 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 05.12.18 at 18:50, <andrew.cooper3@citrix.com> wrote:
> On 05/12/2018 16:39, Jan Beulich wrote:
>>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
>>> As an alternative, MSR_VIRT_SPEC_CTRL is specified as an architectural control
>>> (with semantics equivalent to MSR_SPEC_CTRL) which is provided by the
>>> hypervisor.  This abstracts away the model-specific details of the LS_CFG
>>> mechanism, which allows migration safety to be retained.
>> ... how is this any less expensive, when it necessarily requires
>> interception?
> 
> For an individual update?  No difference.
> 
> For the case where a certain OS takes the presence of MSR_SPEC_CTRL to
> mean that it needs to write MSR_SPEC_CTRL.IBRS wherever it would choose
> to if IBRS was actually available, the perf difference is in the number
> of writes which occur and get intercepted.

So perhaps an OS anomaly; I'd certainly expect no difference in
behavior wrt the physical MSR's availability and the virtual one's.
Of course unless they've not implemented support for the virtual
one yet, but then as soon as they do the difference ought to
vanish.

>>> +        {"amd_stibp",    0x80000008, NA, CPUID_REG_EBX, 15,  1},
>>> +        {"amd_ssbd",     0x80000008, NA, CPUID_REG_EBX, 24,  1},
>>> +        {"virt_sc_ssbd", 0x80000008, NA, CPUID_REG_EBX, 25,  1},
>>> +        {"amd_ssb_no",   0x80000008, NA, CPUID_REG_EBX, 26,  1},
>> Since you're at it, why not also introduce names for bits 16-18
>> at this occasion?
> 
> I haven't previously filled in names for the sake of it.
> 
> The reason that ibrs/stibp/ssbd are here is because they're related and
> I've also got a followon few patches to support MSR_VIRT_SPEC_CTRL on
> Rome hardware via MSR_SPEC_CTRL, but I need an SDP and some
> experimentation time before I'd be happy posting them.
> 
> But to address your question, I can't locate those bits at all.  Not
> even in the NDA docs or Linux source.

Hmm, that's certainly odd. I found them quite some time ago in this
public whitepaper:
https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf
They're all clearly IBRS/STIBP related.

Jan




* Re: [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot
  2018-12-05 17:09     ` Andrew Cooper
@ 2018-12-06  8:53       ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2018-12-06  8:53 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 05.12.18 at 18:09, <andrew.cooper3@citrix.com> wrote:
> On 05/12/2018 16:50, Jan Beulich wrote:
>>
>>> --- a/xen/include/asm-x86/cpufeatures.h
>>> +++ b/xen/include/asm-x86/cpufeatures.h
>>> @@ -25,6 +25,7 @@ XEN_CPUFEATURE(XEN_SMAP,        (FSCAPINTS+0)*32+11) /* SMAP gets used by Xen it
>>>  XEN_CPUFEATURE(LFENCE_DISPATCH, (FSCAPINTS+0)*32+12) /* lfence set as Dispatch Serialising */
>>>  XEN_CPUFEATURE(IND_THUNK_LFENCE,(FSCAPINTS+0)*32+13) /* Use IND_THUNK_LFENCE */
>>>  XEN_CPUFEATURE(IND_THUNK_JMP,   (FSCAPINTS+0)*32+14) /* Use IND_THUNK_JMP */
>>> +XEN_CPUFEATURE(LEGACY_SSBD,     (FSCAPINTS+0)*32+15) /* LS_CFG or VIRT_SPEC_CTRL available for SSBD */
>> ... here, but I still will need to see how this gets used before
>> giving my ack here. Additionally I can see "legacy" as a suitable
>> name for the LS_CFG approach, but does this also fit the
>> VIRT_SPEC_CTRL one?
> 
> In practice, VIRT_SPEC_CTRL means "your hypervisor is using LS_CFG on
> your behalf".
> 
> As to the joint meaning, that's because it is the most appropriate (i.e.
> simple) way to structure the code.

Synthetic feature bits, as opposed to simple boolean variables, are
mainly (if not exclusively) meant to allow alternatives patching to
key off of them. For two different approaches like the ones here this
seems unlikely to be the goal, but since I haven't made it to the end
of the series yet, I didn't want to judge early.

Jan




* Re: [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h
  2018-12-05 17:05     ` Andrew Cooper
@ 2018-12-06  8:54       ` Jan Beulich
  2018-12-06 18:46         ` Andrew Cooper
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2018-12-06  8:54 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 05.12.18 at 18:05, <andrew.cooper3@citrix.com> wrote:
> On 05/12/2018 16:57, Jan Beulich wrote:
>>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
>>> --- a/xen/arch/x86/cpu/amd.c
>>> +++ b/xen/arch/x86/cpu/amd.c
>>> @@ -419,6 +419,97 @@ static void __init noinline amd_probe_legacy_ssbd(void)
>>>  }
>>>  
>>>  /*
>>> + * This is all a gross hack, but Xen really doesn't have flexible-enough
>>> + * per-cpu infrastructure to do it properly.  For Zen(v1) with SMT active,
>>> + * MSR_AMD64_LS_CFG is per-core rather than per-thread, so we need a per-core
>>> + * spinlock to synchronise updates of the MSR.
>>> + *
>>> + * We can't use per-cpu state because taking one CPU offline would free state
>>> + * under the feet of another.  Ideally, we'd allocate memory on the AP boot
>>> + * path, but by the time the sibling information is calculated sufficiently
>>> + * for us to locate the per-core state, it's too late to fail the AP boot.
>>> + *
>>> + * We also can't afford to end up in a heterogeneous scenario with some CPUs
>>> + * unable to safely use LS_CFG.
>>> + *
>>> + * Therefore, we have to allocate for the worse-case scenario, which is
>>> + * believed to be 4 sockets.  Any allocation failure cause us to turn LS_CFG
>>> + * off, as this is fractionally better than failing to boot.
>>> + */
>>> +static struct ssbd_ls_cfg {
>>> +	spinlock_t lock;
>>> +	unsigned int disable_count;
>>> +} *ssbd_ls_cfg[4];
>> Same question as to Brian for his original code: Instead of the
>> hard-coding of 4, can't you use nr_sockets here?
>> smp_prepare_cpus() runs before pre-SMP initcalls after all.
> 
> nr_sockets has zero connection with reality as far as I can tell.
> 
> On this particular box it reports 6 when the correct answer is 2.  I've
> got some Intel boxes where nr_sockets reports 15 and the correct answer
> is 4.

If you look back at when it was introduced, the main goal was
for it to never be too low. Any improvements to its calculation
are welcome, provided they maintain that guarantee. Too high
a socket count is imo still better than a hard-coded one.
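
To illustrate what I have in mind - a sketch only, which assumes
nr_sockets is usable by the time this allocation runs (that being the
point under discussion) and reuses ssbd_max_cores from your patch:

	static struct ssbd_ls_cfg {
		spinlock_t lock;
		unsigned int disable_count;
	} **ssbd_ls_cfg;

	static int __init ssbd_ls_cfg_alloc(void)
	{
		unsigned int socket, core;

		ssbd_ls_cfg = xzalloc_array(struct ssbd_ls_cfg *, nr_sockets);
		if (!ssbd_ls_cfg)
			return -ENOMEM;

		for (socket = 0; socket < nr_sockets; socket++) {
			ssbd_ls_cfg[socket] =
				xzalloc_array(struct ssbd_ls_cfg, ssbd_max_cores);
			if (!ssbd_ls_cfg[socket])
				return -ENOMEM; /* caller turns LS_CFG use off */

			for (core = 0; core < ssbd_max_cores; core++)
				spin_lock_init(&ssbd_ls_cfg[socket][core].lock);
		}

		return 0;
	}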

Jan




* Re: [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests
  2018-12-05 19:09       ` Andrew Cooper
@ 2018-12-06  8:59         ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2018-12-06  8:59 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 05.12.18 at 20:09, <andrew.cooper3@citrix.com> wrote:
> On 05/12/2018 08:41, Jan Beulich wrote:
>>>>> On 04.12.18 at 22:35, <Brian.Woods@amd.com> wrote:
>>> The other thing I don't get is why advertise virtualized SSBD when the
>>> guest setting it does nothing?  If ssbd_opt=true is set, as the code is
>>> now, why even advertise it to the guest?  I'd suggest either allowing
>>> the guest to turn it off or not advertise it at all (when ssbd_opt =
>>> true).
>> I think it's better to advertise the feature nevertheless: Otherwise
>> the guest might either try some other way of mitigating the
>> (believed) vulnerability, or it may report in its logs that it's vulnerable
>> (without mitigation) when it really isn't.
> 
>> opt_ssbd=true is there for the truly paranoid, and no one uses it in
> practice.

Be careful with such claims. From logs I've seen I know different. (I'm
not going to claim though that I'm sure they really know why they do
so, but you need to accept reasons like "just to be on the safe side".)

Jan




* Re: [PATCH 7/9] x86/amd: Support context switching legacy SSBD interface
  2018-12-03 16:18 ` [PATCH 7/9] x86/amd: Support context switching legacy SSBD interface Andrew Cooper
  2018-12-04 20:27   ` Woods, Brian
@ 2018-12-06 10:51   ` Jan Beulich
  2018-12-06 18:55     ` Andrew Cooper
  1 sibling, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2018-12-06 10:51 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -442,6 +442,74 @@ static struct ssbd_ls_cfg {
>  } *ssbd_ls_cfg[4];
>  static unsigned int ssbd_max_cores;
>  
> +/*
> + * Must only be called when the LEGACY_SSBD is in used.  Called with NULL to

... when LEGACY_SSBD is in use ... ?

> + * switch back to Xen's default value.
> + */
> +void amd_ctxt_switch_legacy_ssbd(const struct vcpu *next)
> +{
> +	static DEFINE_PER_CPU(bool, ssbd);
> +	bool *this_ssbd = &this_cpu(ssbd);
> +	bool disable = opt_ssbd;
> +	struct cpuinfo_x86 *c = &current_cpu_data;

const

> +	unsigned int socket = c->phys_proc_id, core = c->cpu_core_id;
> +	struct ssbd_ls_cfg *cfg;
> +	uint64_t val;
> +
> +	ASSERT(cpu_has_legacy_ssbd);
> +
> +	/*
> +	 * Update hardware lazily, as these MSRs are expensive.  However, on
> +	 * the boot paths which pass NULL, force a write to set a consistent
> +	 * initial state.
> +	 */
> +	if (*this_ssbd == disable && next)
> +		return;
> +
> +	if (cpu_has_virt_sc_ssbd) {
> +		wrmsrl(MSR_VIRT_SPEC_CTRL,
> +		       disable ? SPEC_CTRL_SSBD : 0);
> +		goto done;
> +	}
> +
> +	val = ls_cfg_base | (disable ? ls_cfg_ssbd_mask : 0);
> +
> +	if (c->x86 < 0x17 || c->x86_num_siblings == 1) {
> +		/* No threads to be concerned with. */
> +		wrmsrl(MSR_AMD64_LS_CFG, val);
> +		goto done;
> +	}
> +
> +	/* Check that we won't overflow the worse-case allocation. */
> +	BUG_ON(socket >= ARRAY_SIZE(ssbd_ls_cfg));
> +	BUG_ON(core   >= ssbd_max_cores);

Wouldn't it be better to fail onlining of such CPUs?

> +	cfg = &ssbd_ls_cfg[socket][core];
> +
> +	if (disable) {
> +		spin_lock(&cfg->lock);
> +
> +		/* First sibling to disable updates hardware. */
> +		if (!cfg->disable_count)
> +			wrmsrl(MSR_AMD64_LS_CFG, val);
> +		cfg->disable_count++;
> +
> +		spin_unlock(&cfg->lock);
> +	} else {
> +		spin_lock(&cfg->lock);
> +
> +		/* Last sibling to enable updates hardware. */
> +		cfg->disable_count--;
> +		if (!cfg->disable_count)
> +			wrmsrl(MSR_AMD64_LS_CFG, val);
> +
> +		spin_unlock(&cfg->lock);
> +	}

Any reason for duplicating the spin_{,un}lock() calls?

Jan




* Re: [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests
  2018-12-03 16:18 ` [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests Andrew Cooper
  2018-12-04 21:35   ` Woods, Brian
@ 2018-12-06 10:55   ` Jan Beulich
  1 sibling, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2018-12-06 10:55 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> The semantics of MSR_VIRT_SPEC_CTRL are that unknown bits are write-discard
> and read as zero.  Only VIRT_SPEC_CTRL.SSBD is defined at the moment.
> 
> To facilitate making this per-guest, the legacy SSBD state needs context
> switching between vcpus.  amd_ctxt_switch_legacy_ssbd() is updated to take the
> vcpu's setting into account.  Furthermore, the guest's chosen value needs
> preserving across migrate.
> 
> This marks a subtle change in how `ssbd=` behaves.  If Xen wishes SSBD to be
> asserted, it remains set in hardware all the time.  In the default case of Xen
> wishing SSBD not to be asserted, the value set in hardware is the guest's
> choice.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
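
(For reference, the described semantics boil down to roughly the sketch
below; the helper names and the way the guest value is stored are
illustrative only, not the series' actual code.)

/* Only VIRT_SPEC_CTRL.SSBD is implemented; all other bits are reserved. */
#define VIRT_SPEC_CTRL_KNOWN_BITS SPEC_CTRL_SSBD

/* Reads return only the known bits; reserved bits read as zero. */
static uint64_t virt_spec_ctrl_read(uint64_t stored)
{
    return stored & VIRT_SPEC_CTRL_KNOWN_BITS;
}

/* Writes silently discard unknown bits rather than faulting. */
static uint64_t virt_spec_ctrl_write(uint64_t val)
{
    return val & VIRT_SPEC_CTRL_KNOWN_BITS;
}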




* Re: [PATCH 9/9] x86/amd: Offer MSR_VIRT_SPEC_CTRL to guests
  2018-12-03 16:18 ` [PATCH 9/9] x86/amd: Offer MSR_VIRT_SPEC_CTRL to guests Andrew Cooper
@ 2018-12-06 10:57   ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2018-12-06 10:57 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -370,6 +370,16 @@ static void __init guest_common_feature_adjustments(uint32_t *fs)
>       */
>      if ( host_cpuid_policy.feat.ibrsb )
>          __set_bit(X86_FEATURE_IBPB, fs);
> +
> +    /*
> +     * In practice, we can offer VIRT_SC_SSBD on any hardware with legacy_ssbd
> +     * or msr_spec_ctrl, but until we've got a proper split between default
> +     * and max policies, avoid offering it in cases where the guest shouldn't
> +     * be using it.
> +     */
> +    __clear_bit(X86_FEATURE_VIRT_SC_SSBD, fs);
> +    if ( cpu_has_legacy_ssbd )
> +        __set_bit(X86_FEATURE_VIRT_SC_SSBD, fs);
>  }

Is the comment really correct wrt msr_spec_ctrl? The MSR can
exist without there being support for SSBD, can't it?

Jan




* Re: [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot
  2018-12-03 16:18 ` [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot Andrew Cooper
  2018-12-04 16:15   ` Woods, Brian
  2018-12-05 16:50   ` Jan Beulich
@ 2018-12-06 10:59   ` Jan Beulich
  2018-12-28 16:30     ` Andrew Cooper
  2 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2018-12-06 10:59 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Wei Liu, Brian Woods, Roger Pau Monne

>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/include/asm-x86/cpufeatures.h
> +++ b/xen/include/asm-x86/cpufeatures.h
> @@ -25,6 +25,7 @@ XEN_CPUFEATURE(XEN_SMAP,        (FSCAPINTS+0)*32+11) /* SMAP gets used by Xen it
>  XEN_CPUFEATURE(LFENCE_DISPATCH, (FSCAPINTS+0)*32+12) /* lfence set as Dispatch Serialising */
>  XEN_CPUFEATURE(IND_THUNK_LFENCE,(FSCAPINTS+0)*32+13) /* Use IND_THUNK_LFENCE */
>  XEN_CPUFEATURE(IND_THUNK_JMP,   (FSCAPINTS+0)*32+14) /* Use IND_THUNK_JMP */
> +XEN_CPUFEATURE(LEGACY_SSBD,     (FSCAPINTS+0)*32+15) /* LS_CFG or VIRT_SPEC_CTRL available for SSBD */

As already indicated in another reply, with there not being
any alternatives patching based on this, I don't think it should
be a synthetic feature. Use an ordinary bool variable instead.
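
Concretely, I'd expect something along these lines to suffice (a sketch
only, with a made-up helper name; the exact probe condition is whatever
the patch already computes):

/* Probed once at boot, instead of forcing a synthetic CPUID feature bit. */
static bool __read_mostly legacy_ssbd;

static void __init amd_set_legacy_ssbd(void)
{
	/* SSBD reachable via either VIRT_SPEC_CTRL or LS_CFG. */
	legacy_ssbd = cpu_has_virt_sc_ssbd || ls_cfg_ssbd_mask;
}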

Jan




* Re: [PATCH 3/9] x86/cpuid: Extend the cpuid= command line option to support all named features
  2018-12-03 16:18 ` [PATCH 3/9] x86/cpuid: Extend the cpuid= command line option to support all named features Andrew Cooper
  2018-12-04 16:28   ` Jan Beulich
@ 2018-12-06 12:52   ` Wei Liu
  1 sibling, 0 replies; 51+ messages in thread
From: Wei Liu @ 2018-12-06 12:52 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Mon, Dec 03, 2018 at 04:18:16PM +0000, Andrew Cooper wrote:
> For gen-cpuid.py, fix a comment describing self.names, and generate the
> reverse mapping in self.values.  Write out INIT_FEATURE_NAMES which maps a
> string name to a bit position.
> 
> For parse_cpuid(), introduce a slightly fuzzy strcmp() to accept changes in
> punctuation, and perform a binary search over INIT_FEATURE_NAMES.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> 
> Slightly RFC, because I'm not entirely certain if this is a good idea or not.
> ---
>  xen/arch/x86/cpuid.c   | 91 ++++++++++++++++++++++++++++++++++++--------------
>  xen/tools/gen-cpuid.py | 22 ++++++++++--
>  2 files changed, 86 insertions(+), 27 deletions(-)
> 
> diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
> index 0591a7d..eb86a86 100644
> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -18,9 +18,34 @@ static const uint32_t hvm_shadow_featuremask[] = INIT_HVM_SHADOW_FEATURES;
>  static const uint32_t hvm_hap_featuremask[] = INIT_HVM_HAP_FEATURES;
>  static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
>  
> +/*
> + * Works like strcmp(), but customised specifically for this usecase.  'name'
> + * is a NUL terminated string.  's' is considered to match 'name' if the NUL
> + * terminator of 'name' match punctiation in 's'.

"punctuation".

Wei.
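
(For reference, an illustrative sketch of the comparison the commit message
describes, not the patch's actual code: the NUL terminating the table entry
is allowed to line up with punctuation in the command line string.)

/*
 * Sketch only: 'name' is a NUL-terminated feature name from the table, 's'
 * points into the cpuid= string.  The characters must agree up to name's
 * NUL, at which point 's' must also have ended or sit on punctuation
 * (',', '=', '-', ...), so e.g. "smep,..." still matches "smep".
 */
static bool feature_name_match(const char *s, const char *name)
{
    while ( *name && *s == *name )
        s++, name++;

    return !*name && (!*s || ispunct((unsigned char)*s));
}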


* Re: [PATCH 4/9] x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support
  2018-12-06  8:49       ` Jan Beulich
@ 2018-12-06 18:35         ` Andrew Cooper
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Cooper @ 2018-12-06 18:35 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

On 06/12/2018 08:49, Jan Beulich wrote:
>>>> +        {"amd_stibp",    0x80000008, NA, CPUID_REG_EBX, 15,  1},
>>>> +        {"amd_ssbd",     0x80000008, NA, CPUID_REG_EBX, 24,  1},
>>>> +        {"virt_sc_ssbd", 0x80000008, NA, CPUID_REG_EBX, 25,  1},
>>>> +        {"amd_ssb_no",   0x80000008, NA, CPUID_REG_EBX, 26,  1},
>>> Since you're at it, why not also introduce names for bits 16-18
>>> at this occasion?
>> I haven't previously filled in names for the sake of it.
>>
>> The reason that ibrs/stibp/ssbd are here is because they're related and
>> I've also got a followon few patches to support MSR_VIRT_SPEC_CTRL on
>> Rome hardware via MSR_SPEC_CTRL, but I need an SDP and some
>> experimentation time before I'd be happy posting them.
>>
>> But to address your question, I can't locate those bits at all.  Not
>> even in the NDA docs or Linux source.
> Hmm, that's certainly odd. I found them quite some time ago in this
> public whitepaper:
> https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf
> They're all clearly IBRS/STIBP related.

Oh - I'd completely forgotten about that whitepaper.  Some of the
details are superseded by the SSBD paper.

For now, I'll drop the bit names for features not used in this series. 
One way or another, doing anything with the others will require some
experimentation on hardware which supports them.

~Andrew


* Re: [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h
  2018-12-06  8:54       ` Jan Beulich
@ 2018-12-06 18:46         ` Andrew Cooper
  2018-12-06 19:25           ` Woods, Brian
  2018-12-07 10:17           ` Jan Beulich
  0 siblings, 2 replies; 51+ messages in thread
From: Andrew Cooper @ 2018-12-06 18:46 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

On 06/12/2018 08:54, Jan Beulich wrote:
>>>> On 05.12.18 at 18:05, <andrew.cooper3@citrix.com> wrote:
>> On 05/12/2018 16:57, Jan Beulich wrote:
>>>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
>>>> --- a/xen/arch/x86/cpu/amd.c
>>>> +++ b/xen/arch/x86/cpu/amd.c
>>>> @@ -419,6 +419,97 @@ static void __init noinline amd_probe_legacy_ssbd(void)
>>>>  }
>>>>  
>>>>  /*
>>>> + * This is all a gross hack, but Xen really doesn't have flexible-enough
>>>> + * per-cpu infrastructure to do it properly.  For Zen(v1) with SMT active,
>>>> + * MSR_AMD64_LS_CFG is per-core rather than per-thread, so we need a per-core
>>>> + * spinlock to synchronise updates of the MSR.
>>>> + *
>>>> + * We can't use per-cpu state because taking one CPU offline would free state
>>>> + * under the feet of another.  Ideally, we'd allocate memory on the AP boot
>>>> + * path, but by the time the sibling information is calculated sufficiently
>>>> + * for us to locate the per-core state, it's too late to fail the AP boot.
>>>> + *
>>>> + * We also can't afford to end up in a heterogeneous scenario with some CPUs
>>>> + * unable to safely use LS_CFG.
>>>> + *
>>>> + * Therefore, we have to allocate for the worse-case scenario, which is
>>>> + * believed to be 4 sockets.  Any allocation failure cause us to turn LS_CFG
>>>> + * off, as this is fractionally better than failing to boot.
>>>> + */
>>>> +static struct ssbd_ls_cfg {
>>>> +	spinlock_t lock;
>>>> +	unsigned int disable_count;
>>>> +} *ssbd_ls_cfg[4];
>>> Same question as to Brian for his original code: Instead of the
>>> hard-coding of 4, can't you use nr_sockets here?
>>> smp_prepare_cpus() runs before pre-SMP initcalls after all.
>> nr_sockets has zero connection with reality as far as I can tell.
>>
>> On this particular box it reports 6 when the correct answer is 2.  I've
>> got some Intel boxes where nr_sockets reports 15 and the correct answer
>> is 4.
> If you look back at when it was introduced, the main goal was
> for it to never be too low. Any improvements to its calculation
>> are welcome, provided they maintain that guarantee. Too high
>> a socket count is imo still better than a hard-coded one.

Even for the extra 2k of memory it will waste?

~Andrew


* Re: [PATCH 7/9] x86/amd: Support context switching legacy SSBD interface
  2018-12-06 10:51   ` Jan Beulich
@ 2018-12-06 18:55     ` Andrew Cooper
  2018-12-07 10:25       ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-06 18:55 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

On 06/12/2018 10:51, Jan Beulich wrote:
>
>> +	unsigned int socket = c->phys_proc_id, core = c->cpu_core_id;
>> +	struct ssbd_ls_cfg *cfg;
>> +	uint64_t val;
>> +
>> +	ASSERT(cpu_has_legacy_ssbd);
>> +
>> +	/*
>> +	 * Update hardware lazily, as these MSRs are expensive.  However, on
>> +	 * the boot paths which pass NULL, force a write to set a consistent
>> +	 * initial state.
>> +	 */
>> +	if (*this_ssbd == disable && next)
>> +		return;
>> +
>> +	if (cpu_has_virt_sc_ssbd) {
>> +		wrmsrl(MSR_VIRT_SPEC_CTRL,
>> +		       disable ? SPEC_CTRL_SSBD : 0);
>> +		goto done;
>> +	}
>> +
>> +	val = ls_cfg_base | (disable ? ls_cfg_ssbd_mask : 0);
>> +
>> +	if (c->x86 < 0x17 || c->x86_num_siblings == 1) {
>> +		/* No threads to be concerned with. */
>> +		wrmsrl(MSR_AMD64_LS_CFG, val);
>> +		goto done;
>> +	}
>> +
>> +	/* Check that we won't overflow the worse-case allocation. */
>> +	BUG_ON(socket >= ARRAY_SIZE(ssbd_ls_cfg));
>> +	BUG_ON(core   >= ssbd_max_cores);
> Wouldn't it be better to fail onlining of such CPUs?

How?  We've not currently got an ability to fail in the middle of
start_secondary(), which is why the previous patch really does go and
allocate the worst case.

These are here because I don't really trust the topology logic
(which turned out to be very wise, in retrospect), not because I
actually expect them to trigger from now on.

>
>> +	cfg = &ssbd_ls_cfg[socket][core];
>> +
>> +	if (disable) {
>> +		spin_lock(&cfg->lock);
>> +
>> +		/* First sibling to disable updates hardware. */
>> +		if (!cfg->disable_count)
>> +			wrmsrl(MSR_AMD64_LS_CFG, val);
>> +		cfg->disable_count++;
>> +
>> +		spin_unlock(&cfg->lock);
>> +	} else {
>> +		spin_lock(&cfg->lock);
>> +
>> +		/* Last sibling to enable updates hardware. */
>> +		cfg->disable_count--;
>> +		if (!cfg->disable_count)
>> +			wrmsrl(MSR_AMD64_LS_CFG, val);
>> +
>> +		spin_unlock(&cfg->lock);
>> +	}
> Any reason for duplicating the spin_{,un}lock() calls?

To avoid having a context-dependent jump in the critical region.  Then
again, I suppose that is completely dwarfed by the WRMSR.

~Andrew


* Re: [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h
  2018-12-06 18:46         ` Andrew Cooper
@ 2018-12-06 19:25           ` Woods, Brian
  2018-12-07 10:17           ` Jan Beulich
  1 sibling, 0 replies; 51+ messages in thread
From: Woods, Brian @ 2018-12-06 19:25 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Wei Liu, Xen-devel, Woods, Brian, Jan Beulich, Roger Pau Monne

On Thu, Dec 06, 2018 at 06:46:51PM +0000, Andy Cooper wrote:
> On 06/12/2018 08:54, Jan Beulich wrote:
> >>>> On 05.12.18 at 18:05, <andrew.cooper3@citrix.com> wrote:
> >> On 05/12/2018 16:57, Jan Beulich wrote:
> >>>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
> >>>> --- a/xen/arch/x86/cpu/amd.c
> >>>> +++ b/xen/arch/x86/cpu/amd.c
> >>>> @@ -419,6 +419,97 @@ static void __init noinline amd_probe_legacy_ssbd(void)
> >>>>  }
> >>>>  
> >>>>  /*
> >>>> + * This is all a gross hack, but Xen really doesn't have flexible-enough
> >>>> + * per-cpu infrastructure to do it properly.  For Zen(v1) with SMT active,
> >>>> + * MSR_AMD64_LS_CFG is per-core rather than per-thread, so we need a per-core
> >>>> + * spinlock to synchronise updates of the MSR.
> >>>> + *
> >>>> + * We can't use per-cpu state because taking one CPU offline would free state
> >>>> + * under the feet of another.  Ideally, we'd allocate memory on the AP boot
> >>>> + * path, but by the time the sibling information is calculated sufficiently
> >>>> + * for us to locate the per-core state, it's too late to fail the AP boot.
> >>>> + *
> >>>> + * We also can't afford to end up in a heterogeneous scenario with some CPUs
> >>>> + * unable to safely use LS_CFG.
> >>>> + *
> >>>> + * Therefore, we have to allocate for the worse-case scenario, which is
> >>>> + * believed to be 4 sockets.  Any allocation failure cause us to turn LS_CFG
> >>>> + * off, as this is fractionally better than failing to boot.
> >>>> + */
> >>>> +static struct ssbd_ls_cfg {
> >>>> +	spinlock_t lock;
> >>>> +	unsigned int disable_count;
> >>>> +} *ssbd_ls_cfg[4];
> >>> Same question as to Brian for his original code: Instead of the
> >>> hard-coding of 4, can't you use nr_sockets here?
> >>> smp_prepare_cpus() runs before pre-SMP initcalls after all.
> >> nr_sockets has zero connection with reality as far as I can tell.
> >>
> >> On this particular box it reports 6 when the correct answer is 2.  I've
> >> got some Intel boxes where nr_sockets reports 15 and the correct answer
> >> is 4.
> > If you look back at when it was introduced, the main goal was
> > for it to never be too low. Any improvements to its calculation
> > are welcome, provided they maintain that guarantee. Too high
> > a socket count is imo still better than a hard-coded one.
> 
> Even for the extra 2k of memory it will waste?
> 
> ~Andrew

Just as a side note, for processors that use MSR LS_CFG and have SMT
enabled (F17h), there should only be 2 physical sockets.  The 4 was a
worst case (chosen before some other information was available).
Realistically, there should only be a max of 2 physical sockets when
this is needed, although having 4 could be nice as a safety buffer and
only costs 16 bytes.

-- 
Brian Woods


* Re: [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests
  2018-12-05  8:41     ` Jan Beulich
  2018-12-05 19:09       ` Andrew Cooper
@ 2018-12-06 19:41       ` Woods, Brian
  1 sibling, 0 replies; 51+ messages in thread
From: Woods, Brian @ 2018-12-06 19:41 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Xen-devel, Woods, Brian, Wei Liu, Roger Pau Monne

On Wed, Dec 05, 2018 at 01:41:30AM -0700, Jan Beulich wrote:
> >>> On 04.12.18 at 22:35, <Brian.Woods@amd.com> wrote:
> > The other thing I don't get is why advertise virtualized SSBD when the
> > guest setting it does nothing?  If ssbd_opt=true is set, as the code is
> > now, why even advertise it to the guest?  I'd suggest either allowing
> > the guest to turn it off or not advertise it at all (when ssbd_opt =
> > true).
> 
> I think it's better to advertise the feature nevertheless: Otherwise
> the guest might either try some other way of mitigating the
> (believed) vulnerability, or it may report in its logs that it's vulnerable
> (without mitigation) when it really isn't.
> 
> Jan
> 

I can understand that reasoning, but I'd still argue it would be more
correct to have an additional option to force guests to use SSBD (as
ssbd=yes does in these patches), with the default ssbd=yes behaviour
allowing the guest to turn it off.  I'm not going to be adamant about it
though.

-- 
Brian Woods


* Re: [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h
  2018-12-06 18:46         ` Andrew Cooper
  2018-12-06 19:25           ` Woods, Brian
@ 2018-12-07 10:17           ` Jan Beulich
  1 sibling, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2018-12-07 10:17 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 06.12.18 at 19:46, <andrew.cooper3@citrix.com> wrote:
> On 06/12/2018 08:54, Jan Beulich wrote:
>>>>> On 05.12.18 at 18:05, <andrew.cooper3@citrix.com> wrote:
>>> On 05/12/2018 16:57, Jan Beulich wrote:
>>>>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
>>>>> --- a/xen/arch/x86/cpu/amd.c
>>>>> +++ b/xen/arch/x86/cpu/amd.c
>>>>> @@ -419,6 +419,97 @@ static void __init noinline amd_probe_legacy_ssbd(void)
>>>>>  }
>>>>>  
>>>>>  /*
>>>>> + * This is all a gross hack, but Xen really doesn't have flexible-enough
>>>>> + * per-cpu infrastructure to do it properly.  For Zen(v1) with SMT active,
>>>>> + * MSR_AMD64_LS_CFG is per-core rather than per-thread, so we need a per-core
>>>>> + * spinlock to synchronise updates of the MSR.
>>>>> + *
>>>>> + * We can't use per-cpu state because taking one CPU offline would free state
>>>>> + * under the feet of another.  Ideally, we'd allocate memory on the AP boot
>>>>> + * path, but by the time the sibling information is calculated sufficiently
>>>>> + * for us to locate the per-core state, it's too late to fail the AP boot.
>>>>> + *
>>>>> + * We also can't afford to end up in a heterogeneous scenario with some CPUs
>>>>> + * unable to safely use LS_CFG.
>>>>> + *
>>>>> + * Therefore, we have to allocate for the worse-case scenario, which is
>>>>> + * believed to be 4 sockets.  Any allocation failure cause us to turn LS_CFG
>>>>> + * off, as this is fractionally better than failing to boot.
>>>>> + */
>>>>> +static struct ssbd_ls_cfg {
>>>>> +	spinlock_t lock;
>>>>> +	unsigned int disable_count;
>>>>> +} *ssbd_ls_cfg[4];
>>>> Same question as to Brian for his original code: Instead of the
>>>> hard-coding of 4, can't you use nr_sockets here?
>>>> smp_prepare_cpus() runs before pre-SMP initcalls after all.
>>> nr_sockets has zero connection with reality as far as I can tell.
>>>
>>> On this particular box it reports 6 when the correct answer is 2.  I've
>>> got some Intel boxes where nr_sockets reports 15 and the correct answer
>>> is 4.
>> If you look back at when it was introduced, the main goal was
>> for it to never be too low. Any improvements to its calculation
>> are welcome, provided they maintain that guarantee. Too high
>> a socket count is imo still better than a hard-coded one.
> 
> Even for the extra 2k of memory it will waste?

2k sounds quite a lot here, but I guess the number depends on
core count. Question is whether you really need to allocate
space for all sockets up front. It would seem to me that instead
you could use a demand allocation scheme where a new
allocation is done only for the first core on a socket. If the
allocation needs to happen before the point where the
(socket,core,thread) tuple is known for the new AP, an
approach like psr_cpu_prepare()'s could be used (allocating a
new structure "just in case" when the previous one was
consumed).

Jan




* Re: [PATCH 7/9] x86/amd: Support context switching legacy SSBD interface
  2018-12-06 18:55     ` Andrew Cooper
@ 2018-12-07 10:25       ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2018-12-07 10:25 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 06.12.18 at 19:55, <andrew.cooper3@citrix.com> wrote:
> On 06/12/2018 10:51, Jan Beulich wrote:
>>
>>> +	unsigned int socket = c->phys_proc_id, core = c->cpu_core_id;
>>> +	struct ssbd_ls_cfg *cfg;
>>> +	uint64_t val;
>>> +
>>> +	ASSERT(cpu_has_legacy_ssbd);
>>> +
>>> +	/*
>>> +	 * Update hardware lazily, as these MSRs are expensive.  However, on
>>> +	 * the boot paths which pass NULL, force a write to set a consistent
>>> +	 * initial state.
>>> +	 */
>>> +	if (*this_ssbd == disable && next)
>>> +		return;
>>> +
>>> +	if (cpu_has_virt_sc_ssbd) {
>>> +		wrmsrl(MSR_VIRT_SPEC_CTRL,
>>> +		       disable ? SPEC_CTRL_SSBD : 0);
>>> +		goto done;
>>> +	}
>>> +
>>> +	val = ls_cfg_base | (disable ? ls_cfg_ssbd_mask : 0);
>>> +
>>> +	if (c->x86 < 0x17 || c->x86_num_siblings == 1) {
>>> +		/* No threads to be concerned with. */
>>> +		wrmsrl(MSR_AMD64_LS_CFG, val);
>>> +		goto done;
>>> +	}
>>> +
>>> +	/* Check that we won't overflow the worse-case allocation. */
>>> +	BUG_ON(socket >= ARRAY_SIZE(ssbd_ls_cfg));
>>> +	BUG_ON(core   >= ssbd_max_cores);
>> Wouldn't it be better to fail onlining of such CPUs?
> 
> How?  We've not currently got an ability to fail in the middle of
> start_secondary(), which is why the previous patch really does go and
> allocate the worst case.

smp_callin() very clearly has failure paths, and that's being
called out of start_secondary(). If you look there you'll notice
that it wasn't all that long ago that we added a second
failure path here besides the HVM enabling one (which has been
there virtually forever).

>>> +	cfg = &ssbd_ls_cfg[socket][core];
>>> +
>>> +	if (disable) {
>>> +		spin_lock(&cfg->lock);
>>> +
>>> +		/* First sibling to disable updates hardware. */
>>> +		if (!cfg->disable_count)
>>> +			wrmsrl(MSR_AMD64_LS_CFG, val);
>>> +		cfg->disable_count++;
>>> +
>>> +		spin_unlock(&cfg->lock);
>>> +	} else {
>>> +		spin_lock(&cfg->lock);
>>> +
>>> +		/* Last sibling to enable updates hardware. */
>>> +		cfg->disable_count--;
>>> +		if (!cfg->disable_count)
>>> +			wrmsrl(MSR_AMD64_LS_CFG, val);
>>> +
>>> +		spin_unlock(&cfg->lock);
>>> +	}
>> Any reason for duplicating the spin_{,un}lock() calls?
> 
> To avoid having a context-dependent jump in the critical region.  Then
> again, I suppose that is completely dwarfed by the WRMSR.

If you're afraid of extra branches, how about

	spin_lock(&cfg->lock);

	cfg->disable_count -= !disable;

	/* First sibling to disable and last sibling to enable updates hardware. */
	if (!cfg->disable_count)
		wrmsrl(MSR_AMD64_LS_CFG, val);

	cfg->disable_count += disable;

	spin_unlock(&cfg->lock);

(which I'd very much hope the compiler carries out with just
the single unavoidable branch in the middle)?

Jan




* Re: [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot
  2018-12-06 10:59   ` Jan Beulich
@ 2018-12-28 16:30     ` Andrew Cooper
  2019-01-04  8:58       ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Andrew Cooper @ 2018-12-28 16:30 UTC (permalink / raw)
  To: Jan Beulich, Xen-devel; +Cc: Wei Liu, Brian Woods, Roger Pau Monne

On 06/12/2018 10:59, Jan Beulich wrote:
>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
>> --- a/xen/include/asm-x86/cpufeatures.h
>> +++ b/xen/include/asm-x86/cpufeatures.h
>> @@ -25,6 +25,7 @@ XEN_CPUFEATURE(XEN_SMAP,        (FSCAPINTS+0)*32+11) /* SMAP gets used by Xen it
>>  XEN_CPUFEATURE(LFENCE_DISPATCH, (FSCAPINTS+0)*32+12) /* lfence set as Dispatch Serialising */
>>  XEN_CPUFEATURE(IND_THUNK_LFENCE,(FSCAPINTS+0)*32+13) /* Use IND_THUNK_LFENCE */
>>  XEN_CPUFEATURE(IND_THUNK_JMP,   (FSCAPINTS+0)*32+14) /* Use IND_THUNK_JMP */
>> +XEN_CPUFEATURE(LEGACY_SSBD,     (FSCAPINTS+0)*32+15) /* LS_CFG or VIRT_SPEC_CTRL available for SSBD */
> As already indicated in another reply, with there not being
> any alternatives patching based on this, I don't think it should
> be a synthetic feature. Use an ordinary bool variable instead.

What makes you believe that synthetic bits are only for livepatching? 
Less than half of the existing ones are used for that purpose, and it's
not how Linux treats them.

They are a collection of misc unsorted feature bits, and that's exactly
what I'm using this one for here.

Furthermore, having looked at the Hygon series, this logic is needed
there as well, so I'll be leaving this code in its current form (while
adjusting other bits to be less AMD specific).

~Andrew


* Re: [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot
  2018-12-28 16:30     ` Andrew Cooper
@ 2019-01-04  8:58       ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2019-01-04  8:58 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Xen-devel, Brian Woods, Roger Pau Monne

>>> On 28.12.18 at 17:30, <andrew.cooper3@citrix.com> wrote:
> On 06/12/2018 10:59, Jan Beulich wrote:
>>>>> On 03.12.18 at 17:18, <andrew.cooper3@citrix.com> wrote:
>>> --- a/xen/include/asm-x86/cpufeatures.h
>>> +++ b/xen/include/asm-x86/cpufeatures.h
>>> @@ -25,6 +25,7 @@ XEN_CPUFEATURE(XEN_SMAP,        (FSCAPINTS+0)*32+11) /* SMAP gets used by Xen it
>>>  XEN_CPUFEATURE(LFENCE_DISPATCH, (FSCAPINTS+0)*32+12) /* lfence set as Dispatch Serialising */
>>>  XEN_CPUFEATURE(IND_THUNK_LFENCE,(FSCAPINTS+0)*32+13) /* Use IND_THUNK_LFENCE */
>>>  XEN_CPUFEATURE(IND_THUNK_JMP,   (FSCAPINTS+0)*32+14) /* Use IND_THUNK_JMP */
>>> +XEN_CPUFEATURE(LEGACY_SSBD,     (FSCAPINTS+0)*32+15) /* LS_CFG or VIRT_SPEC_CTRL available for SSBD */
>> As already indicated in another reply, with there not being
>> any alternatives patching based on this, I don't think it should
>> be a synthetic feature. Use an ordinary bool variable instead.
> 
> What makes you believe that synthetic bits are only for livepatching? 
> Less than half of the existing ones are used for that purpose, and its
> not how Linux treats them.
> 
> They are a collection of misc unsorted feature bits, and that's exactly
> what I'm using it for here.

I'm convinced the introduction of synthetic bits in Linux was
for the purpose of alternatives patching, as anything else can
be expressed using ordinary variables. Even if Linux later went
and abused the construct, I don't think we should follow that
model; I'd actually aim at removing synthetic features which
aren't used for alternatives patching, as you can see from e.g.
56e619b4d3 ("x86: drop NO_XPTI synthetic feature").

Jan




Thread overview: 51+ messages
2018-12-03 16:18 [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Andrew Cooper
2018-12-03 16:18 ` [PATCH 1/9] x86/spec-ctrl: Drop the bti= command line option Andrew Cooper
2018-12-04 16:19   ` Jan Beulich
2018-12-03 16:18 ` [PATCH 2/9] x86/cpuid: Drop the synthetic X86_FEATURE_XEN_IBPB Andrew Cooper
2018-12-04 16:21   ` Jan Beulich
2018-12-03 16:18 ` [PATCH 3/9] x86/cpuid: Extend the cpuid= command line option to support all named features Andrew Cooper
2018-12-04 16:28   ` Jan Beulich
2018-12-06 12:52   ` Wei Liu
2018-12-03 16:18 ` [PATCH 4/9] x86/amd: Introduce CPUID/MSR definitions for per-vcpu SSBD support Andrew Cooper
2018-12-04 16:06   ` Woods, Brian
2018-12-05 16:39   ` Jan Beulich
2018-12-05 17:50     ` Andrew Cooper
2018-12-06  8:49       ` Jan Beulich
2018-12-06 18:35         ` Andrew Cooper
2018-12-03 16:18 ` [PATCH 5/9] x86/amd: Probe for legacy SSBD interfaces on boot Andrew Cooper
2018-12-04 16:15   ` Woods, Brian
2018-12-05 16:50   ` Jan Beulich
2018-12-05 17:09     ` Andrew Cooper
2018-12-06  8:53       ` Jan Beulich
2018-12-06 10:59   ` Jan Beulich
2018-12-28 16:30     ` Andrew Cooper
2019-01-04  8:58       ` Jan Beulich
2018-12-03 16:18 ` [PATCH 6/9] x86/amd: Allocate resources to cope with LS_CFG being per-core on Fam17h Andrew Cooper
2018-12-04 16:38   ` Woods, Brian
2018-12-05 16:57   ` Jan Beulich
2018-12-05 17:05     ` Andrew Cooper
2018-12-06  8:54       ` Jan Beulich
2018-12-06 18:46         ` Andrew Cooper
2018-12-06 19:25           ` Woods, Brian
2018-12-07 10:17           ` Jan Beulich
2018-12-03 16:18 ` [PATCH 7/9] x86/amd: Support context switching legacy SSBD interface Andrew Cooper
2018-12-04 20:27   ` Woods, Brian
2018-12-06 10:51   ` Jan Beulich
2018-12-06 18:55     ` Andrew Cooper
2018-12-07 10:25       ` Jan Beulich
2018-12-03 16:18 ` [PATCH 8/9] x86/amd: Virtualise MSR_VIRT_SPEC_CTRL for guests Andrew Cooper
2018-12-04 21:35   ` Woods, Brian
2018-12-05  8:41     ` Jan Beulich
2018-12-05 19:09       ` Andrew Cooper
2018-12-06  8:59         ` Jan Beulich
2018-12-06 19:41       ` Woods, Brian
2018-12-06 10:55   ` Jan Beulich
2018-12-03 16:18 ` [PATCH 9/9] x86/amd: Offer MSR_VIRT_SPEC_CTRL to guests Andrew Cooper
2018-12-06 10:57   ` Jan Beulich
2018-12-03 16:24 ` [PATCH 0/9] xen/amd: Support for guest MSR_VIRT_SPEC_CTRL support Jan Beulich
2018-12-03 16:31   ` Andrew Cooper
2018-12-04  9:45     ` Jan Beulich
2018-12-04 11:26       ` Andrew Cooper
2018-12-04 12:45         ` Jan Beulich
2018-12-04 13:41           ` Andrew Cooper
2018-12-04 14:07             ` Jan Beulich
