* [PATCH v2 0/4] x86/spec-ctrl: Fix NMI race condition
@ 2022-01-17 18:34 Andrew Cooper
  2022-01-17 18:34 ` [PATCH v2 1/4] x86/guest: Introduce {get,set}_reg() infrastructure Andrew Cooper
                   ` (4 more replies)
  0 siblings, 5 replies; 18+ messages in thread
From: Andrew Cooper @ 2022-01-17 18:34 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian

v1 had an irritating breakage with VM migration, caused by the accessor logic
moving out of guest_{rd,wr}msr().  v2 takes an approach I'd previously put off
to one side, but which appears to be the least invasive way forward.

Andrew Cooper (4):
  x86/guest: Introduce {get,set}_reg() infrastructure
  x86/msr: Split MSR_SPEC_CTRL handling
  x86/spec-ctrl: Drop SPEC_CTRL_{ENTRY_FROM,EXIT_TO}_HVM
  x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling

 xen/arch/x86/hvm/hvm.c                   | 22 +++++++++
 xen/arch/x86/hvm/svm/entry.S             |  5 +-
 xen/arch/x86/hvm/svm/svm.c               | 30 ++++++++++++
 xen/arch/x86/hvm/vmx/entry.S             | 23 +++++++---
 xen/arch/x86/hvm/vmx/vmx.c               | 78 +++++++++++++++++++++++++++++++-
 xen/arch/x86/include/asm/hvm/hvm.h       | 24 ++++++++++
 xen/arch/x86/include/asm/msr.h           | 10 +++-
 xen/arch/x86/include/asm/pv/domain.h     | 13 ++++++
 xen/arch/x86/include/asm/spec_ctrl_asm.h | 51 ++++-----------------
 xen/arch/x86/msr.c                       | 21 +++++++--
 xen/arch/x86/pv/emulate.c                | 40 ++++++++++++++++
 11 files changed, 259 insertions(+), 58 deletions(-)

-- 
2.11.0




* [PATCH v2 1/4] x86/guest: Introduce {get,set}_reg() infrastructure
  2022-01-17 18:34 [PATCH v2 0/4] x86/spec-ctrl: Fix NMI race condition Andrew Cooper
@ 2022-01-17 18:34 ` Andrew Cooper
  2022-01-19 13:28   ` Jan Beulich
  2022-01-17 18:34 ` [PATCH v2 2/4] x86/msr: Split MSR_SPEC_CTRL handling Andrew Cooper
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2022-01-17 18:34 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian

Various registers have per-guest-type or per-vendor locations or access
requirements.  To support their use from common code, provide accessors which
allow for per-guest-type behaviour.

For now, just infrastructure handling default cases and expectations.
Subsequent patches will start handling registers using this infrastructure.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

It is deliberately {get,set}_reg() because in the fullness of time, it will
handle more than just MSRs.  There's loads of space in the MSR index range
which we can reuse for non-MSRs.
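
As a rough sketch of the idea (the name and index below are made up purely
for illustration, and are not part of this series), a non-MSR state element
could be given a synthetic index and read through the same accessor:

    /*
     * Hypothetical synthetic index; a real one would need to be chosen from
     * space guaranteed not to collide with architectural MSR ranges.
     */
    #define X86_REG_EXAMPLE 0x20001000

    uint64_t val = hvm_get_reg(v, X86_REG_EXAMPLE);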

v2:
 * New
---
 xen/arch/x86/hvm/hvm.c               | 22 ++++++++++++++++++++++
 xen/arch/x86/hvm/svm/svm.c           | 30 ++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vmx.c           | 31 +++++++++++++++++++++++++++++++
 xen/arch/x86/include/asm/hvm/hvm.h   | 24 ++++++++++++++++++++++++
 xen/arch/x86/include/asm/pv/domain.h | 13 +++++++++++++
 xen/arch/x86/pv/emulate.c            | 31 +++++++++++++++++++++++++++++++
 6 files changed, 151 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 3b87506ac4b3..b530e986e86c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3744,6 +3744,28 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
     return X86EMUL_EXCEPTION;
 }
 
+uint64_t hvm_get_reg(struct vcpu *v, unsigned int reg)
+{
+    ASSERT(v == current || !vcpu_runnable(v));
+
+    switch ( reg )
+    {
+    default:
+        return alternative_call(hvm_funcs.get_reg, v, reg);
+    }
+}
+
+void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    ASSERT(v == current || !vcpu_runnable(v));
+
+    switch ( reg )
+    {
+    default:
+        return alternative_vcall(hvm_funcs.set_reg, v, reg, val);
+    }
+}
+
 static bool is_sysdesc_access(const struct x86_emulate_state *state,
                               const struct x86_emulate_ctxt *ctxt)
 {
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index fae39c4b4cbd..bb6b8e560a9f 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2469,6 +2469,33 @@ static bool svm_get_pending_event(struct vcpu *v, struct x86_event *info)
     return true;
 }
 
+static uint64_t svm_get_reg(struct vcpu *v, unsigned int reg)
+{
+    struct domain *d = v->domain;
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
+               __func__, v, reg);
+        domain_crash(d);
+        return 0;
+    }
+}
+
+static void svm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    struct domain *d = v->domain;
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
+               __func__, v, reg, val);
+        domain_crash(d);
+    }
+}
+
 static struct hvm_function_table __initdata svm_function_table = {
     .name                 = "SVM",
     .cpu_up_prepare       = svm_cpu_up_prepare,
@@ -2518,6 +2545,9 @@ static struct hvm_function_table __initdata svm_function_table = {
     .nhvm_intr_blocked = nsvm_intr_blocked,
     .nhvm_hap_walk_L1_p2m = nsvm_hap_walk_L1_p2m,
 
+    .get_reg = svm_get_reg,
+    .set_reg = svm_set_reg,
+
     .tsc_scaling = {
         .max_ratio = ~TSC_RATIO_RSVD_BITS,
     },
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index a7a0d662342a..4ff92ab4e94e 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2404,6 +2404,33 @@ static int vmtrace_reset(struct vcpu *v)
     return 0;
 }
 
+static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
+{
+    struct domain *d = v->domain;
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
+               __func__, v, reg);
+        domain_crash(d);
+        return 0;
+    }
+}
+
+static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    struct domain *d = v->domain;
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
+               __func__, v, reg, val);
+        domain_crash(d);
+    }
+}
+
 static struct hvm_function_table __initdata vmx_function_table = {
     .name                 = "VMX",
     .cpu_up_prepare       = vmx_cpu_up_prepare,
@@ -2464,6 +2491,10 @@ static struct hvm_function_table __initdata vmx_function_table = {
     .vmtrace_set_option = vmtrace_set_option,
     .vmtrace_get_option = vmtrace_get_option,
     .vmtrace_reset = vmtrace_reset,
+
+    .get_reg = vmx_get_reg,
+    .set_reg = vmx_set_reg,
+
     .tsc_scaling = {
         .max_ratio = VMX_TSC_MULTIPLIER_MAX,
     },
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index b26302d9e769..c8b62b514b42 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -223,6 +223,9 @@ struct hvm_function_table {
     int (*vmtrace_get_option)(struct vcpu *v, uint64_t key, uint64_t *value);
     int (*vmtrace_reset)(struct vcpu *v);
 
+    uint64_t (*get_reg)(struct vcpu *v, unsigned int reg);
+    void (*set_reg)(struct vcpu *v, unsigned int reg, uint64_t val);
+
     /*
      * Parameters and callbacks for hardware-assisted TSC scaling,
      * which are valid only when the hardware feature is available.
@@ -730,6 +733,18 @@ static inline int hvm_vmtrace_reset(struct vcpu *v)
 }
 
 /*
+ * Accessors for registers which have per-guest-type or per-vendor locations
+ * (e.g. VMCS, msr load/save lists, VMCB, VMLOAD lazy, etc).
+ *
+ * The caller is responsible for all auditing - these accessors do not fail,
+ * but do use domain_crash() for usage errors.
+ *
+ * Must cope with being called in non-current context.
+ */
+uint64_t hvm_get_reg(struct vcpu *v, unsigned int reg);
+void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val);
+
+/*
  * This must be defined as a macro instead of an inline function,
  * because it uses 'struct vcpu' and 'struct domain' which have
  * not been defined yet.
@@ -852,6 +867,15 @@ static inline int hvm_vmtrace_get_option(
     return -EOPNOTSUPP;
 }
 
+static inline uint64_t pv_get_reg(struct vcpu *v, unsigned int reg)
+{
+    ASSERT_UNREACHABLE();
+}
+static inline void pv_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    ASSERT_UNREACHABLE();
+}
+
 #define is_viridian_domain(d) ((void)(d), false)
 #define is_viridian_vcpu(v) ((void)(v), false)
 #define has_viridian_time_ref_count(d) ((void)(d), false)
diff --git a/xen/arch/x86/include/asm/pv/domain.h b/xen/arch/x86/include/asm/pv/domain.h
index df9716ff26a8..5fbf4043e0d9 100644
--- a/xen/arch/x86/include/asm/pv/domain.h
+++ b/xen/arch/x86/include/asm/pv/domain.h
@@ -72,6 +72,10 @@ int pv_vcpu_initialise(struct vcpu *v);
 void pv_domain_destroy(struct domain *d);
 int pv_domain_initialise(struct domain *d);
 
+/* See hvm_{get,set}_reg() for description. */
+uint64_t pv_get_reg(struct vcpu *v, unsigned int reg);
+void pv_set_reg(struct vcpu *v, unsigned int reg, uint64_t val);
+
 /*
  * Bits which a PV guest can toggle in its view of cr4.  Some are loaded into
  * hardware, while some are fully emulated.
@@ -100,6 +104,15 @@ static inline int pv_vcpu_initialise(struct vcpu *v) { return -EOPNOTSUPP; }
 static inline void pv_domain_destroy(struct domain *d) {}
 static inline int pv_domain_initialise(struct domain *d) { return -EOPNOTSUPP; }
 
+static inline uint64_t pv_get_reg(struct vcpu *v, unsigned int reg)
+{
+    ASSERT_UNREACHABLE();
+}
+static inline void pv_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    ASSERT_UNREACHABLE();
+}
+
 static inline unsigned long pv_make_cr4(const struct vcpu *v) { return ~0ul; }
 
 #endif	/* CONFIG_PV */
diff --git a/xen/arch/x86/pv/emulate.c b/xen/arch/x86/pv/emulate.c
index e8bb326efdfe..ae049b60f2fc 100644
--- a/xen/arch/x86/pv/emulate.c
+++ b/xen/arch/x86/pv/emulate.c
@@ -90,6 +90,37 @@ void pv_emul_instruction_done(struct cpu_user_regs *regs, unsigned long rip)
     }
 }
 
+uint64_t pv_get_reg(struct vcpu *v, unsigned int reg)
+{
+    struct domain *d = v->domain;
+
+    ASSERT(v == current || !vcpu_runnable(v));
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
+               __func__, v, reg);
+        domain_crash(d);
+        return 0;
+    }
+}
+
+void pv_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    struct domain *d = v->domain;
+
+    ASSERT(v == current || !vcpu_runnable(v));
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
+               __func__, v, reg, val);
+        domain_crash(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.11.0




* [PATCH v2 2/4] x86/msr: Split MSR_SPEC_CTRL handling
  2022-01-17 18:34 [PATCH v2 0/4] x86/spec-ctrl: Fix NMI race condition Andrew Cooper
  2022-01-17 18:34 ` [PATCH v2 1/4] x86/guest: Introduce {get,set}_reg() infrastructure Andrew Cooper
@ 2022-01-17 18:34 ` Andrew Cooper
  2022-01-19 13:30   ` Jan Beulich
  2022-01-17 18:34 ` [PATCH v2 3/4] x86/spec-ctrl: Drop SPEC_CTRL_{ENTRY_FROM,EXIT_TO}_HVM Andrew Cooper
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2022-01-17 18:34 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu

In order to fix a VT-x bug, and support MSR_SPEC_CTRL on AMD, move
MSR_SPEC_CTRL handling into the new {get,set}_reg() infrastructure.

Duplicate the msrs->spec_ctrl.raw accesses in the PV and VT-x paths for now.
The SVM path is currently unreachable because of the CPUID policy.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Rework on top of {get,set}_reg()
---
 xen/arch/x86/hvm/vmx/vmx.c |  7 +++++++
 xen/arch/x86/msr.c         | 21 +++++++++++++++++----
 xen/arch/x86/pv/emulate.c  |  9 +++++++++
 3 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4ff92ab4e94e..c32967f190ff 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2410,6 +2410,9 @@ static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
 
     switch ( reg )
     {
+    case MSR_SPEC_CTRL:
+        return v->arch.msrs->spec_ctrl.raw;
+
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
                __func__, v, reg);
@@ -2424,6 +2427,10 @@ static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
 
     switch ( reg )
     {
+    case MSR_SPEC_CTRL:
+        v->arch.msrs->spec_ctrl.raw = val;
+        break;
+
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
                __func__, v, reg, val);
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index b834456c7b02..fd4012808472 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -28,6 +28,7 @@
 #include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/viridian.h>
 #include <asm/msr.h>
+#include <asm/pv/domain.h>
 #include <asm/setup.h>
 
 #include <public/hvm/params.h>
@@ -265,8 +266,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
     case MSR_SPEC_CTRL:
         if ( !cp->feat.ibrsb )
             goto gp_fault;
-        *val = msrs->spec_ctrl.raw;
-        break;
+        goto get_reg;
 
     case MSR_INTEL_PLATFORM_INFO:
         *val = mp->platform_info.raw;
@@ -424,6 +424,13 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
 
     return ret;
 
+ get_reg: /* Delegate register access to per-vm-type logic. */
+    if ( is_pv_domain(d) )
+        *val = pv_get_reg(v, msr);
+    else
+        *val = hvm_get_reg(v, msr);
+    return X86EMUL_OKAY;
+
  gp_fault:
     return X86EMUL_EXCEPTION;
 }
@@ -514,8 +521,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         if ( val & rsvd )
             goto gp_fault; /* Rsvd bit set? */
 
-        msrs->spec_ctrl.raw = val;
-        break;
+        goto set_reg;
 
     case MSR_PRED_CMD:
         if ( !cp->feat.ibrsb && !cp->extd.ibpb )
@@ -663,6 +669,13 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 
     return ret;
 
+ set_reg: /* Delegate register access to per-vm-type logic. */
+    if ( is_pv_domain(d) )
+        pv_set_reg(v, msr, val);
+    else
+        hvm_set_reg(v, msr, val);
+    return X86EMUL_OKAY;
+
  gp_fault:
     return X86EMUL_EXCEPTION;
 }
diff --git a/xen/arch/x86/pv/emulate.c b/xen/arch/x86/pv/emulate.c
index ae049b60f2fc..0a7907ec5e84 100644
--- a/xen/arch/x86/pv/emulate.c
+++ b/xen/arch/x86/pv/emulate.c
@@ -92,12 +92,16 @@ void pv_emul_instruction_done(struct cpu_user_regs *regs, unsigned long rip)
 
 uint64_t pv_get_reg(struct vcpu *v, unsigned int reg)
 {
+    const struct vcpu_msrs *msrs = v->arch.msrs;
     struct domain *d = v->domain;
 
     ASSERT(v == current || !vcpu_runnable(v));
 
     switch ( reg )
     {
+    case MSR_SPEC_CTRL:
+        return msrs->spec_ctrl.raw;
+
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
                __func__, v, reg);
@@ -108,12 +112,17 @@ uint64_t pv_get_reg(struct vcpu *v, unsigned int reg)
 
 void pv_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
 {
+    struct vcpu_msrs *msrs = v->arch.msrs;
     struct domain *d = v->domain;
 
     ASSERT(v == current || !vcpu_runnable(v));
 
     switch ( reg )
     {
+    case MSR_SPEC_CTRL:
+        msrs->spec_ctrl.raw = val;
+        break;
+
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
                __func__, v, reg, val);
-- 
2.11.0




* [PATCH v2 3/4] x86/spec-ctrl: Drop SPEC_CTRL_{ENTRY_FROM,EXIT_TO}_HVM
  2022-01-17 18:34 [PATCH v2 0/4] x86/spec-ctrl: Fix NMI race condition Andrew Cooper
  2022-01-17 18:34 ` [PATCH v2 1/4] x86/guest: Introduce {get,set}_reg() infrastructure Andrew Cooper
  2022-01-17 18:34 ` [PATCH v2 2/4] x86/msr: Split MSR_SPEC_CTRL handling Andrew Cooper
@ 2022-01-17 18:34 ` Andrew Cooper
  2022-01-17 18:34 ` [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling Andrew Cooper
  2022-01-17 19:25 ` [PATCH v2 5/4] x86/hvm: Drop hvm_{get,set}_guest_bndcfgs() and use {get,set}_regs() instead Andrew Cooper
  4 siblings, 0 replies; 18+ messages in thread
From: Andrew Cooper @ 2022-01-17 18:34 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian

These were written before Spectre/Meltdown went public, and there was large
uncertainty in how the protections would evolve.  As it turns out, they're
very specific to Intel hardware, and not very suitable for AMD.

Drop the macros, opencoding the relevant subset of functionality, and leaving
grep-fodder to locate the logic.  No change at all for VT-x.

For AMD, the only relevant piece of functionality is DO_OVERWRITE_RSB,
although we will soon be adding (different) logic to handle MSR_SPEC_CTRL.

This gives a marginal improvement by removing an unconditional pile of long-nops
from the vmentry/exit path.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

v2:
 * Tweak text.
---
 xen/arch/x86/hvm/svm/entry.S             |  5 +++--
 xen/arch/x86/hvm/vmx/entry.S             |  8 ++++++--
 xen/arch/x86/include/asm/spec_ctrl_asm.h | 19 ++++---------------
 3 files changed, 13 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index e208a4b32ae7..276215d36aff 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -59,7 +59,7 @@ __UNLIKELY_END(nsvm_hap)
         mov VCPUMSR_spec_ctrl_raw(%rax), %eax
 
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
-        SPEC_CTRL_EXIT_TO_HVM   /* Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
+        /* SPEC_CTRL_EXIT_TO_SVM   (nothing currently) */
 
         pop  %r15
         pop  %r14
@@ -86,7 +86,8 @@ __UNLIKELY_END(nsvm_hap)
 
         GET_CURRENT(bx)
 
-        SPEC_CTRL_ENTRY_FROM_HVM    /* Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
+        /* SPEC_CTRL_ENTRY_FROM_SVM    Req: b=curr %rsp=regs/cpuinfo, Clob: ac  */
+        ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
         stgi
diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
index 27c8c5ca4943..30139ae58e9d 100644
--- a/xen/arch/x86/hvm/vmx/entry.S
+++ b/xen/arch/x86/hvm/vmx/entry.S
@@ -33,7 +33,9 @@ ENTRY(vmx_asm_vmexit_handler)
         movb $1,VCPU_vmx_launched(%rbx)
         mov  %rax,VCPU_hvm_guest_cr2(%rbx)
 
-        SPEC_CTRL_ENTRY_FROM_HVM    /* Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
+        /* SPEC_CTRL_ENTRY_FROM_VMX    Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
+        ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
+        ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
         /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
@@ -80,7 +82,9 @@ UNLIKELY_END(realmode)
         mov VCPUMSR_spec_ctrl_raw(%rax), %eax
 
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
-        SPEC_CTRL_EXIT_TO_HVM   /* Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
+        /* SPEC_CTRL_EXIT_TO_VMX   Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
+        ALTERNATIVE "", DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM
+        ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)), X86_FEATURE_SC_VERW_HVM
 
         mov  VCPU_hvm_guest_cr2(%rbx),%rax
 
diff --git a/xen/arch/x86/include/asm/spec_ctrl_asm.h b/xen/arch/x86/include/asm/spec_ctrl_asm.h
index cb34299a865b..2b3f123cb501 100644
--- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
+++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
@@ -68,14 +68,16 @@
  *
  * The following ASM fragments implement this algorithm.  See their local
  * comments for further details.
- *  - SPEC_CTRL_ENTRY_FROM_HVM
  *  - SPEC_CTRL_ENTRY_FROM_PV
  *  - SPEC_CTRL_ENTRY_FROM_INTR
  *  - SPEC_CTRL_ENTRY_FROM_INTR_IST
  *  - SPEC_CTRL_EXIT_TO_XEN_IST
  *  - SPEC_CTRL_EXIT_TO_XEN
  *  - SPEC_CTRL_EXIT_TO_PV
- *  - SPEC_CTRL_EXIT_TO_HVM
+ *
+ * Additionally, the following grep-fodder exists to find the HVM logic.
+ *  - SPEC_CTRL_ENTRY_FROM_{SVM,VMX}
+ *  - SPEC_CTRL_EXIT_TO_{SVM,VMX}
  */
 
 .macro DO_OVERWRITE_RSB tmp=rax
@@ -225,12 +227,6 @@
     wrmsr
 .endm
 
-/* Use after a VMEXIT from an HVM guest. */
-#define SPEC_CTRL_ENTRY_FROM_HVM                                        \
-    ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM;           \
-    ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM,                        \
-        X86_FEATURE_SC_MSR_HVM
-
 /* Use after an entry from PV context (syscall/sysenter/int80/int82/etc). */
 #define SPEC_CTRL_ENTRY_FROM_PV                                         \
     ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_PV;            \
@@ -255,13 +251,6 @@
     ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)),           \
         X86_FEATURE_SC_VERW_PV
 
-/* Use when exiting to HVM guest context. */
-#define SPEC_CTRL_EXIT_TO_HVM                                           \
-    ALTERNATIVE "",                                                     \
-        DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM;             \
-    ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)),           \
-        X86_FEATURE_SC_VERW_HVM
-
 /*
  * Use in IST interrupt/exception context.  May interrupt Xen or PV context.
  * Fine grain control of SCF_ist_wrmsr is needed for safety in the S3 resume
-- 
2.11.0




* [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling
  2022-01-17 18:34 [PATCH v2 0/4] x86/spec-ctrl: Fix NMI race condition Andrew Cooper
                   ` (2 preceding siblings ...)
  2022-01-17 18:34 ` [PATCH v2 3/4] x86/spec-ctrl: Drop SPEC_CTRL_{ENTRY_FROM,EXIT_TO}_HVM Andrew Cooper
@ 2022-01-17 18:34 ` Andrew Cooper
  2022-01-19 13:42   ` Jan Beulich
  2022-01-21  7:56   ` Jan Beulich
  2022-01-17 19:25 ` [PATCH v2 5/4] x86/hvm: Drop hvm_{get,set}_guest_bndcfgs() and use {get,set}_regs() instead Andrew Cooper
  4 siblings, 2 replies; 18+ messages in thread
From: Andrew Cooper @ 2022-01-17 18:34 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian

The logic was based on a mistaken understanding of how NMI blocking on vmexit
works.  NMIs are only blocked for EXIT_REASON_NMI, and not for general exits.
Therefore, an NMI can in general hit early in the vmx_asm_vmexit_handler path,
and the guest's value will be clobbered before it is saved.

Switch to using MSR load/save lists.  This causes the guest value to be saved
atomically with respect to NMIs/MCEs/etc.

First, update vmx_cpuid_policy_changed() to configure the load/save lists at
the same time as configuring the intercepts.  This function is always used in
remote context, so extend the vmx_vmcs_{enter,exit}() block to cover the whole
function, rather than having multiple remote acquisitions of the same VMCS.

Both vmx_add_guest_msr() and vmx_del_msr() can fail.  The -ESRCH delete case
is fine, but all others are fatal to the running of the VM, so handle them
using domain_crash() - this path is only used during domain construction
anyway.

Second, update vmx_{get,set}_reg() to use the MSR load/save lists rather than
vcpu_msrs, and update the vcpu_msrs comment to describe the new state
location.

Finally, adjust the entry/exit asm.

Because the guest value is saved and loaded atomically, we do not need to
manually load the guest value, nor do we need to enable SCF_use_shadow.  This
lets us remove the use of DO_SPEC_CTRL_EXIT_TO_GUEST.  Additionally, the
SPEC_CTRL_ENTRY_FROM_PV invocation on the vmentry failure path is removed too,
because on an early entry failure we never switched into the guest's
MSR_SPEC_CTRL context, so there is nothing to switch back from.

The only action remaining is to load Xen's MSR_SPEC_CTRL value on vmexit.  We
could in principle use the host MSR list, but that is expected to complicate
future work.  Delete DO_SPEC_CTRL_ENTRY_FROM_HVM entirely, and use a shorter
code sequence to simply reload Xen's setting from the top-of-stack block.

Adjust the comment at the top of spec_ctrl_asm.h in light of this bugfix.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

Needs backporting as far as people can tolerate.

If the entry/exit logic were in C, I'd ASSERT() that shadow tracking is off,
but this is awkward to arrange in asm.

v2:
 * Rework on top of {get,set}_reg() infrastructure.
 * Future-proof against other vmx_del_guest_msr() failures.
 * Rewrite the commit message to explain things better.
---
 xen/arch/x86/hvm/vmx/entry.S             | 21 +++++++++------
 xen/arch/x86/hvm/vmx/vmx.c               | 44 +++++++++++++++++++++++++++++---
 xen/arch/x86/include/asm/msr.h           | 10 +++++++-
 xen/arch/x86/include/asm/spec_ctrl_asm.h | 32 +++--------------------
 4 files changed, 67 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
index 30139ae58e9d..ce7b48558ee1 100644
--- a/xen/arch/x86/hvm/vmx/entry.S
+++ b/xen/arch/x86/hvm/vmx/entry.S
@@ -35,7 +35,14 @@ ENTRY(vmx_asm_vmexit_handler)
 
         /* SPEC_CTRL_ENTRY_FROM_VMX    Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
-        ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM, X86_FEATURE_SC_MSR_HVM
+
+        .macro restore_spec_ctrl
+            mov    $MSR_SPEC_CTRL, %ecx
+            movzbl CPUINFO_xen_spec_ctrl(%rsp), %eax
+            xor    %edx, %edx
+            wrmsr
+        .endm
+        ALTERNATIVE "", restore_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
         /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
@@ -82,8 +89,7 @@ UNLIKELY_END(realmode)
         mov VCPUMSR_spec_ctrl_raw(%rax), %eax
 
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
-        /* SPEC_CTRL_EXIT_TO_VMX   Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
-        ALTERNATIVE "", DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM
+        /* SPEC_CTRL_EXIT_TO_VMX   Req: %rsp=regs/cpuinfo              Clob:    */
         ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)), X86_FEATURE_SC_VERW_HVM
 
         mov  VCPU_hvm_guest_cr2(%rbx),%rax
@@ -119,12 +125,11 @@ UNLIKELY_END(realmode)
         SAVE_ALL
 
         /*
-         * PV variant needed here as no guest code has executed (so
-         * MSR_SPEC_CTRL can't have changed value), and NMIs/MCEs are liable
-         * to hit (in which case the HVM variant might corrupt things).
+         * SPEC_CTRL_ENTRY notes
+         *
+         * If we end up here, no guest code has executed.  We still have Xen's
+         * choice of MSR_SPEC_CTRL in context, and the RSB is safe.
          */
-        SPEC_CTRL_ENTRY_FROM_PV /* Req: %rsp=regs/cpuinfo Clob: acd */
-        /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
         call vmx_vmentry_failure
         jmp  .Lvmx_process_softirqs
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index c32967f190ff..69e38d0fa8f9 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -592,6 +592,7 @@ void vmx_update_exception_bitmap(struct vcpu *v)
 static void vmx_cpuid_policy_changed(struct vcpu *v)
 {
     const struct cpuid_policy *cp = v->domain->arch.cpuid;
+    int rc = 0;
 
     if ( opt_hvm_fep ||
          (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
@@ -601,17 +602,29 @@ static void vmx_cpuid_policy_changed(struct vcpu *v)
 
     vmx_vmcs_enter(v);
     vmx_update_exception_bitmap(v);
-    vmx_vmcs_exit(v);
 
     /*
      * We can safely pass MSR_SPEC_CTRL through to the guest, even if STIBP
      * isn't enumerated in hardware, as SPEC_CTRL_STIBP is ignored.
      */
     if ( cp->feat.ibrsb )
+    {
         vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+
+        rc = vmx_add_guest_msr(v, MSR_SPEC_CTRL, 0);
+        if ( rc )
+            goto out;
+    }
     else
+    {
         vmx_set_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
 
+        rc = vmx_del_msr(v, MSR_SPEC_CTRL, VMX_MSR_GUEST);
+        if ( rc && rc != -ESRCH )
+            goto out;
+        rc = 0; /* Tolerate -ESRCH */
+    }
+
     /* MSR_PRED_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.ibrsb || cp->extd.ibpb )
         vmx_clear_msr_intercept(v, MSR_PRED_CMD,  VMX_MSR_RW);
@@ -623,6 +636,15 @@ static void vmx_cpuid_policy_changed(struct vcpu *v)
         vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
     else
         vmx_set_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+
+ out:
+    vmx_vmcs_exit(v);
+
+    if ( rc )
+    {
+        printk(XENLOG_G_ERR "%pv MSR list error: %d\n", v, rc);
+        domain_crash(v->domain);
+    }
 }
 
 int vmx_guest_x86_mode(struct vcpu *v)
@@ -2407,11 +2429,20 @@ static int vmtrace_reset(struct vcpu *v)
 static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
 {
     struct domain *d = v->domain;
+    uint64_t val = 0;
+    int rc;
 
     switch ( reg )
     {
     case MSR_SPEC_CTRL:
-        return v->arch.msrs->spec_ctrl.raw;
+        rc = vmx_read_guest_msr(v, reg, &val);
+        if ( rc )
+        {
+            printk(XENLOG_G_ERR "%s(%pv, 0x%08x) MSR list error: %d\n",
+                   __func__, v, reg, rc);
+            domain_crash(d);
+        }
+        return val;
 
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
@@ -2424,11 +2455,18 @@ static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
 static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
 {
     struct domain *d = v->domain;
+    int rc;
 
     switch ( reg )
     {
     case MSR_SPEC_CTRL:
-        v->arch.msrs->spec_ctrl.raw = val;
+        rc = vmx_write_guest_msr(v, reg, val);
+        if ( rc )
+        {
+            printk(XENLOG_G_ERR "%s(%pv, 0x%08x) MSR list error: %d\n",
+                   __func__, v, reg, rc);
+            domain_crash(d);
+        }
         break;
 
     default:
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index 1d3eca9063a2..10039c2d227b 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -287,7 +287,15 @@ extern struct msr_policy     raw_msr_policy,
 /* Container object for per-vCPU MSRs */
 struct vcpu_msrs
 {
-    /* 0x00000048 - MSR_SPEC_CTRL */
+    /*
+     * 0x00000048 - MSR_SPEC_CTRL
+     *
+     * For PV guests, this holds the guest kernel value.  It is accessed on
+     * every entry/exit path.
+     *
+     * For VT-x guests, the guest value is held in the MSR guest load/save
+     * list.
+     */
     struct {
         uint32_t raw;
     } spec_ctrl;
diff --git a/xen/arch/x86/include/asm/spec_ctrl_asm.h b/xen/arch/x86/include/asm/spec_ctrl_asm.h
index 2b3f123cb501..bf82528a12ae 100644
--- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
+++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
@@ -42,9 +42,10 @@
  *     path, or late in the exit path after restoring the guest value.  This
  *     will corrupt the guest value.
  *
- * Factor 1 is dealt with by relying on NMIs/MCEs being blocked immediately
- * after VMEXIT.  The VMEXIT-specific code reads MSR_SPEC_CTRL and updates
- * current before loading Xen's MSR_SPEC_CTRL setting.
+ * Factor 1 is dealt with:
+ *   - On VMX by using MSR load/save lists to have vmentry/exit atomically
+ *     load/save the guest value.  Xen's value is loaded in regular code, and
+ *     there is no need to use the shadow logic (below).
  *
  * Factor 2 is harder.  We maintain a shadow_spec_ctrl value, and a use_shadow
  * boolean in the per cpu spec_ctrl_flags.  The synchronous use is:
@@ -128,31 +129,6 @@
 #endif
 .endm
 
-.macro DO_SPEC_CTRL_ENTRY_FROM_HVM
-/*
- * Requires %rbx=current, %rsp=regs/cpuinfo
- * Clobbers %rax, %rcx, %rdx
- *
- * The common case is that a guest has direct access to MSR_SPEC_CTRL, at
- * which point we need to save the guest value before setting IBRS for Xen.
- * Unilaterally saving the guest value is shorter and faster than checking.
- */
-    mov $MSR_SPEC_CTRL, %ecx
-    rdmsr
-
-    /* Stash the value from hardware. */
-    mov VCPU_arch_msrs(%rbx), %rdx
-    mov %eax, VCPUMSR_spec_ctrl_raw(%rdx)
-    xor %edx, %edx
-
-    /* Clear SPEC_CTRL shadowing *before* loading Xen's value. */
-    andb $~SCF_use_shadow, CPUINFO_spec_ctrl_flags(%rsp)
-
-    /* Load Xen's intended value. */
-    movzbl CPUINFO_xen_spec_ctrl(%rsp), %eax
-    wrmsr
-.endm
-
 .macro DO_SPEC_CTRL_ENTRY maybexen:req
 /*
  * Requires %rsp=regs (also cpuinfo if !maybexen)
-- 
2.11.0




* [PATCH v2 5/4] x86/hvm: Drop hvm_{get,set}_guest_bndcfgs() and use {get,set}_regs() instead
  2022-01-17 18:34 [PATCH v2 0/4] x86/spec-ctrl: Fix NMI race condition Andrew Cooper
                   ` (3 preceding siblings ...)
  2022-01-17 18:34 ` [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling Andrew Cooper
@ 2022-01-17 19:25 ` Andrew Cooper
  2022-01-19 13:50   ` Jan Beulich
  4 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2022-01-17 19:25 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian

hvm_{get,set}_guest_bndcfgs() are thin wrappers around accessing MSR_BNDCFGS.

MPX was implemented on Skylake uarch CPUs and dropped in subsequent CPUs, and
is disabled by default in Xen VMs.

It would be nice to move all the logic into vmx_msr_{read,write}_intercept(),
but the common HVM migration code uses guest_{rd,wr}msr().  Therefore, use
{get,set}_regs() to reduce the quantity of "common" HVM code.

In lieu of having hvm_set_guest_bndcfgs() split out, use some #ifdef
CONFIG_HVM in guest_wrmsr().  In vmx_{get,set}_regs(), split the switch
statements into two depending on whether the require remote VMCS acquisition
or not.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

This counteracts the hvm_funcs size increase from {get,set}_regs(), and shows
how to use the new functionality to clean the HVM logic up.
---
 xen/arch/x86/hvm/hvm.c             | 37 --------------------------
 xen/arch/x86/hvm/vmx/vmx.c         | 54 ++++++++++++++++++--------------------
 xen/arch/x86/include/asm/hvm/hvm.h | 12 ---------
 xen/arch/x86/msr.c                 | 34 +++++++++++++++++++-----
 4 files changed, 53 insertions(+), 84 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b530e986e86c..d7d3299b431e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -324,43 +324,6 @@ int hvm_set_guest_pat(struct vcpu *v, uint64_t guest_pat)
     return 1;
 }
 
-bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
-{
-    if ( !hvm_funcs.set_guest_bndcfgs ||
-         !is_canonical_address(val) ||
-         (val & IA32_BNDCFGS_RESERVED) )
-        return false;
-
-    /*
-     * While MPX instructions are supposed to be gated on XCR0.BND*, let's
-     * nevertheless force the relevant XCR0 bits on when the feature is being
-     * enabled in BNDCFGS.
-     */
-    if ( (val & IA32_BNDCFGS_ENABLE) &&
-         !(v->arch.xcr0_accum & (X86_XCR0_BNDREGS | X86_XCR0_BNDCSR)) )
-    {
-        uint64_t xcr0 = get_xcr0();
-        int rc;
-
-        if ( v != current )
-            return false;
-
-        rc = handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
-                           xcr0 | X86_XCR0_BNDREGS | X86_XCR0_BNDCSR);
-
-        if ( rc )
-        {
-            HVM_DBG_LOG(DBG_LEVEL_1, "Failed to force XCR0.BND*: %d", rc);
-            return false;
-        }
-
-        if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK, xcr0) )
-            /* nothing, best effort only */;
-    }
-
-    return alternative_call(hvm_funcs.set_guest_bndcfgs, v, val);
-}
-
 /*
  * Get the ratio to scale host TSC frequency to gtsc_khz. zero will be
  * returned if TSC scaling is unavailable or ratio cannot be handled
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 69e38d0fa8f9..8c55e56cbddb 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1212,28 +1212,6 @@ static int vmx_get_guest_pat(struct vcpu *v, u64 *gpat)
     return 1;
 }
 
-static bool vmx_set_guest_bndcfgs(struct vcpu *v, u64 val)
-{
-    ASSERT(cpu_has_mpx && cpu_has_vmx_mpx);
-
-    vmx_vmcs_enter(v);
-    __vmwrite(GUEST_BNDCFGS, val);
-    vmx_vmcs_exit(v);
-
-    return true;
-}
-
-static bool vmx_get_guest_bndcfgs(struct vcpu *v, u64 *val)
-{
-    ASSERT(cpu_has_mpx && cpu_has_vmx_mpx);
-
-    vmx_vmcs_enter(v);
-    __vmread(GUEST_BNDCFGS, val);
-    vmx_vmcs_exit(v);
-
-    return true;
-}
-
 static void vmx_handle_cd(struct vcpu *v, unsigned long value)
 {
     if ( !paging_mode_hap(v->domain) )
@@ -2432,6 +2410,7 @@ static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
     uint64_t val = 0;
     int rc;
 
+    /* Logic which doesn't require remote VMCS acquisition. */
     switch ( reg )
     {
     case MSR_SPEC_CTRL:
@@ -2443,13 +2422,25 @@ static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
             domain_crash(d);
         }
         return val;
+    }
+
+    /* Logic which maybe requires remote VMCS acquisition. */
+    vmx_vmcs_enter(v);
+    switch ( reg )
+    {
+    case MSR_IA32_BNDCFGS:
+        __vmread(GUEST_BNDCFGS, &val);
+        break;
 
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
                __func__, v, reg);
         domain_crash(d);
-        return 0;
+        break;
     }
+    vmx_vmcs_exit(v);
+
+    return val;
 }
 
 static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
@@ -2457,6 +2448,7 @@ static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
     struct domain *d = v->domain;
     int rc;
 
+    /* Logic which doesn't require remote VMCS acquisition. */
     switch ( reg )
     {
     case MSR_SPEC_CTRL:
@@ -2467,6 +2459,15 @@ static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
                    __func__, v, reg, rc);
             domain_crash(d);
         }
+        return;
+    }
+
+    /* Logic which maybe requires remote VMCS acquisition. */
+    vmx_vmcs_enter(v);
+    switch ( reg )
+    {
+    case MSR_IA32_BNDCFGS:
+        __vmwrite(GUEST_BNDCFGS, val);
         break;
 
     default:
@@ -2474,6 +2475,7 @@ static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
                __func__, v, reg, val);
         domain_crash(d);
     }
+    vmx_vmcs_exit(v);
 }
 
 static struct hvm_function_table __initdata vmx_function_table = {
@@ -2796,12 +2798,6 @@ const struct hvm_function_table * __init start_vmx(void)
         vmx_function_table.tsc_scaling.setup = vmx_setup_tsc_scaling;
     }
 
-    if ( cpu_has_mpx && cpu_has_vmx_mpx )
-    {
-        vmx_function_table.set_guest_bndcfgs = vmx_set_guest_bndcfgs;
-        vmx_function_table.get_guest_bndcfgs = vmx_get_guest_bndcfgs;
-    }
-
     lbr_tsx_fixup_check();
     ler_to_fixup_check();
 
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index c8b62b514b42..7bb7d0b77d32 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -148,9 +148,6 @@ struct hvm_function_table {
     int  (*get_guest_pat)(struct vcpu *v, u64 *);
     int  (*set_guest_pat)(struct vcpu *v, u64);
 
-    bool (*get_guest_bndcfgs)(struct vcpu *v, u64 *);
-    bool (*set_guest_bndcfgs)(struct vcpu *v, u64);
-
     void (*set_tsc_offset)(struct vcpu *v, u64 offset, u64 at_tsc);
 
     void (*inject_event)(const struct x86_event *event);
@@ -291,8 +288,6 @@ void hvm_set_segment_register(struct vcpu *v, enum x86_segment seg,
 
 void hvm_set_info_guest(struct vcpu *v);
 
-bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val);
-
 int hvm_vmexit_cpuid(struct cpu_user_regs *regs, unsigned int inst_len);
 void hvm_migrate_timers(struct vcpu *v);
 void hvm_do_resume(struct vcpu *v);
@@ -479,12 +474,6 @@ static inline unsigned long hvm_get_shadow_gs_base(struct vcpu *v)
     return alternative_call(hvm_funcs.get_shadow_gs_base, v);
 }
 
-static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
-{
-    return hvm_funcs.get_guest_bndcfgs &&
-           alternative_call(hvm_funcs.get_guest_bndcfgs, v, val);
-}
-
 #define has_hvm_params(d) \
     ((d)->arch.hvm.params != NULL)
 
@@ -768,7 +757,6 @@ int hvm_guest_x86_mode(struct vcpu *v);
 unsigned long hvm_get_shadow_gs_base(struct vcpu *v);
 void hvm_cpuid_policy_changed(struct vcpu *v);
 void hvm_set_tsc_offset(struct vcpu *v, uint64_t offset, uint64_t at_tsc);
-bool hvm_get_guest_bndcfgs(struct vcpu *v, uint64_t *val);
 
 /* End of prototype list */
 
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index fd4012808472..9e22404eb24a 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -30,6 +30,7 @@
 #include <asm/msr.h>
 #include <asm/pv/domain.h>
 #include <asm/setup.h>
+#include <asm/xstate.h>
 
 #include <public/hvm/params.h>
 
@@ -323,10 +324,9 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         break;
 
     case MSR_IA32_BNDCFGS:
-        if ( !cp->feat.mpx || !is_hvm_domain(d) ||
-             !hvm_get_guest_bndcfgs(v, val) )
+        if ( !cp->feat.mpx ) /* Implies Intel HVM only */
             goto gp_fault;
-        break;
+        goto get_reg;
 
     case MSR_IA32_XSS:
         if ( !cp->xstate.xsaves )
@@ -594,11 +594,33 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         ret = guest_wrmsr_x2apic(v, msr, val);
         break;
 
+#ifdef CONFIG_HVM
     case MSR_IA32_BNDCFGS:
-        if ( !cp->feat.mpx || !is_hvm_domain(d) ||
-             !hvm_set_guest_bndcfgs(v, val) )
+        if ( !cp->feat.mpx || /* Implies Intel HVM only */
+             !is_canonical_address(val) || (val & IA32_BNDCFGS_RESERVED) )
             goto gp_fault;
-        break;
+
+        /*
+         * While MPX instructions are supposed to be gated on XCR0.BND*, let's
+         * nevertheless force the relevant XCR0 bits on when the feature is
+         * being enabled in BNDCFGS.
+         */
+        if ( (val & IA32_BNDCFGS_ENABLE) &&
+             !(v->arch.xcr0_accum & (X86_XCR0_BNDREGS | X86_XCR0_BNDCSR)) )
+        {
+            uint64_t xcr0 = get_xcr0();
+
+            if ( v != current ||
+                 handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+                               xcr0 | X86_XCR0_BNDREGS | X86_XCR0_BNDCSR) )
+                goto gp_fault;
+
+            if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK, xcr0) )
+                /* nothing, best effort only */;
+        }
+
+        goto set_reg;
+#endif /* CONFIG_HVM */
 
     case MSR_IA32_XSS:
         if ( !cp->xstate.xsaves )
-- 
2.11.0




* Re: [PATCH v2 1/4] x86/guest: Introduce {get,set}_reg() infrastructure
  2022-01-17 18:34 ` [PATCH v2 1/4] x86/guest: Introduce {get,set}_reg() infrastructure Andrew Cooper
@ 2022-01-19 13:28   ` Jan Beulich
  2022-01-19 18:00     ` Andrew Cooper
  0 siblings, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2022-01-19 13:28 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, Xen-devel

On 17.01.2022 19:34, Andrew Cooper wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3744,6 +3744,28 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
>      return X86EMUL_EXCEPTION;
>  }
>  
> +uint64_t hvm_get_reg(struct vcpu *v, unsigned int reg)
> +{
> +    ASSERT(v == current || !vcpu_runnable(v));
> +
> +    switch ( reg )
> +    {
> +    default:
> +        return alternative_call(hvm_funcs.get_reg, v, reg);
> +    }
> +}
> +
> +void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
> +{
> +    ASSERT(v == current || !vcpu_runnable(v));
> +
> +    switch ( reg )
> +    {
> +    default:
> +        return alternative_vcall(hvm_funcs.set_reg, v, reg, val);

I'm inclined to ask to drop "return" from here.

Also, for both functions, without it being clear for what kind of
registers beyond MSRs this may want using down the road, I wonder
whether uint64_t is actually wide enough.

> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -2469,6 +2469,33 @@ static bool svm_get_pending_event(struct vcpu *v, struct x86_event *info)
>      return true;
>  }
>  
> +static uint64_t svm_get_reg(struct vcpu *v, unsigned int reg)
> +{
> +    struct domain *d = v->domain;
> +
> +    switch ( reg )
> +    {
> +    default:
> +        printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
> +               __func__, v, reg);

Is __func__ actually of much use here and in similar further places?

> @@ -852,6 +867,15 @@ static inline int hvm_vmtrace_get_option(
>      return -EOPNOTSUPP;
>  }
>  
> +static inline uint64_t pv_get_reg(struct vcpu *v, unsigned int reg)
> +{
> +    ASSERT_UNREACHABLE();
> +}
> +static inline void pv_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
> +{
> +    ASSERT_UNREACHABLE();
> +}

Were these meant to have hvm_ prefixes?

With at least this last aspect addressed
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [PATCH v2 2/4] x86/msr: Split MSR_SPEC_CTRL handling
  2022-01-17 18:34 ` [PATCH v2 2/4] x86/msr: Split MSR_SPEC_CTRL handling Andrew Cooper
@ 2022-01-19 13:30   ` Jan Beulich
  0 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2022-01-19 13:30 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Xen-devel

On 17.01.2022 19:34, Andrew Cooper wrote:
> In order to fix a VT-x bug, and support MSR_SPEC_CTRL on AMD, move
> MSR_SPEC_CTRL handling into the new {get,set}_reg() infrastructure.
> 
> Duplicate the msrs->spec_ctrl.raw accesses in the PV and VT-x paths for now.
> The SVM path is currently unreachable because of the CPUID policy.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling
  2022-01-17 18:34 ` [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling Andrew Cooper
@ 2022-01-19 13:42   ` Jan Beulich
  2022-01-19 17:00     ` Andrew Cooper
  2022-01-21  7:56   ` Jan Beulich
  1 sibling, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2022-01-19 13:42 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, Xen-devel

On 17.01.2022 19:34, Andrew Cooper wrote:
> --- a/xen/arch/x86/hvm/vmx/entry.S
> +++ b/xen/arch/x86/hvm/vmx/entry.S
> @@ -35,7 +35,14 @@ ENTRY(vmx_asm_vmexit_handler)
>  
>          /* SPEC_CTRL_ENTRY_FROM_VMX    Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
>          ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
> -        ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM, X86_FEATURE_SC_MSR_HVM
> +
> +        .macro restore_spec_ctrl
> +            mov    $MSR_SPEC_CTRL, %ecx
> +            movzbl CPUINFO_xen_spec_ctrl(%rsp), %eax
> +            xor    %edx, %edx
> +            wrmsr
> +        .endm
> +        ALTERNATIVE "", restore_spec_ctrl, X86_FEATURE_SC_MSR_HVM
>          /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
>  
>          /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
> @@ -82,8 +89,7 @@ UNLIKELY_END(realmode)
>          mov VCPUMSR_spec_ctrl_raw(%rax), %eax
>  
>          /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
> -        /* SPEC_CTRL_EXIT_TO_VMX   Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
> -        ALTERNATIVE "", DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM
> +        /* SPEC_CTRL_EXIT_TO_VMX   Req: %rsp=regs/cpuinfo              Clob:    */
>          ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)), X86_FEATURE_SC_VERW_HVM

I notice you did update this clobber remark, but what about the one further
up in context?

> --- a/xen/arch/x86/include/asm/msr.h
> +++ b/xen/arch/x86/include/asm/msr.h
> @@ -287,7 +287,15 @@ extern struct msr_policy     raw_msr_policy,
>  /* Container object for per-vCPU MSRs */
>  struct vcpu_msrs
>  {
> -    /* 0x00000048 - MSR_SPEC_CTRL */
> +    /*
> +     * 0x00000048 - MSR_SPEC_CTRL
> +     *
> +     * For PV guests, this holds the guest kernel value.  It is accessed on
> +     * every entry/exit path.
> +     *
> +     * For VT-x guests, the guest value is held in the MSR guest load/save
> +     * list.
> +     */
>      struct {
>          uint32_t raw;
>      } spec_ctrl;

To stand a chance of noticing bad use of this field for VT-x guests, would
it make sense to store some sentinel value into this field for all involved
vCPU-s?
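
Purely as an illustration of what I mean (value and exact placement would of
course need thought):

    /* Hypothetical: poison the field, unused for VT-x, so stray reads stand out. */
    v->arch.msrs->spec_ctrl.raw = ~0u;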

Jan




* Re: [PATCH v2 5/4] x86/hvm: Drop hvm_{get,set}_guest_bndcfgs() and use {get,set}_regs() instead
  2022-01-17 19:25 ` [PATCH v2 5/4] x86/hvm: Drop hvm_{get,set}_guest_bndcfgs() and use {get,set}_regs() instead Andrew Cooper
@ 2022-01-19 13:50   ` Jan Beulich
  2022-01-19 16:53     ` Andrew Cooper
  0 siblings, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2022-01-19 13:50 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, Xen-devel

On 17.01.2022 20:25, Andrew Cooper wrote:
> hvm_{get,set}_guest_bndcfgs() are thin wrappers around accessing MSR_BNDCFGS.
> 
> MPX was implemented on Skylake uarch CPUs and dropped in subsequent CPUs, and
> is disabled by default in Xen VMs.
> 
> It would be nice to move all the logic into vmx_msr_{read,write}_intercept(),
> but the common HVM migration code uses guest_{rd,wr}msr().  Therefore, use
> {get,set}_regs() to reduce the quantity of "common" HVM code.
> 
> In lieu of having hvm_set_guest_bndcfgs() split out, use some #ifdef
> CONFIG_HVM in guest_wrmsr().  In vmx_{get,set}_regs(), split the switch
> statements into two depending on whether they require remote VMCS acquisition
> or not.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

One remark:

> @@ -323,10 +324,9 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>          break;
>  
>      case MSR_IA32_BNDCFGS:
> -        if ( !cp->feat.mpx || !is_hvm_domain(d) ||
> -             !hvm_get_guest_bndcfgs(v, val) )
> +        if ( !cp->feat.mpx ) /* Implies Intel HVM only */

Wouldn't it make sense to accompany this comment by ...

>              goto gp_fault;
> -        break;

    ASSERT(is_hvm_domain(d));

(and then the same on the "set" path)?

Jan




* Re: [PATCH v2 5/4] x86/hvm: Drop hvm_{get,set}_guest_bndcfgs() and use {get,set}_regs() instead
  2022-01-19 13:50   ` Jan Beulich
@ 2022-01-19 16:53     ` Andrew Cooper
  2022-01-20  8:20       ` Jan Beulich
  0 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2022-01-19 16:53 UTC (permalink / raw)
  To: Jan Beulich, Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, Xen-devel

On 19/01/2022 13:50, Jan Beulich wrote:
> On 17.01.2022 20:25, Andrew Cooper wrote:
>> hvm_{get,set}_guest_bndcfgs() are thin wrappers around accessing MSR_BNDCFGS.
>>
>> MPX was implemented on Skylake uarch CPUs and dropped in subsequent CPUs, and
>> is disabled by default in Xen VMs.
>>
>> It would be nice to move all the logic into vmx_msr_{read,write}_intercept(),
>> but the common HVM migration code uses guest_{rd,wr}msr().  Therefore, use
>> {get,set}_regs() to reduce the quantity of "common" HVM code.
>>
>> In lieu of having hvm_set_guest_bndcfgs() split out, use some #ifdef
>> CONFIG_HVM in guest_wrmsr().  In vmx_{get,set}_regs(), split the switch
>> statements into two depending on whether they require remote VMCS acquisition
>> or not.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> One remark:
>
>> @@ -323,10 +324,9 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>>          break;
>>  
>>      case MSR_IA32_BNDCFGS:
>> -        if ( !cp->feat.mpx || !is_hvm_domain(d) ||
>> -             !hvm_get_guest_bndcfgs(v, val) )
>> +        if ( !cp->feat.mpx ) /* Implies Intel HVM only */
> Wouldn't it make sense to accompany this comment by ...
>
>>              goto gp_fault;
>> -        break;
>     ASSERT(is_hvm_domain(d));
>
> (and then the same on the "set" path)?

So this is the reason for the default logic in the {get,set}_reg()
path.  The absence of MSR_BNDCFGS in the PV and SVM paths will cause the
VM to be crashed cleanly.  If you're on VMX on a non-MPX-capable
system, the VMREAD/VMWRITE will hit a BUG (which in due course I want to
downgrade to a domain crash).

It's a bit more friendly than an ASSERT() (doesn't take the system
down), is present in release builds too, and is more precise as it also
excludes SVM.

~Andrew

P.S. I'm still trying to decide on an acceptable name to hide {
ASSERT_UNREACHABLE(); gprintk(); domain_crash() } behind, so we can
downgrade more BUG()/etc to more runtime-friendly options.
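
Roughly this shape, perhaps (name and exact form purely illustrative):

    #define FAIL_AND_CRASH(d, fmt, args...)       \
        do {                                      \
            ASSERT_UNREACHABLE();                 \
            gprintk(XENLOG_ERR, fmt, ## args);    \
            domain_crash(d);                      \
        } while ( 0 )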



* Re: [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling
  2022-01-19 13:42   ` Jan Beulich
@ 2022-01-19 17:00     ` Andrew Cooper
  2022-01-20  8:14       ` Jan Beulich
  0 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2022-01-19 17:00 UTC (permalink / raw)
  To: Jan Beulich, Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, Xen-devel

On 19/01/2022 13:42, Jan Beulich wrote:
> On 17.01.2022 19:34, Andrew Cooper wrote:
>> --- a/xen/arch/x86/hvm/vmx/entry.S
>> +++ b/xen/arch/x86/hvm/vmx/entry.S
>> @@ -35,7 +35,14 @@ ENTRY(vmx_asm_vmexit_handler)
>>  
>>          /* SPEC_CTRL_ENTRY_FROM_VMX    Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
>>          ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
>> -        ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM, X86_FEATURE_SC_MSR_HVM
>> +
>> +        .macro restore_spec_ctrl
>> +            mov    $MSR_SPEC_CTRL, %ecx
>> +            movzbl CPUINFO_xen_spec_ctrl(%rsp), %eax
>> +            xor    %edx, %edx
>> +            wrmsr
>> +        .endm
>> +        ALTERNATIVE "", restore_spec_ctrl, X86_FEATURE_SC_MSR_HVM
>>          /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
>>  
>>          /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
>> @@ -82,8 +89,7 @@ UNLIKELY_END(realmode)
>>          mov VCPUMSR_spec_ctrl_raw(%rax), %eax
>>  
>>          /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
>> -        /* SPEC_CTRL_EXIT_TO_VMX   Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
>> -        ALTERNATIVE "", DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM
>> +        /* SPEC_CTRL_EXIT_TO_VMX   Req: %rsp=regs/cpuinfo              Clob:    */
>>          ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)), X86_FEATURE_SC_VERW_HVM
> I notice you did update this clobber remark, but what about the one further
> up in context?

What about it?  It still clobbers %eax, %ecx and %edx.

>> --- a/xen/arch/x86/include/asm/msr.h
>> +++ b/xen/arch/x86/include/asm/msr.h
>> @@ -287,7 +287,15 @@ extern struct msr_policy     raw_msr_policy,
>>  /* Container object for per-vCPU MSRs */
>>  struct vcpu_msrs
>>  {
>> -    /* 0x00000048 - MSR_SPEC_CTRL */
>> +    /*
>> +     * 0x00000048 - MSR_SPEC_CTRL
>> +     *
>> +     * For PV guests, this holds the guest kernel value.  It is accessed on
>> +     * every entry/exit path.
>> +     *
>> +     * For VT-x guests, the guest value is held in the MSR guest load/save
>> +     * list.
>> +     */
>>      struct {
>>          uint32_t raw;
>>      } spec_ctrl;
> To stand a chance of noticing bad use of this field for VT-x guests, would
> it make sense to store some sentinel value into this field for all involved
> vCPU-s?

The usage is going to get more complicated before we're done.  I'd like
to wait until the churn reduces before applying invariants like that.

~Andrew


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 1/4] x86/guest: Introduce {get,set}_reg() infrastructure
  2022-01-19 13:28   ` Jan Beulich
@ 2022-01-19 18:00     ` Andrew Cooper
  0 siblings, 0 replies; 18+ messages in thread
From: Andrew Cooper @ 2022-01-19 18:00 UTC (permalink / raw)
  To: Jan Beulich, Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, Xen-devel

On 19/01/2022 13:28, Jan Beulich wrote:
> On 17.01.2022 19:34, Andrew Cooper wrote:
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3744,6 +3744,28 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
>>      return X86EMUL_EXCEPTION;
>>  }
>>  
>> +uint64_t hvm_get_reg(struct vcpu *v, unsigned int reg)
>> +{
>> +    ASSERT(v == current || !vcpu_runnable(v));
>> +
>> +    switch ( reg )
>> +    {
>> +    default:
>> +        return alternative_call(hvm_funcs.get_reg, v, reg);
>> +    }
>> +}
>> +
>> +void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
>> +{
>> +    ASSERT(v == current || !vcpu_runnable(v));
>> +
>> +    switch ( reg )
>> +    {
>> +    default:
>> +        return alternative_vcall(hvm_funcs.set_reg, v, reg, val);
> I'm inclined to ask to drop "return" from here.

It's a tossup between this and a following break.  I was guesstimating
based on the subsequent patches, because there isn't a plausible use
for common logic following the switch statement.

> Also, for both functions, without it being clear for what kind of
> registers beyond MSRs this may want using down the road, I wonder
> whether uint64_t is actually wide enough.

The TSC scaling/offset registers will probably be the easiest to move,
because they're driven almost exclusively from common code.
nhvm_vcpu_p2m_base() too, because it is only read, and is trivial.

cr2 would be an easy example, except it's probably not useful to split
out of the general cr paths.

Another MSR example which is simple to move (and drop hooks) is
get_shadow_gs_base().


The segment registers are the only obvious examples which don't fit into
uint64_t.

As a tangent, code generation for get/set_sreg() would probably be far
better if get() returned by value and set() took by value.  struct
segment_register is only a pair of registers' worth of data, and the
optimiser can probably keep most callsites from spilling to the stack.
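
As a sketch of the shape I mean (hypothetical prototypes, not what is
in the tree today):

    /* By-value accessors - illustrative only. */
    struct segment_register hvm_get_sreg(struct vcpu *v,
                                         enum x86_segment seg);
    void hvm_set_sreg(struct vcpu *v, enum x86_segment seg,
                      struct segment_register sreg);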

>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -2469,6 +2469,33 @@ static bool svm_get_pending_event(struct vcpu *v, struct x86_event *info)
>>      return true;
>>  }
>>  
>> +static uint64_t svm_get_reg(struct vcpu *v, unsigned int reg)
>> +{
>> +    struct domain *d = v->domain;
>> +
>> +    switch ( reg )
>> +    {
>> +    default:
>> +        printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
>> +               __func__, v, reg);
> Is __func__ actually of much use here and in similar further places?

Yes.  Admittedly more so before I added the domain_crash(), but it is
information which isn't otherwise printed.

It is specifically useful because nothing in the domain_crash() path
distinguishes PV and HVM guests, meaning that the output is ambiguous at
a glance when investigating customer logs.  VT-x vs SVM is less
ambiguous at a glance because Intel vs AMD information is plentiful in
dmesg, but there's no harm in making it clearer.

>> @@ -852,6 +867,15 @@ static inline int hvm_vmtrace_get_option(
>>      return -EOPNOTSUPP;
>>  }
>>  
>> +static inline uint64_t pv_get_reg(struct vcpu *v, unsigned int reg)
>> +{
>> +    ASSERT_UNREACHABLE();
>> +}
>> +static inline void pv_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
>> +{
>> +    ASSERT_UNREACHABLE();
>> +}
> Were these meant to have hvm_ prefixes?

Oops yes.  I'm not entirely sure if the stubs are necessary, given our
usual DCE rule.  I'll try some !PV and !HVM builds and see whether I can
drop them entirely.
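
(Roughly the DCE pattern in question - a sketch rather than the exact
code in the series: with !CONFIG_HVM, is_hvm_domain() is compile-time
false, so the guarded call is dead code and the static inline stub
shouldn't be needed for the build to link.)

    if ( is_hvm_domain(d) )
        *val = hvm_get_reg(v, msr);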

> With at least this last aspect addressed
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

~Andrew


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling
  2022-01-19 17:00     ` Andrew Cooper
@ 2022-01-20  8:14       ` Jan Beulich
  2022-01-20 11:55         ` Andrew Cooper
  0 siblings, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2022-01-20  8:14 UTC (permalink / raw)
  To: Andrew Cooper, Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, Xen-devel

On 19.01.2022 18:00, Andrew Cooper wrote:
> On 19/01/2022 13:42, Jan Beulich wrote:
>> On 17.01.2022 19:34, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/hvm/vmx/entry.S
>>> +++ b/xen/arch/x86/hvm/vmx/entry.S
>>> @@ -35,7 +35,14 @@ ENTRY(vmx_asm_vmexit_handler)
>>>  
>>>          /* SPEC_CTRL_ENTRY_FROM_VMX    Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
>>>          ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
>>> -        ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM, X86_FEATURE_SC_MSR_HVM
>>> +
>>> +        .macro restore_spec_ctrl
>>> +            mov    $MSR_SPEC_CTRL, %ecx
>>> +            movzbl CPUINFO_xen_spec_ctrl(%rsp), %eax
>>> +            xor    %edx, %edx
>>> +            wrmsr
>>> +        .endm
>>> +        ALTERNATIVE "", restore_spec_ctrl, X86_FEATURE_SC_MSR_HVM
>>>          /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
>>>  
>>>          /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
>>> @@ -82,8 +89,7 @@ UNLIKELY_END(realmode)
>>>          mov VCPUMSR_spec_ctrl_raw(%rax), %eax
>>>  
>>>          /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
>>> -        /* SPEC_CTRL_EXIT_TO_VMX   Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
>>> -        ALTERNATIVE "", DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM
>>> +        /* SPEC_CTRL_EXIT_TO_VMX   Req: %rsp=regs/cpuinfo              Clob:    */
>>>          ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)), X86_FEATURE_SC_VERW_HVM
>> I notice you did update this clobber remark, but what about the one further
>> up in context?
> 
> What about it?  It still clobbers %eax, %ecx and %edx.

Oh, sorry - I did look at DO_OVERWRITE_RSB only, not paying attention
to the now open-coded 2nd part, which - due to the blank line - doesn't
appear connected to the comment anymore.

Jan



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 5/4] x86/hvm: Drop hvm_{get,set}_guest_bndcfgs() and use {get,set}_regs() instead
  2022-01-19 16:53     ` Andrew Cooper
@ 2022-01-20  8:20       ` Jan Beulich
  0 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2022-01-20  8:20 UTC (permalink / raw)
  To: Andrew Cooper, Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, Xen-devel

On 19.01.2022 17:53, Andrew Cooper wrote:
> On 19/01/2022 13:50, Jan Beulich wrote:
>> On 17.01.2022 20:25, Andrew Cooper wrote:
>>> @@ -323,10 +324,9 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>>>          break;
>>>  
>>>      case MSR_IA32_BNDCFGS:
>>> -        if ( !cp->feat.mpx || !is_hvm_domain(d) ||
>>> -             !hvm_get_guest_bndcfgs(v, val) )
>>> +        if ( !cp->feat.mpx ) /* Implies Intel HVM only */
>> Wouldn't it make sense to accompany this comment by ...
>>
>>>              goto gp_fault;
>>> -        break;
>>     ASSERT(is_hvm_domain(d));
>>
>> (and then the same on the "set" path)?
> 
> So this is the reason for the default logic in the {get,set}_reg()
> path.  The absence of MSR_BNDCFGS in the PV and SVM paths will cause the
> VM to be crashed cleanly.  If you're on a VMX on a non-MPX capable
> system, the VMREAD/VMWRITE will hit a BUG (which in due course I want to
> downgrade to a domain crash).
> 
> It's a bit more friendly than an ASSERT() (doesn't take the system
> down), is present in release builds too, and more precise as it excludes
> SVM too.

I see, makes sense.

Jan



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling
  2022-01-20  8:14       ` Jan Beulich
@ 2022-01-20 11:55         ` Andrew Cooper
  2022-01-20 12:11           ` Jan Beulich
  0 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2022-01-20 11:55 UTC (permalink / raw)
  To: Jan Beulich, Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, Xen-devel

On 20/01/2022 08:14, Jan Beulich wrote:
> On 19.01.2022 18:00, Andrew Cooper wrote:
>> On 19/01/2022 13:42, Jan Beulich wrote:
>>> On 17.01.2022 19:34, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/hvm/vmx/entry.S
>>>> +++ b/xen/arch/x86/hvm/vmx/entry.S
>>>> @@ -35,7 +35,14 @@ ENTRY(vmx_asm_vmexit_handler)
>>>>  
>>>>          /* SPEC_CTRL_ENTRY_FROM_VMX    Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
>>>>          ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
>>>> -        ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM, X86_FEATURE_SC_MSR_HVM
>>>> +
>>>> +        .macro restore_spec_ctrl
>>>> +            mov    $MSR_SPEC_CTRL, %ecx
>>>> +            movzbl CPUINFO_xen_spec_ctrl(%rsp), %eax
>>>> +            xor    %edx, %edx
>>>> +            wrmsr
>>>> +        .endm
>>>> +        ALTERNATIVE "", restore_spec_ctrl, X86_FEATURE_SC_MSR_HVM
>>>>          /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
>>>>  
>>>>          /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
>>>> @@ -82,8 +89,7 @@ UNLIKELY_END(realmode)
>>>>          mov VCPUMSR_spec_ctrl_raw(%rax), %eax
>>>>  
>>>>          /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
>>>> -        /* SPEC_CTRL_EXIT_TO_VMX   Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
>>>> -        ALTERNATIVE "", DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM
>>>> +        /* SPEC_CTRL_EXIT_TO_VMX   Req: %rsp=regs/cpuinfo              Clob:    */
>>>>          ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)), X86_FEATURE_SC_VERW_HVM
>>> I notice you did update this clobber remark, but what about the one further
>>> up in context?
>> What about it?  It still clobbers %eax, %ecx and %edx.
> Oh, sorry - I did look at DO_OVERWRITE_RSB only, not paying attention
> to the now open-coded 2nd part, which - due to the blank line - doesn't
> appear connected to the comment anymore.

Yeah - it's a little harder now that it isn't a single line.  The
req/clob information really is most useful at the start of the block,
because that's where it is important to get the context correct.

Can I take this as an R-by then?

~Andrew


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling
  2022-01-20 11:55         ` Andrew Cooper
@ 2022-01-20 12:11           ` Jan Beulich
  0 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2022-01-20 12:11 UTC (permalink / raw)
  To: Andrew Cooper, Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, Xen-devel

On 20.01.2022 12:55, Andrew Cooper wrote:
> On 20/01/2022 08:14, Jan Beulich wrote:
>> On 19.01.2022 18:00, Andrew Cooper wrote:
>>> On 19/01/2022 13:42, Jan Beulich wrote:
>>>> On 17.01.2022 19:34, Andrew Cooper wrote:
>>>>> --- a/xen/arch/x86/hvm/vmx/entry.S
>>>>> +++ b/xen/arch/x86/hvm/vmx/entry.S
>>>>> @@ -35,7 +35,14 @@ ENTRY(vmx_asm_vmexit_handler)
>>>>>  
>>>>>          /* SPEC_CTRL_ENTRY_FROM_VMX    Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
>>>>>          ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
>>>>> -        ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM, X86_FEATURE_SC_MSR_HVM
>>>>> +
>>>>> +        .macro restore_spec_ctrl
>>>>> +            mov    $MSR_SPEC_CTRL, %ecx
>>>>> +            movzbl CPUINFO_xen_spec_ctrl(%rsp), %eax
>>>>> +            xor    %edx, %edx
>>>>> +            wrmsr
>>>>> +        .endm
>>>>> +        ALTERNATIVE "", restore_spec_ctrl, X86_FEATURE_SC_MSR_HVM
>>>>>          /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
>>>>>  
>>>>>          /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
>>>>> @@ -82,8 +89,7 @@ UNLIKELY_END(realmode)
>>>>>          mov VCPUMSR_spec_ctrl_raw(%rax), %eax
>>>>>  
>>>>>          /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
>>>>> -        /* SPEC_CTRL_EXIT_TO_VMX   Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
>>>>> -        ALTERNATIVE "", DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM
>>>>> +        /* SPEC_CTRL_EXIT_TO_VMX   Req: %rsp=regs/cpuinfo              Clob:    */
>>>>>          ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)), X86_FEATURE_SC_VERW_HVM
>>>> I notice you did update this clobber remark, but what about the one further
>>>> up in context?
>>> What about it?  It still clobbers %eax, %ecx and %edx.
>> Oh, sorry - I did look at DO_OVERWRITE_RSB only, not paying attention
>> to the now open-coded 2nd part, which - due to the blank line - doesn't
>> appear connected to the comment anymore.
> 
> Yeah - it's a little harder now that it isn't a single line.  The
> req/clob information really is most useful at the start of the block,
> because that's where it is important to get the context correct.
> 
> Can I take this as an R-by then?

Please do; I should have said so earlier on.

Jan



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling
  2022-01-17 18:34 ` [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling Andrew Cooper
  2022-01-19 13:42   ` Jan Beulich
@ 2022-01-21  7:56   ` Jan Beulich
  1 sibling, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2022-01-21  7:56 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, Xen-devel

On 17.01.2022 19:34, Andrew Cooper wrote:
> The logic was based on a mistaken understanding of how NMI blocking on vmexit
> works.  NMIs are only blocked for EXIT_REASON_NMI, and not for general exits.
> Therefore, an NMI can in general hit early in the vmx_asm_vmexit_handler path,
> and the guest's value will be clobbered before it is saved.
> 
> Switch to using MSR load/save lists.  This causes the guest value to be saved
> atomically with respect to NMIs/MCEs/etc.
> 
> First, update vmx_cpuid_policy_changed() to configure the load/save lists at
> the same time as configuring the intercepts.  This function is always used in
> remote context, so extend the vmx_vmcs_{enter,exit}() block to cover the whole
> function, rather than having multiple remote acquisitions of the same VMCS.
> 
> Both of vmx_{add,del}_guest_msr() can fail.  The -ESRCH delete case is fine,
> but all others are fatal to the running of the VM, so handle them using
> domain_crash() - this path is only used during domain construction anyway.
> 
> Second, update vmx_{get,set}_reg() to use the MSR load/save lists rather than
> vcpu_msrs, and update the vcpu_msrs comment to describe the new state
> location.
> 
> Finally, adjust the entry/exit asm.
> 
> Because the guest value is saved and loaded atomically, we do not need to
> manually load the guest value, nor do we need to enable SCF_use_shadow.  This
> lets us remove the use of DO_SPEC_CTRL_EXIT_TO_GUEST.  Additionally,
> SPEC_CTRL_ENTRY_FROM_PV gets removed too, because on an early entry failure,
> we're no longer in the guest MSR_SPEC_CTRL context needing to switch back to
> Xen's context.
> 
> The only action remaining is to load Xen's MSR_SPEC_CTRL value on vmexit.  We
> could in principle use the host MSR list, but it is expected to complicate future
> future work.  Delete DO_SPEC_CTRL_ENTRY_FROM_HVM entirely, and use a shorter
> code sequence to simply reload Xen's setting from the top-of-stack block.
> 
> Adjust the comment at the top of spec_ctrl_asm.h in light of this bugfix.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Jun Nakajima <jun.nakajima@intel.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> 
> Needs backporting as far as people can tolerate.

Besides the earlier patches in this series, are there any other prereqs
that you're aware of and which aren't there yet in the stable trees?

Jan



^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2022-01-21  7:56 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-01-17 18:34 [PATCH v2 0/4] x86/spec-ctrl: Fix NMI race condition Andrew Cooper
2022-01-17 18:34 ` [PATCH v2 1/4] x86/guest: Introduce {get,set}_reg() infrastructure Andrew Cooper
2022-01-19 13:28   ` Jan Beulich
2022-01-19 18:00     ` Andrew Cooper
2022-01-17 18:34 ` [PATCH v2 2/4] x86/msr: Split MSR_SPEC_CTRL handling Andrew Cooper
2022-01-19 13:30   ` Jan Beulich
2022-01-17 18:34 ` [PATCH v2 3/4] x86/spec-ctrl: Drop SPEC_CTRL_{ENTRY_FROM,EXIT_TO}_HVM Andrew Cooper
2022-01-17 18:34 ` [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling Andrew Cooper
2022-01-19 13:42   ` Jan Beulich
2022-01-19 17:00     ` Andrew Cooper
2022-01-20  8:14       ` Jan Beulich
2022-01-20 11:55         ` Andrew Cooper
2022-01-20 12:11           ` Jan Beulich
2022-01-21  7:56   ` Jan Beulich
2022-01-17 19:25 ` [PATCH v2 5/4] x86/hvm: Drop hvm_{get,set}_guest_bndcfgs() and use {get,set}_regs() instead Andrew Cooper
2022-01-19 13:50   ` Jan Beulich
2022-01-19 16:53     ` Andrew Cooper
2022-01-20  8:20       ` Jan Beulich
