* [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263)
@ 2018-06-12 11:36 Julien Grall
  2018-06-12 11:36 ` [PATCH v3 01/13] xen/arm: domain: Zero the per-vCPU cpu_info Julien Grall
                   ` (13 more replies)
  0 siblings, 14 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

Hi all,

This patch series implements the Xen hypervisor side of the "Spectre-v4"
(CVE-2018-3639) mitigation known as "Speculative Store Bypass Disable"
(SSBD).

More information can be found at:
  https://bugs.chromium.org/p/project-zero/issues/detail?id=1528
  https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability

For all released Arm Cortex-A cores that are affected by this issue, the
preferred mitigation is simply to set a chicken bit in the firmware during
CPU initialization, and therefore no change to Xen is required. Other CPUs
may require the chicken bit to be toggled dynamically (for example, when
switching between kernel-mode and hypervisor-mode), and this is achieved by
calling into EL3 via an SMC which has been published as part of the latest
SMCCC specification:
  https://developer.arm.com/cache-speculation-vulnerability-firmware-specification

as well as an ATF update for the released ARM cores affected by SSBD:
  https://github.com/ARM-software/arm-trusted-firmware/pull/1392

These patches provide the following:
  1. Safe probing of firmware to establish which CPUs in the system
     require calling into EL3 as part of the mitigation
  2. A command-line option to force SSBD mitigation to be always on,
     always off, or dynamically toggled (default) for CPUs that require
     the EL3 call.
  3. An initial implementation of the call via Xen, which exposes the
     mitigation to the guest via an HVC interface.

This series also provides bug fixes and new infrastructure required to
implement the mitigation:
  1. Zero the per-vCPU cpu_info
  2. Provide generic assembly macros
  3. Provide an alternative callback mechanism (RFC)

A branch can be found with all the patches at:
  https://xenbits.xen.org/git-http/people/julieng/xen-unstable.git
  branch ssbd/v3

Cheers,


Julien Grall (13):
  xen/arm: domain: Zero the per-vCPU cpu_info
  xen/arm64: entry: Use named label in guest_sync
  xen/arm: setup: Check errata for boot CPU later on
  xen/arm: Add ARCH_WORKAROUND_2 probing
  xen/arm: Add command line option to control SSBD mitigation
  xen/arm: Add ARCH_WORKAROUND_2 support for guests
  xen/arm: Simplify alternative patching of non-writable region
  xen/arm: alternatives: Add dynamic patching feature
  xen/arm64: Add generic assembly macros
  xen/arm64: Implement a fast path for handling SMCCC_ARCH_WORKAROUND_2
  xen/arm: Kconfig: Move HARDEN_BRANCH_PREDICTOR under "Architecture
    features"
  xen/arm: smccc: Fix indentation in ARM_SMCCC_ARCH_WORKAROUND_1_FID
  xen/arm: Avoid to use current everywhere in enter_hypervisor_head

 docs/misc/xen-command-line.markdown |  18 +++++
 xen/arch/arm/Kconfig                |  44 +++++++----
 xen/arch/arm/alternative.c          |  86 +++++++++++----------
 xen/arch/arm/arm64/asm-offsets.c    |   2 +
 xen/arch/arm/arm64/entry.S          |  48 +++++++++++-
 xen/arch/arm/cpuerrata.c            | 150 ++++++++++++++++++++++++++++++++++++
 xen/arch/arm/domain.c               |   9 +++
 xen/arch/arm/setup.c                |   8 +-
 xen/arch/arm/traps.c                |  32 ++++++--
 xen/arch/arm/vsmc.c                 |  37 +++++++++
 xen/include/asm-arm/alternative.h   |  44 +++++++++--
 xen/include/asm-arm/arm64/macros.h  |  25 ++++++
 xen/include/asm-arm/cpuerrata.h     |  42 ++++++++++
 xen/include/asm-arm/cpufeature.h    |   3 +-
 xen/include/asm-arm/current.h       |   6 +-
 xen/include/asm-arm/macros.h        |   2 +-
 xen/include/asm-arm/smccc.h         |  13 +++-
 17 files changed, 491 insertions(+), 78 deletions(-)
 create mode 100644 xen/include/asm-arm/arm64/macros.h

-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v3 01/13] xen/arm: domain: Zero the per-vCPU cpu_info
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 11:36 ` [PATCH v3 02/13] xen/arm64: entry: Use named label in guest_sync Julien Grall
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

A stack is allocated per vCPU for use by Xen. The allocation is done
with alloc_xenheap_pages, which does not zero the memory returned. However,
the top of the stack contains information that will be used to
store the initial state of the vCPU (see struct cpu_info). Some of the
fields may not be initialized, which can lead to using/leaking bits of
previous memory in some cases on the first run of a vCPU (AFAICT this only
happens on vCPU0 for Dom0).

This is part of XSA-263.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v3:
        - Add Stefano's reviewed-by

    Changes in v2:
        - Zero only cpu_info
---
 xen/arch/arm/domain.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ec0f042bf7..5a2a9a6b83 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -550,6 +550,7 @@ int vcpu_initialise(struct vcpu *v)
     v->arch.cpu_info = (struct cpu_info *)(v->arch.stack
                                            + STACK_SIZE
                                            - sizeof(struct cpu_info));
+    memset(v->arch.cpu_info, 0, sizeof(*v->arch.cpu_info));
 
     memset(&v->arch.saved_context, 0, sizeof(v->arch.saved_context));
     v->arch.saved_context.sp = (register_t)v->arch.cpu_info;
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v3 02/13] xen/arm64: entry: Use named label in guest_sync
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
  2018-06-12 11:36 ` [PATCH v3 01/13] xen/arm: domain: Zero the per-vCPU cpu_info Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 11:36 ` [PATCH v3 03/13] xen/arm: setup: Check errata for boot CPU later on Julien Grall
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

This will improve readability for future changes.

This is part of XSA-263.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/arm64/entry.S | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index ffa9a1c492..e2344e565f 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -226,11 +226,11 @@ guest_sync:
         mrs     x1, esr_el2
         lsr     x1, x1, #HSR_EC_SHIFT           /* x1 = ESR_EL2.EC */
         cmp     x1, #HSR_EC_HVC64
-        b.ne    1f                              /* Not a HVC skip fastpath. */
+        b.ne    guest_sync_slowpath             /* Not a HVC skip fastpath. */
 
         mrs     x1, esr_el2
         and     x1, x1, #0xffff                 /* Check the immediate [0:16] */
-        cbnz    x1, 1f                          /* should be 0 for HVC #0 */
+        cbnz    x1, guest_sync_slowpath         /* should be 0 for HVC #0 */
 
         /*
          * Fastest path possible for ARM_SMCCC_ARCH_WORKAROUND_1.
@@ -241,7 +241,7 @@ guest_sync:
          * be encoded as an immediate for cmp.
          */
         eor     w0, w0, #ARM_SMCCC_ARCH_WORKAROUND_1_FID
-        cbnz    w0, 1f
+        cbnz    w0, guest_sync_slowpath
 
         /*
          * Clobber both x0 and x1 to prevent leakage. Note that thanks
@@ -250,7 +250,7 @@ guest_sync:
         mov     x1, xzr
         eret
 
-1:
+guest_sync_slowpath:
         /*
          * x0/x1 may have been scratch by the fast path above, so avoid
          * to save them.
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v3 03/13] xen/arm: setup: Check errata for boot CPU later on
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
  2018-06-12 11:36 ` [PATCH v3 01/13] xen/arm: domain: Zero the per-vCPU cpu_info Julien Grall
  2018-06-12 11:36 ` [PATCH v3 02/13] xen/arm64: entry: Use named label in guest_sync Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 11:36 ` [PATCH v3 04/13] xen/arm: Add ARCH_WORKAROUND_2 probing Julien Grall
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

Some errata will rely on the SMCCC version which is detected by
psci_init().

This is part of XSA-263.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/setup.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1d6f6bf37e..ac93de4786 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -171,8 +171,6 @@ static void __init processor_id(void)
     }
 
     processor_setup();
-
-    check_local_cpu_errata();
 }
 
 void dt_unreserved_regions(paddr_t s, paddr_t e,
@@ -779,6 +777,12 @@ void __init start_xen(unsigned long boot_phys_offset,
     printk(XENLOG_INFO "SMP: Allowing %u CPUs\n", cpus);
     nr_cpu_ids = cpus;
 
+    /*
+     * Some errata relies on SMCCC version which is detected by psci_init()
+     * (called from smp_init_cpus()).
+     */
+    check_local_cpu_errata();
+
     init_xen_time();
 
     gic_init();
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v3 04/13] xen/arm: Add ARCH_WORKAROUND_2 probing
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
                   ` (2 preceding siblings ...)
  2018-06-12 11:36 ` [PATCH v3 03/13] xen/arm: setup: Check errata for boot CPU later on Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 16:23   ` Stefano Stabellini
  2018-06-12 11:36 ` [PATCH v3 05/13] xen/arm: Add command line option to control SSBD mitigation Julien Grall
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

As for Spectre variant-2, we rely on SMCCC 1.1 to provide the discovery
mechanism for detecting the SSBD mitigation.

A new capability is also allocated for that purpose, along with a config
option.

This is part of XSA-263.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v3:
        - required should be false when the mitigation is not required
        on a given CPU

    Changes in v2:
        - Add the switch in this patch rather than the next one.
        - s/supported/required/
---
 xen/arch/arm/Kconfig             | 10 +++++++
 xen/arch/arm/cpuerrata.c         | 58 ++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/cpuerrata.h  | 21 +++++++++++++++
 xen/include/asm-arm/cpufeature.h |  3 ++-
 xen/include/asm-arm/smccc.h      |  7 +++++
 5 files changed, 98 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 8174c0c635..0e2d027060 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -73,6 +73,16 @@ config SBSA_VUART_CONSOLE
 	  Allows a guest to use SBSA Generic UART as a console. The
 	  SBSA Generic UART implements a subset of ARM PL011 UART.
 
+config ARM_SSBD
+	bool "Speculative Store Bypass Disable" if EXPERT = "y"
+	depends on HAS_ALTERNATIVE
+	default y
+	help
+	  This enables mitigation of bypassing of previous stores by speculative
+	  loads.
+
+	  If unsure, say Y.
+
 endmenu
 
 menu "ARM errata workaround via the alternative framework"
diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 1baa20654b..1a6130406c 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -235,6 +235,58 @@ static int enable_ic_inv_hardening(void *data)
 
 #endif
 
+#ifdef CONFIG_ARM_SSBD
+
+/*
+ * Assembly code may use the variable directly, so we need to make sure
+ * it fits in a register.
+ */
+DEFINE_PER_CPU_READ_MOSTLY(register_t, ssbd_callback_required);
+
+static bool has_ssbd_mitigation(const struct arm_cpu_capabilities *entry)
+{
+    struct arm_smccc_res res;
+    bool required;
+
+    if ( smccc_ver < SMCCC_VERSION(1, 1) )
+        return false;
+
+    /*
+     * The probe function return value is either negative (unsupported
+     * or mitigated), positive (unaffected), or zero (requires
+     * mitigation). We only need to do anything in the last case.
+     */
+    arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FID,
+                      ARM_SMCCC_ARCH_WORKAROUND_2_FID, &res);
+
+    switch ( (int)res.a0 )
+    {
+    case ARM_SMCCC_NOT_SUPPORTED:
+        return false;
+
+    case ARM_SMCCC_NOT_REQUIRED:
+        return false;
+
+    case ARM_SMCCC_SUCCESS:
+        required = true;
+        break;
+
+    case 1: /* Mitigation not required on this CPU. */
+        required = false;
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        return false;
+    }
+
+    if ( required )
+        this_cpu(ssbd_callback_required) = 1;
+
+    return required;
+}
+#endif
+
 #define MIDR_RANGE(model, min, max)     \
     .matches = is_affected_midr_range,  \
     .midr_model = model,                \
@@ -336,6 +388,12 @@ static const struct arm_cpu_capabilities arm_errata[] = {
         .enable = enable_ic_inv_hardening,
     },
 #endif
+#ifdef CONFIG_ARM_SSBD
+    {
+        .capability = ARM_SSBD,
+        .matches = has_ssbd_mitigation,
+    },
+#endif
     {},
 };
 
diff --git a/xen/include/asm-arm/cpuerrata.h b/xen/include/asm-arm/cpuerrata.h
index 4e45b237c8..e628d3ff56 100644
--- a/xen/include/asm-arm/cpuerrata.h
+++ b/xen/include/asm-arm/cpuerrata.h
@@ -27,9 +27,30 @@ static inline bool check_workaround_##erratum(void)             \
 
 CHECK_WORKAROUND_HELPER(766422, ARM32_WORKAROUND_766422, CONFIG_ARM_32)
 CHECK_WORKAROUND_HELPER(834220, ARM64_WORKAROUND_834220, CONFIG_ARM_64)
+CHECK_WORKAROUND_HELPER(ssbd, ARM_SSBD, CONFIG_ARM_SSBD)
 
 #undef CHECK_WORKAROUND_HELPER
 
+#ifdef CONFIG_ARM_SSBD
+
+#include <asm/current.h>
+
+DECLARE_PER_CPU(register_t, ssbd_callback_required);
+
+static inline bool cpu_require_ssbd_mitigation(void)
+{
+    return this_cpu(ssbd_callback_required);
+}
+
+#else
+
+static inline bool cpu_require_ssbd_mitigation(void)
+{
+    return false;
+}
+
+#endif
+
 #endif /* __ARM_CPUERRATA_H__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index e557a095af..2a5c075d3b 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -43,8 +43,9 @@
 #define SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT 5
 #define SKIP_CTXT_SWITCH_SERROR_SYNC 6
 #define ARM_HARDEN_BRANCH_PREDICTOR 7
+#define ARM_SSBD 8
 
-#define ARM_NCAPS           8
+#define ARM_NCAPS           9
 
 #ifndef __ASSEMBLY__
 
diff --git a/xen/include/asm-arm/smccc.h b/xen/include/asm-arm/smccc.h
index 8342cc33fe..a6804cec99 100644
--- a/xen/include/asm-arm/smccc.h
+++ b/xen/include/asm-arm/smccc.h
@@ -258,7 +258,14 @@ struct arm_smccc_res {
                       ARM_SMCCC_OWNER_ARCH,         \
                       0x8000)
 
+#define ARM_SMCCC_ARCH_WORKAROUND_2_FID             \
+    ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,         \
+                       ARM_SMCCC_CONV_32,           \
+                       ARM_SMCCC_OWNER_ARCH,        \
+                       0x7FFF)
+
 /* SMCCC error codes */
+#define ARM_SMCCC_NOT_REQUIRED          (-2)
 #define ARM_SMCCC_ERR_UNKNOWN_FUNCTION  (-1)
 #define ARM_SMCCC_NOT_SUPPORTED         (-1)
 #define ARM_SMCCC_SUCCESS               (0)
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v3 05/13] xen/arm: Add command line option to control SSBD mitigation
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
                   ` (3 preceding siblings ...)
  2018-06-12 11:36 ` [PATCH v3 04/13] xen/arm: Add ARCH_WORKAROUND_2 probing Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 16:24   ` Stefano Stabellini
  2018-06-12 11:36 ` [PATCH v3 06/13] xen/arm: Add ARCH_WORKAROUND_2 support for guests Julien Grall
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

On a system where the firmware implements ARCH_WORKAROUND_2, it may be
useful to either permanently enable or disable the workaround, for cases
where the user decides that they'd rather not pay the trap overhead and
keep the mitigation permanently on or off instead of switching it on
exception entry/exit. In any case, default to the mitigation being enabled.

The new command line option is implemented as a list of one option, to
match the x86 option and also to allow extending it more easily in the future.

Note that for convenience, the full implementation of the workaround is
done in the .matches callback.

Lastly, an accessor is provided to query the state of the mitigation.

After this patch, there are three methods complementing each other to find the
state of the mitigation:
    - The capability ARM_SSBD indicates the platform is affected by the
      vulnerability. This will also return false if the user decides to
      force-disable the mitigation (spec-ctrl="ssbd=force-disable"). The
      capability is useful for putting shortcuts in place using alternatives.
    - ssbd_state indicates the global state of the mitigation (e.g.
      unknown, force enabled...). The global state is required to report
      the state to a guest.
    - The per-cpu ssbd_callback_required indicates whether a pCPU
      requires calling into EL3 as part of the mitigation. This allows the
      SMC call to be skipped, saving an entry/exit to EL3.

This is part of XSA-263.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v3:
        - Fix typo in the name

    Changes in v2:
        - Move out some code to the previous patch.
        - Update the commit message with more background
---
 docs/misc/xen-command-line.markdown | 18 ++++++++
 xen/arch/arm/cpuerrata.c            | 88 ++++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/cpuerrata.h     | 21 +++++++++
 3 files changed, 120 insertions(+), 7 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 8712a833a2..962028b6ed 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1756,6 +1756,24 @@ enforces the maximum theoretically necessary timeout of 670ms. Any number
 is being interpreted as a custom timeout in milliseconds. Zero or boolean
 false disable the quirk workaround, which is also the default.
 
+### spec-ctrl (Arm)
+> `= List of [ ssbd=force-disable|runtime|force-enable ]`
+
+Controls for speculative execution sidechannel mitigations.
+
+The option `ssbd=` is used to control the state of Speculative Store
+Bypass Disable (SSBD) mitigation.
+
+* `ssbd=force-disable` will keep the mitigation permanently off. The guest
+will not be able to control the state of the mitigation.
+* `ssbd=runtime` will always turn on the mitigation when running in the
+hypervisor context. The guest will be able to turn on/off the mitigation for
+itself by using the firmware interface ARCH\_WORKAROUND\_2.
+* `ssbd=force-enable` will keep the mitigation permanently on. The guest will
+not be able to control the state of the mitigation.
+
+By default SSBD will be mitigated at runtime (i.e `ssbd=runtime`).
+
 ### spec-ctrl (x86)
 > `= List of [ <bool>, xen=<bool>, {pv,hvm,msr-sc,rsb}=<bool>,
 >              bti-thunk=retpoline|lfence|jmp, {ibrs,ibpb,ssbd}=<bool> ]`
diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 1a6130406c..4292008692 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -237,6 +237,41 @@ static int enable_ic_inv_hardening(void *data)
 
 #ifdef CONFIG_ARM_SSBD
 
+enum ssbd_state ssbd_state = ARM_SSBD_RUNTIME;
+
+static int __init parse_spec_ctrl(const char *s)
+{
+    const char *ss;
+    int rc = 0;
+
+    do {
+        ss = strchr(s, ',');
+        if ( !ss )
+            ss = strchr(s, '\0');
+
+        if ( !strncmp(s, "ssbd=", 5) )
+        {
+            s += 5;
+
+            if ( !strncmp(s, "force-disable", ss - s) )
+                ssbd_state = ARM_SSBD_FORCE_DISABLE;
+            else if ( !strncmp(s, "runtime", ss - s) )
+                ssbd_state = ARM_SSBD_RUNTIME;
+            else if ( !strncmp(s, "force-enable", ss - s) )
+                ssbd_state = ARM_SSBD_FORCE_ENABLE;
+            else
+                rc = -EINVAL;
+        }
+        else
+            rc = -EINVAL;
+
+        s = ss + 1;
+    } while ( *ss );
+
+    return rc;
+}
+custom_param("spec-ctrl", parse_spec_ctrl);
+
 /*
  * Assembly code may use the variable directly, so we need to make sure
  * it fits in a register.
@@ -251,20 +286,17 @@ static bool has_ssbd_mitigation(const struct arm_cpu_capabilities *entry)
     if ( smccc_ver < SMCCC_VERSION(1, 1) )
         return false;
 
-    /*
-     * The probe function return value is either negative (unsupported
-     * or mitigated), positive (unaffected), or zero (requires
-     * mitigation). We only need to do anything in the last case.
-     */
     arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FID,
                       ARM_SMCCC_ARCH_WORKAROUND_2_FID, &res);
 
     switch ( (int)res.a0 )
     {
     case ARM_SMCCC_NOT_SUPPORTED:
+        ssbd_state = ARM_SSBD_UNKNOWN;
         return false;
 
     case ARM_SMCCC_NOT_REQUIRED:
+        ssbd_state = ARM_SSBD_MITIGATED;
         return false;
 
     case ARM_SMCCC_SUCCESS:
@@ -280,8 +312,49 @@ static bool has_ssbd_mitigation(const struct arm_cpu_capabilities *entry)
         return false;
     }
 
-    if ( required )
-        this_cpu(ssbd_callback_required) = 1;
+    switch ( ssbd_state )
+    {
+    case ARM_SSBD_FORCE_DISABLE:
+    {
+        static bool once = true;
+
+        if ( once )
+            printk("%s disabled from command-line\n", entry->desc);
+        once = false;
+
+        arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 0, NULL);
+        required = false;
+
+        break;
+    }
+
+    case ARM_SSBD_RUNTIME:
+        if ( required )
+        {
+            this_cpu(ssbd_callback_required) = 1;
+            arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 1, NULL);
+        }
+
+        break;
+
+    case ARM_SSBD_FORCE_ENABLE:
+    {
+        static bool once = true;
+
+        if ( once )
+            printk("%s forced from command-line\n", entry->desc);
+        once = false;
+
+        arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 1, NULL);
+        required = true;
+
+        break;
+    }
+
+    default:
+        ASSERT_UNREACHABLE();
+        return false;
+    }
 
     return required;
 }
@@ -390,6 +463,7 @@ static const struct arm_cpu_capabilities arm_errata[] = {
 #endif
 #ifdef CONFIG_ARM_SSBD
     {
+        .desc = "Speculative Store Bypass Disabled",
         .capability = ARM_SSBD,
         .matches = has_ssbd_mitigation,
     },
diff --git a/xen/include/asm-arm/cpuerrata.h b/xen/include/asm-arm/cpuerrata.h
index e628d3ff56..55ddfda272 100644
--- a/xen/include/asm-arm/cpuerrata.h
+++ b/xen/include/asm-arm/cpuerrata.h
@@ -31,10 +31,26 @@ CHECK_WORKAROUND_HELPER(ssbd, ARM_SSBD, CONFIG_ARM_SSBD)
 
 #undef CHECK_WORKAROUND_HELPER
 
+enum ssbd_state
+{
+    ARM_SSBD_UNKNOWN,
+    ARM_SSBD_FORCE_DISABLE,
+    ARM_SSBD_RUNTIME,
+    ARM_SSBD_FORCE_ENABLE,
+    ARM_SSBD_MITIGATED,
+};
+
 #ifdef CONFIG_ARM_SSBD
 
 #include <asm/current.h>
 
+extern enum ssbd_state ssbd_state;
+
+static inline enum ssbd_state get_ssbd_state(void)
+{
+    return ssbd_state;
+}
+
 DECLARE_PER_CPU(register_t, ssbd_callback_required);
 
 static inline bool cpu_require_ssbd_mitigation(void)
@@ -49,6 +65,11 @@ static inline bool cpu_require_ssbd_mitigation(void)
     return false;
 }
 
+static inline enum ssbd_state get_ssbd_state(void)
+{
+    return ARM_SSBD_UNKNOWN;
+}
+
 #endif
 
 #endif /* __ARM_CPUERRATA_H__ */
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v3 06/13] xen/arm: Add ARCH_WORKAROUND_2 support for guests
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
                   ` (4 preceding siblings ...)
  2018-06-12 11:36 ` [PATCH v3 05/13] xen/arm: Add command line option to control SSBD mitigation Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 11:36 ` [PATCH v3 07/13] xen/arm: Simplify alternative patching of non-writable region Julien Grall
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

In order to offer ARCH_WORKAROUND_2 support to guests, we need to track the
state of the workaround per-vCPU. The field 'pad' in cpu_info is now
repurposed to store flags easily accessible in assembly.

As the hypervisor will always run with the workaround enabled, we may
need to enable (on guest exit) or disable (on guest entry) the
workaround.

A follow-up patch will add a fast path for the workaround for arm64 guests.

Note that check_workaround_ssbd() is used instead of ssbd_get_state()
because the former is implemented using an alternative. Therefore the code
will be shortcut on affected platforms.

This is part of XSA-263.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v3:
        - Add Stefano's reviewed-by

    Changes in v2:
        - Fix the condition in need_ssbd_flip()
---
 xen/arch/arm/domain.c         |  8 ++++++++
 xen/arch/arm/traps.c          | 20 ++++++++++++++++++++
 xen/arch/arm/vsmc.c           | 37 +++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/current.h |  6 +++++-
 4 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 5a2a9a6b83..4baecc2447 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -21,6 +21,7 @@
 #include <xen/wait.h>
 
 #include <asm/alternative.h>
+#include <asm/cpuerrata.h>
 #include <asm/cpufeature.h>
 #include <asm/current.h>
 #include <asm/event.h>
@@ -572,6 +573,13 @@ int vcpu_initialise(struct vcpu *v)
     if ( (rc = vcpu_vtimer_init(v)) != 0 )
         goto fail;
 
+    /*
+     * The workaround 2 (i.e SSBD mitigation) is enabled by default if
+     * supported.
+     */
+    if ( get_ssbd_state() == ARM_SSBD_RUNTIME )
+        v->arch.cpu_info->flags |= CPUINFO_WORKAROUND_2_FLAG;
+
     return rc;
 
 fail:
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 5c18e918b0..315fc61f77 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2011,10 +2011,23 @@ inject_abt:
         inject_iabt_exception(regs, gva, hsr.len);
 }
 
+static inline bool needs_ssbd_flip(struct vcpu *v)
+{
+    if ( !check_workaround_ssbd() )
+        return false;
+
+    return !(v->arch.cpu_info->flags & CPUINFO_WORKAROUND_2_FLAG) &&
+             cpu_require_ssbd_mitigation();
+}
+
 static void enter_hypervisor_head(struct cpu_user_regs *regs)
 {
     if ( guest_mode(regs) )
     {
+        /* If the guest has disabled the workaround, bring it back on. */
+        if ( needs_ssbd_flip(current) )
+            arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 1, NULL);
+
         /*
          * If we pended a virtual abort, preserve it until it gets cleared.
          * See ARM ARM DDI 0487A.j D1.14.3 (Virtual Interrupts) for details,
@@ -2260,6 +2273,13 @@ void leave_hypervisor_tail(void)
              */
             SYNCHRONIZE_SERROR(SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT);
 
+            /*
+             * The hypervisor runs with the workaround always present.
+             * If the guest wants it disabled, so be it...
+             */
+            if ( needs_ssbd_flip(current) )
+                arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 0, NULL);
+
             return;
         }
         local_irq_enable();
diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
index 40a80d5760..c4ccae6030 100644
--- a/xen/arch/arm/vsmc.c
+++ b/xen/arch/arm/vsmc.c
@@ -18,6 +18,7 @@
 #include <xen/lib.h>
 #include <xen/types.h>
 #include <public/arch-arm/smccc.h>
+#include <asm/cpuerrata.h>
 #include <asm/cpufeature.h>
 #include <asm/monitor.h>
 #include <asm/regs.h>
@@ -104,6 +105,23 @@ static bool handle_arch(struct cpu_user_regs *regs)
             if ( cpus_have_cap(ARM_HARDEN_BRANCH_PREDICTOR) )
                 ret = 0;
             break;
+        case ARM_SMCCC_ARCH_WORKAROUND_2_FID:
+            switch ( get_ssbd_state() )
+            {
+            case ARM_SSBD_UNKNOWN:
+            case ARM_SSBD_FORCE_DISABLE:
+                break;
+
+            case ARM_SSBD_RUNTIME:
+                ret = ARM_SMCCC_SUCCESS;
+                break;
+
+            case ARM_SSBD_FORCE_ENABLE:
+            case ARM_SSBD_MITIGATED:
+                ret = ARM_SMCCC_NOT_REQUIRED;
+                break;
+            }
+            break;
         }
 
         set_user_reg(regs, 0, ret);
@@ -114,6 +132,25 @@ static bool handle_arch(struct cpu_user_regs *regs)
     case ARM_SMCCC_ARCH_WORKAROUND_1_FID:
         /* No return value */
         return true;
+
+    case ARM_SMCCC_ARCH_WORKAROUND_2_FID:
+    {
+        bool enable = (uint32_t)get_user_reg(regs, 1);
+
+        /*
+         * ARM_WORKAROUND_2_FID should only be called when mitigation
+         * state can be changed at runtime.
+         */
+        if ( unlikely(get_ssbd_state() != ARM_SSBD_RUNTIME) )
+            return true;
+
+        if ( enable )
+            get_cpu_info()->flags |= CPUINFO_WORKAROUND_2_FLAG;
+        else
+            get_cpu_info()->flags &= ~CPUINFO_WORKAROUND_2_FLAG;
+
+        return true;
+    }
     }
 
     return false;
diff --git a/xen/include/asm-arm/current.h b/xen/include/asm-arm/current.h
index 7a0971fdea..f9819b34fc 100644
--- a/xen/include/asm-arm/current.h
+++ b/xen/include/asm-arm/current.h
@@ -7,6 +7,10 @@
 #include <asm/percpu.h>
 #include <asm/processor.h>
 
+/* Tell whether the guest vCPU enabled Workaround 2 (i.e. Spectre variant 4) */
+#define CPUINFO_WORKAROUND_2_FLAG_SHIFT   0
+#define CPUINFO_WORKAROUND_2_FLAG (_AC(1, U) << CPUINFO_WORKAROUND_2_FLAG_SHIFT)
+
 #ifndef __ASSEMBLY__
 
 struct vcpu;
@@ -21,7 +25,7 @@ DECLARE_PER_CPU(struct vcpu *, curr_vcpu);
 struct cpu_info {
     struct cpu_user_regs guest_cpu_user_regs;
     unsigned long elr;
-    unsigned int pad;
+    uint32_t flags;
 };
 
 static inline struct cpu_info *get_cpu_info(void)
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH v3 07/13] xen/arm: Simplify alternative patching of non-writable region
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
                   ` (5 preceding siblings ...)
  2018-06-12 11:36 ` [PATCH v3 06/13] xen/arm: Add ARCH_WORKAROUND_2 support for guests Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 21:17   ` Konrad Rzeszutek Wilk
  2018-06-12 11:36 ` [PATCH v3 08/13] xen/arm: alternatives: Add dynamic patching feature Julien Grall
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

During the MMU setup process, Xen sets the SCTLR_EL2.WXN
(Write permission implies XN) bit. Because of that, the alternative code
has to re-map the region at a different (writable) virtual address in
order to modify the text section.

At the moment, the function patching the code is only aware of the
re-mapped region. This requires the caller to poke at Xen internals in
order to keep functions such as is_active_kernel_text() working.

All those interactions with Xen internals can be removed by passing the
offset between the patched region and the writable region used to update
the instructions.

This simplification will also make it easier to integrate dynamic patching
in a follow-up patch. Indeed, the callback address should live in the
original (executable) region, not in the re-mapped one, which is writable
but not executable.

This is part of XSA-263.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

    Changes in v3:
        - Add stefano's reviewed-by

    Changes in v2:
        - Add commit message
        - Remove comment in the code that does not make sense anymore
---
 xen/arch/arm/alternative.c | 42 +++++++++++++-----------------------------
 1 file changed, 13 insertions(+), 29 deletions(-)

diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
index 9ffdc475d6..936cf04956 100644
--- a/xen/arch/arm/alternative.c
+++ b/xen/arch/arm/alternative.c
@@ -97,12 +97,16 @@ static u32 get_alt_insn(const struct alt_instr *alt,
 /*
  * The region patched should be read-write to allow __apply_alternatives
  * to replacing the instructions when necessary.
+ *
+ * @update_offset: Offset between the region patched and the writable
+ * region for the update. 0 if the patched region is writable.
  */
-static int __apply_alternatives(const struct alt_region *region)
+static int __apply_alternatives(const struct alt_region *region,
+                                paddr_t update_offset)
 {
     const struct alt_instr *alt;
-    const u32 *replptr;
-    u32 *origptr;
+    const u32 *replptr, *origptr;
+    u32 *updptr;
 
     printk(XENLOG_INFO "alternatives: Patching with alt table %p -> %p\n",
            region->begin, region->end);
@@ -118,6 +122,7 @@ static int __apply_alternatives(const struct alt_region *region)
         BUG_ON(alt->alt_len != alt->orig_len);
 
         origptr = ALT_ORIG_PTR(alt);
+        updptr = (void *)origptr + update_offset;
         replptr = ALT_REPL_PTR(alt);
 
         nr_inst = alt->alt_len / sizeof(insn);
@@ -125,7 +130,7 @@ static int __apply_alternatives(const struct alt_region *region)
         for ( i = 0; i < nr_inst; i++ )
         {
             insn = get_alt_insn(alt, origptr + i, replptr + i);
-            *(origptr + i) = cpu_to_le32(insn);
+            *(updptr + i) = cpu_to_le32(insn);
         }
 
         /* Ensure the new instructions reached the memory and nuke */
@@ -162,9 +167,6 @@ static int __apply_alternatives_multi_stop(void *unused)
         paddr_t xen_size = _end - _start;
         unsigned int xen_order = get_order_from_bytes(xen_size);
         void *xenmap;
-        struct virtual_region patch_region = {
-            .list = LIST_HEAD_INIT(patch_region.list),
-        };
 
         BUG_ON(patched);
 
@@ -177,31 +179,13 @@ static int __apply_alternatives_multi_stop(void *unused)
         /* Re-mapping Xen is not expected to fail during boot. */
         BUG_ON(!xenmap);
 
-        /*
-         * If we generate a new branch instruction, the target will be
-         * calculated in this re-mapped Xen region. So we have to register
-         * this re-mapped Xen region as a virtual region temporarily.
-         */
-        patch_region.start = xenmap;
-        patch_region.end = xenmap + xen_size;
-        register_virtual_region(&patch_region);
+        region.begin = __alt_instructions;
+        region.end = __alt_instructions_end;
 
-        /*
-         * Find the virtual address of the alternative region in the new
-         * mapping.
-         * alt_instr contains relative offset, so the function
-         * __apply_alternatives will patch in the re-mapped version of
-         * Xen.
-         */
-        region.begin = (void *)__alt_instructions - (void *)_start + xenmap;
-        region.end = (void *)__alt_instructions_end - (void *)_start + xenmap;
-
-        ret = __apply_alternatives(&region);
+        ret = __apply_alternatives(&region, xenmap - (void *)_start);
         /* The patching is not expected to fail during boot. */
         BUG_ON(ret != 0);
 
-        unregister_virtual_region(&patch_region);
-
         vunmap(xenmap);
 
         /* Barriers provided by the cache flushing */
@@ -235,7 +219,7 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en
         .end = end,
     };
 
-    return __apply_alternatives(&region);
+    return __apply_alternatives(&region, 0);
 }
 
 /*
-- 
2.11.0



* [PATCH v3 08/13] xen/arm: alternatives: Add dynamic patching feature
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
                   ` (6 preceding siblings ...)
  2018-06-12 11:36 ` [PATCH v3 07/13] xen/arm: Simplify alternative patching of non-writable region Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 11:36 ` [PATCH v3 09/13] xen/arm64: Add generic assembly macros Julien Grall
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

This is based on the Linux commit dea5e2a4c5bc "arm64: alternatives: Add
dynamic patching feature" written by Marc Zyngier:

    We've so far relied on a patching infrastructure that only gave us
    a single alternative, without any way to provide a range of potential
    replacement instructions. For a single feature, this is an all or
    nothing thing.

    It would be interesting to have a more flexible, finer-grained way of
    patching the kernel though, where we could dynamically tune the code
    that gets injected.

    In order to achieve this, let's introduce a new form of dynamic patching,
    associating a callback to a patching site. This callback gets source and
    target locations of the patching request, as well as the number of
    instructions to be patched.

    Dynamic patching is declared with the new ALTERNATIVE_CB and alternative_cb
    directives:
                    asm volatile(ALTERNATIVE_CB("mov %0, #0\n", callback)
                                 : "r" (v));
    or

                    alternative_cb callback
                            mov x0, #0
                    alternative_cb_end

    where callback is the C function computing the alternative.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

This is part of XSA-263.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Fix typo in the commit message
        - Add Stefano's acked-by
---
 xen/arch/arm/alternative.c        | 48 +++++++++++++++++++++++++++++----------
 xen/include/asm-arm/alternative.h | 44 +++++++++++++++++++++++++++++++----
 2 files changed, 75 insertions(+), 17 deletions(-)

diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
index 936cf04956..52ed7edf69 100644
--- a/xen/arch/arm/alternative.c
+++ b/xen/arch/arm/alternative.c
@@ -30,6 +30,8 @@
 #include <asm/byteorder.h>
 #include <asm/cpufeature.h>
 #include <asm/insn.h>
+/* XXX: Move ARCH_PATCH_INSN_SIZE out of livepatch.h */
+#include <asm/livepatch.h>
 #include <asm/page.h>
 
 /* Override macros from asm/page.h to make them work with mfn_t */
@@ -94,6 +96,23 @@ static u32 get_alt_insn(const struct alt_instr *alt,
     return insn;
 }
 
+static void patch_alternative(const struct alt_instr *alt,
+                              const uint32_t *origptr,
+                              uint32_t *updptr, int nr_inst)
+{
+    const uint32_t *replptr;
+    unsigned int i;
+
+    replptr = ALT_REPL_PTR(alt);
+    for ( i = 0; i < nr_inst; i++ )
+    {
+        uint32_t insn;
+
+        insn = get_alt_insn(alt, origptr + i, replptr + i);
+        updptr[i] = cpu_to_le32(insn);
+    }
+}
+
 /*
  * The region patched should be read-write to allow __apply_alternatives
  * to replacing the instructions when necessary.
@@ -105,33 +124,38 @@ static int __apply_alternatives(const struct alt_region *region,
                                 paddr_t update_offset)
 {
     const struct alt_instr *alt;
-    const u32 *replptr, *origptr;
+    const u32 *origptr;
     u32 *updptr;
+    alternative_cb_t alt_cb;
 
     printk(XENLOG_INFO "alternatives: Patching with alt table %p -> %p\n",
            region->begin, region->end);
 
     for ( alt = region->begin; alt < region->end; alt++ )
     {
-        u32 insn;
-        int i, nr_inst;
+        int nr_inst;
 
-        if ( !cpus_have_cap(alt->cpufeature) )
+        /* Use ARM_CB_PATCH as an unconditional patch */
+        if ( alt->cpufeature < ARM_CB_PATCH &&
+             !cpus_have_cap(alt->cpufeature) )
             continue;
 
-        BUG_ON(alt->alt_len != alt->orig_len);
+        if ( alt->cpufeature == ARM_CB_PATCH )
+            BUG_ON(alt->alt_len != 0);
+        else
+            BUG_ON(alt->alt_len != alt->orig_len);
 
         origptr = ALT_ORIG_PTR(alt);
         updptr = (void *)origptr + update_offset;
-        replptr = ALT_REPL_PTR(alt);
 
-        nr_inst = alt->alt_len / sizeof(insn);
+        nr_inst = alt->orig_len / ARCH_PATCH_INSN_SIZE;
 
-        for ( i = 0; i < nr_inst; i++ )
-        {
-            insn = get_alt_insn(alt, origptr + i, replptr + i);
-            *(updptr + i) = cpu_to_le32(insn);
-        }
+        if ( alt->cpufeature < ARM_CB_PATCH )
+            alt_cb = patch_alternative;
+        else
+            alt_cb = ALT_REPL_PTR(alt);
+
+        alt_cb(alt, origptr, updptr, nr_inst);
 
         /* Ensure the new instructions reached the memory and nuke */
         clean_and_invalidate_dcache_va_range(origptr,
diff --git a/xen/include/asm-arm/alternative.h b/xen/include/asm-arm/alternative.h
index 4e33d1cdf7..9b4b02811b 100644
--- a/xen/include/asm-arm/alternative.h
+++ b/xen/include/asm-arm/alternative.h
@@ -3,6 +3,8 @@
 
 #include <asm/cpufeature.h>
 
+#define ARM_CB_PATCH ARM_NCAPS
+
 #ifndef __ASSEMBLY__
 
 #include <xen/init.h>
@@ -18,16 +20,24 @@ struct alt_instr {
 };
 
 /* Xen: helpers used by common code. */
-#define __ALT_PTR(a,f)		((u32 *)((void *)&(a)->f + (a)->f))
+#define __ALT_PTR(a,f)		((void *)&(a)->f + (a)->f)
 #define ALT_ORIG_PTR(a)		__ALT_PTR(a, orig_offset)
 #define ALT_REPL_PTR(a)		__ALT_PTR(a, alt_offset)
 
+typedef void (*alternative_cb_t)(const struct alt_instr *alt,
+				 const uint32_t *origptr, uint32_t *updptr,
+				 int nr_inst);
+
 void __init apply_alternatives_all(void);
 int apply_alternatives(const struct alt_instr *start, const struct alt_instr *end);
 
-#define ALTINSTR_ENTRY(feature)						      \
+#define ALTINSTR_ENTRY(feature, cb)					      \
 	" .word 661b - .\n"				/* label           */ \
+	" .if " __stringify(cb) " == 0\n"				      \
 	" .word 663f - .\n"				/* new instruction */ \
+	" .else\n"							      \
+	" .word " __stringify(cb) "- .\n"		/* callback */	      \
+	" .endif\n"							      \
 	" .hword " __stringify(feature) "\n"		/* feature bit     */ \
 	" .byte 662b-661b\n"				/* source len      */ \
 	" .byte 664f-663f\n"				/* replacement len */
@@ -45,15 +55,18 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en
  * but most assemblers die if insn1 or insn2 have a .inst. This should
  * be fixed in a binutils release posterior to 2.25.51.0.2 (anything
  * containing commit 4e4d08cf7399b606 or c1baaddf8861).
+ *
+ * Alternatives with callbacks do not generate replacement instructions.
  */
-#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled)	\
+#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled, cb)	\
 	".if "__stringify(cfg_enabled)" == 1\n"				\
 	"661:\n\t"							\
 	oldinstr "\n"							\
 	"662:\n"							\
 	".pushsection .altinstructions,\"a\"\n"				\
-	ALTINSTR_ENTRY(feature)						\
+	ALTINSTR_ENTRY(feature,cb)					\
 	".popsection\n"							\
+	" .if " __stringify(cb) " == 0\n"				\
 	".pushsection .altinstr_replacement, \"a\"\n"			\
 	"663:\n\t"							\
 	newinstr "\n"							\
@@ -61,11 +74,17 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en
 	".popsection\n\t"						\
 	".org	. - (664b-663b) + (662b-661b)\n\t"			\
 	".org	. - (662b-661b) + (664b-663b)\n"			\
+	".else\n\t"							\
+	"663:\n\t"							\
+	"664:\n\t"							\
+	".endif\n"							\
 	".endif\n"
 
 #define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...)	\
-	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))
+	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg), 0)
 
+#define ALTERNATIVE_CB(oldinstr, cb) \
+	__ALTERNATIVE_CFG(oldinstr, "NOT_AN_INSTRUCTION", ARM_CB_PATCH, 1, cb)
 #else
 
 #include <asm/asm_defns.h>
@@ -126,6 +145,14 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en
 663:
 .endm
 
+.macro alternative_cb cb
+	.set .Lasm_alt_mode, 0
+	.pushsection .altinstructions, "a"
+	altinstruction_entry 661f, \cb, ARM_CB_PATCH, 662f-661f, 0
+	.popsection
+661:
+.endm
+
 /*
  * Complete an alternative code sequence.
  */
@@ -135,6 +162,13 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en
 	.org	. - (662b-661b) + (664b-663b)
 .endm
 
+/*
+ * Callback-based alternative epilogue
+ */
+.macro alternative_cb_end
+662:
+.endm
+
 #define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)	\
 	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
 
-- 
2.11.0



* [PATCH v3 09/13] xen/arm64: Add generic assembly macros
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
                   ` (7 preceding siblings ...)
  2018-06-12 11:36 ` [PATCH v3 08/13] xen/arm: alternatives: Add dynamic patching feature Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 11:36 ` [PATCH v3 10/13] xen/arm64: Implement a fast path for handling SMCCC_ARCH_WORKAROUND_2 Julien Grall
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

Add assembly macros to simplify assembly code:
    - adr_cpu_info: Get the address of the current cpu_info structure
    - ldr_this_cpu: Load a per-cpu value

This is part of XSA-263.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/include/asm-arm/arm64/macros.h | 25 +++++++++++++++++++++++++
 xen/include/asm-arm/macros.h       |  2 +-
 2 files changed, 26 insertions(+), 1 deletion(-)
 create mode 100644 xen/include/asm-arm/arm64/macros.h

diff --git a/xen/include/asm-arm/arm64/macros.h b/xen/include/asm-arm/arm64/macros.h
new file mode 100644
index 0000000000..9c5e676b37
--- /dev/null
+++ b/xen/include/asm-arm/arm64/macros.h
@@ -0,0 +1,25 @@
+#ifndef __ASM_ARM_ARM64_MACROS_H
+#define __ASM_ARM_ARM64_MACROS_H
+
+    /*
+     * @dst: Result of get_cpu_info()
+     */
+    .macro  adr_cpu_info, dst
+    add     \dst, sp, #STACK_SIZE
+    and     \dst, \dst, #~(STACK_SIZE - 1)
+    sub     \dst, \dst, #CPUINFO_sizeof
+    .endm
+
+    /*
+     * @dst: Result of READ_ONCE(per_cpu(sym, smp_processor_id()))
+     * @sym: The name of the per-cpu variable
+     * @tmp: scratch register
+     */
+    .macro  ldr_this_cpu, dst, sym, tmp
+    ldr     \dst, =per_cpu__\sym
+    mrs     \tmp, tpidr_el2
+    ldr     \dst, [\dst, \tmp]
+    .endm
+
+#endif /* __ASM_ARM_ARM64_MACROS_H */
+
diff --git a/xen/include/asm-arm/macros.h b/xen/include/asm-arm/macros.h
index 5d837cb38b..1d4bb41d15 100644
--- a/xen/include/asm-arm/macros.h
+++ b/xen/include/asm-arm/macros.h
@@ -8,7 +8,7 @@
 #if defined (CONFIG_ARM_32)
 # include <asm/arm32/macros.h>
 #elif defined(CONFIG_ARM_64)
-/* No specific ARM64 macros for now */
+# include <asm/arm64/macros.h>
 #else
 # error "unknown ARM variant"
 #endif
-- 
2.11.0



* [PATCH v3 10/13] xen/arm64: Implement a fast path for handling SMCCC_ARCH_WORKAROUND_2
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
                   ` (8 preceding siblings ...)
  2018-06-12 11:36 ` [PATCH v3 09/13] xen/arm64: Add generic assembly macros Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 11:36 ` [PATCH v3 11/13] xen/arm: Kconfig: Move HARDEN_BRANCH_PREDICTOR under "Architecture features" Julien Grall
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

The ARM_SMCCC_ARCH_WORKAROUND_2 function will be called by the guest to
enable/disable the SSBD mitigation, so we want the handling to be as
fast as possible.

The new sequence will forward guest's ARCH_WORKAROUND_2 call to EL3 and
also track the state of the workaround per-vCPU.

Note that since we need to execute branches, this always executes after
the Spectre-v2 mitigation.

This code is based on KVM counterpart "arm64: KVM: Handle guest's
ARCH_WORKAROUND_2 requests" written by Marc Zyngier.

This is part of XSA-263.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v3:
        - Add Stefano's reviewed-by

    Changes in v2:
        - Combine the 2 consecutive eor instructions in a single one.
---
 xen/arch/arm/arm64/asm-offsets.c |  2 ++
 xen/arch/arm/arm64/entry.S       | 42 +++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/cpuerrata.c         | 18 +++++++++++++++++
 3 files changed, 61 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/arm64/asm-offsets.c b/xen/arch/arm/arm64/asm-offsets.c
index ce24e44473..f5c696d092 100644
--- a/xen/arch/arm/arm64/asm-offsets.c
+++ b/xen/arch/arm/arm64/asm-offsets.c
@@ -22,6 +22,7 @@
 void __dummy__(void)
 {
    OFFSET(UREGS_X0, struct cpu_user_regs, x0);
+   OFFSET(UREGS_X1, struct cpu_user_regs, x1);
    OFFSET(UREGS_LR, struct cpu_user_regs, lr);
 
    OFFSET(UREGS_SP, struct cpu_user_regs, sp);
@@ -45,6 +46,7 @@ void __dummy__(void)
    BLANK();
 
    DEFINE(CPUINFO_sizeof, sizeof(struct cpu_info));
+   OFFSET(CPUINFO_flags, struct cpu_info, flags);
 
    OFFSET(VCPU_arch_saved_context, struct vcpu, arch.saved_context);
 
diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index e2344e565f..97b05f53ea 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -1,4 +1,6 @@
 #include <asm/asm_defns.h>
+#include <asm/current.h>
+#include <asm/macros.h>
 #include <asm/regs.h>
 #include <asm/alternative.h>
 #include <asm/smccc.h>
@@ -241,7 +243,7 @@ guest_sync:
          * be encoded as an immediate for cmp.
          */
         eor     w0, w0, #ARM_SMCCC_ARCH_WORKAROUND_1_FID
-        cbnz    w0, guest_sync_slowpath
+        cbnz    w0, check_wa2
 
         /*
          * Clobber both x0 and x1 to prevent leakage. Note that thanks
@@ -250,6 +252,44 @@ guest_sync:
         mov     x1, xzr
         eret
 
+check_wa2:
+        /* ARM_SMCCC_ARCH_WORKAROUND_2 handling */
+        eor     w0, w0, #(ARM_SMCCC_ARCH_WORKAROUND_1_FID ^ ARM_SMCCC_ARCH_WORKAROUND_2_FID)
+        cbnz    w0, guest_sync_slowpath
+#ifdef CONFIG_ARM_SSBD
+alternative_cb arm_enable_wa2_handling
+        b       wa2_end
+alternative_cb_end
+        /* Sanitize the argument */
+        mov     x0, #-(UREGS_kernel_sizeof - UREGS_X1)  /* x0 := offset of guest's x1 on the stack */
+        ldr     x1, [sp, x0]                            /* Load guest's x1 */
+        cmp     w1, wzr
+        cset    x1, ne
+
+        /*
+         * Update the guest flag. At this stage sp point after the field
+         * guest_cpu_user_regs in cpu_info.
+         */
+        adr_cpu_info x2
+        ldr     x0, [x2, #CPUINFO_flags]
+        bfi     x0, x1, #CPUINFO_WORKAROUND_2_FLAG_SHIFT, #1
+        str     x0, [x2, #CPUINFO_flags]
+
+        /* Check that we actually need to perform the call */
+        ldr_this_cpu x0, ssbd_callback_required, x2
+        cbz     x0, wa2_end
+
+        mov     w0, #ARM_SMCCC_ARCH_WORKAROUND_2_FID
+        smc     #0
+
+wa2_end:
+        /* Don't leak data from the SMC call */
+        mov     x1, xzr
+        mov     x2, xzr
+        mov     x3, xzr
+#endif /* !CONFIG_ARM_SSBD */
+        mov     x0, xzr
+        eret
 guest_sync_slowpath:
         /*
          * x0/x1 may have been scratch by the fast path above, so avoid
diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 4292008692..7455f09f26 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -7,6 +7,7 @@
 #include <xen/warning.h>
 #include <asm/cpufeature.h>
 #include <asm/cpuerrata.h>
+#include <asm/insn.h>
 #include <asm/psci.h>
 
 /* Override macros from asm/page.h to make them work with mfn_t */
@@ -272,6 +273,23 @@ static int __init parse_spec_ctrl(const char *s)
 }
 custom_param("spec-ctrl", parse_spec_ctrl);
 
+/* Arm64 only for now as for Arm32 the workaround is currently handled in C. */
+#ifdef CONFIG_ARM_64
+void __init arm_enable_wa2_handling(const struct alt_instr *alt,
+                                    const uint32_t *origptr,
+                                    uint32_t *updptr, int nr_inst)
+{
+    BUG_ON(nr_inst != 1);
+
+    /*
+     * Only allow mitigation on guest ARCH_WORKAROUND_2 if the SSBD
+     * state allows it to be flipped.
+     */
+    if ( get_ssbd_state() == ARM_SSBD_RUNTIME )
+        *updptr = aarch64_insn_gen_nop();
+}
+#endif
+
 /*
  * Assembly code may use the variable directly, so we need to make sure
  * it fits in a register.
-- 
2.11.0



* [PATCH v3 11/13] xen/arm: Kconfig: Move HARDEN_BRANCH_PREDICTOR under "Architecture features"
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
                   ` (9 preceding siblings ...)
  2018-06-12 11:36 ` [PATCH v3 10/13] xen/arm64: Implement a fast path for handling SMCCC_ARCH_WORKAROUND_2 Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 11:36 ` [PATCH v3 12/13] xen/arm: smccc: Fix indentation in ARM_SMCCC_ARCH_WORKAROUND_1_FID Julien Grall
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

At the moment, HARDEN_BRANCH_PREDICTOR is not in any menu section, making
it impossible for the user to unselect it.

Also, it looks like 'EXPERT = "y"' is required for the option to be
shown in expert mode.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/Kconfig | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 0e2d027060..4212c58171 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -83,6 +83,23 @@ config ARM_SSBD
 
 	  If unsure, say Y.
 
+config HARDEN_BRANCH_PREDICTOR
+	bool "Harden the branch predictor against aliasing attacks" if EXPERT = "y"
+	default y
+	help
+	  Speculation attacks against some high-performance processors rely on
+	  being able to manipulate the branch predictor for a victim context by
+	  executing aliasing branches in the attacker context.  Such attacks
+	  can be partially mitigated against by clearing internal branch
+	  predictor state and limiting the prediction logic in some situations.
+
+	  This config option will take CPU-specific actions to harden the
+	  branch predictor against aliasing attacks and may rely on specific
+	  instruction sequences or control bits being set by the system
+	  firmware.
+
+	  If unsure, say Y.
+
 endmenu
 
 menu "ARM errata workaround via the alternative framework"
@@ -197,23 +214,6 @@ config ARM64_ERRATUM_834220
 
 endmenu
 
-config HARDEN_BRANCH_PREDICTOR
-	bool "Harden the branch predictor against aliasing attacks" if EXPERT
-	default y
-	help
-	  Speculation attacks against some high-performance processors rely on
-	  being able to manipulate the branch predictor for a victim context by
-	  executing aliasing branches in the attacker context.  Such attacks
-	  can be partially mitigated against by clearing internal branch
-	  predictor state and limiting the prediction logic in some situations.
-
-	  This config option will take CPU-specific actions to harden the
-	  branch predictor against aliasing attacks and may rely on specific
-	  instruction sequences or control bits being set by the system
-	  firmware.
-
-	  If unsure, say Y.
-
 config ARM64_HARDEN_BRANCH_PREDICTOR
     def_bool y if ARM_64 && HARDEN_BRANCH_PREDICTOR
 
-- 
2.11.0



* [PATCH v3 12/13] xen/arm: smccc: Fix indentation in ARM_SMCCC_ARCH_WORKAROUND_1_FID
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
                   ` (10 preceding siblings ...)
  2018-06-12 11:36 ` [PATCH v3 11/13] xen/arm: Kconfig: Move HARDEN_BRANCH_PREDICTOR under "Architecture features" Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-12 11:36 ` [PATCH v3 13/13] xen/arm: Avoid to use current everywhere in enter_hypervisor_head Julien Grall
  2018-06-22  2:01 ` [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
  13 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's acked-by
---
 xen/include/asm-arm/smccc.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-arm/smccc.h b/xen/include/asm-arm/smccc.h
index a6804cec99..74c13f8419 100644
--- a/xen/include/asm-arm/smccc.h
+++ b/xen/include/asm-arm/smccc.h
@@ -254,9 +254,9 @@ struct arm_smccc_res {
 
 #define ARM_SMCCC_ARCH_WORKAROUND_1_FID             \
     ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,         \
-                      ARM_SMCCC_CONV_32,            \
-                      ARM_SMCCC_OWNER_ARCH,         \
-                      0x8000)
+                       ARM_SMCCC_CONV_32,           \
+                       ARM_SMCCC_OWNER_ARCH,        \
+                       0x8000)
 
 #define ARM_SMCCC_ARCH_WORKAROUND_2_FID             \
     ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,         \
-- 
2.11.0



* [PATCH v3 13/13] xen/arm: Avoid to use current everywhere in enter_hypervisor_head
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
                   ` (11 preceding siblings ...)
  2018-06-12 11:36 ` [PATCH v3 12/13] xen/arm: smccc: Fix indentation in ARM_SMCCC_ARCH_WORKAROUND_1_FID Julien Grall
@ 2018-06-12 11:36 ` Julien Grall
  2018-06-22  2:01 ` [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
  13 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-12 11:36 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, Julien Grall, sstabellini

Using current is fairly expensive, so save it into a local variable.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/traps.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 315fc61f77..bde303261e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2024,8 +2024,10 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
 {
     if ( guest_mode(regs) )
     {
+        struct vcpu *v = current;
+
         /* If the guest has disabled the workaround, bring it back on. */
-        if ( needs_ssbd_flip(current) )
+        if ( needs_ssbd_flip(v) )
             arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 1, NULL);
 
         /*
@@ -2034,8 +2036,8 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
          * but the crucial bit is "On taking a vSError interrupt, HCR_EL2.VSE
          * (alias of HCR.VA) is cleared to 0."
          */
-        if ( current->arch.hcr_el2 & HCR_VA )
-            current->arch.hcr_el2 = READ_SYSREG(HCR_EL2);
+        if ( v->arch.hcr_el2 & HCR_VA )
+            v->arch.hcr_el2 = READ_SYSREG(HCR_EL2);
 
 #ifdef CONFIG_NEW_VGIC
         /*
@@ -2045,11 +2047,11 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
          * TODO: Investigate whether this is necessary to do on every
          * trap and how it can be optimised.
          */
-        vtimer_update_irqs(current);
-        vcpu_update_evtchn_irq(current);
+        vtimer_update_irqs(v);
+        vcpu_update_evtchn_irq(v);
 #endif
 
-        vgic_sync_from_lrs(current);
+        vgic_sync_from_lrs(v);
     }
 }
 
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [PATCH v3 04/13] xen/arm: Add ARCH_WORKAROUND_2 probing
  2018-06-12 11:36 ` [PATCH v3 04/13] xen/arm: Add ARCH_WORKAROUND_2 probing Julien Grall
@ 2018-06-12 16:23   ` Stefano Stabellini
  0 siblings, 0 replies; 21+ messages in thread
From: Stefano Stabellini @ 2018-06-12 16:23 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, sstabellini, andre.przywara

On Tue, 12 Jun 2018, Julien Grall wrote:
> As for Spectre variant-2, we rely on SMCCC 1.1 to provide the discovery
> mechanism for detecting the SSBD mitigation.
> 
> A new capability is also allocated for that purpose, and a config
> option.
> 
> This is part of XSA-263.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>     Changes in v3:
>         - required should be false when then mitigation is not required
>         on a given CPU
> 
>     Changes in v2:
>         - Add the switch in this patch rather than the next one.
>         - s/supported/required/
> ---
>  xen/arch/arm/Kconfig             | 10 +++++++
>  xen/arch/arm/cpuerrata.c         | 58 ++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/cpuerrata.h  | 21 +++++++++++++++
>  xen/include/asm-arm/cpufeature.h |  3 ++-
>  xen/include/asm-arm/smccc.h      |  7 +++++
>  5 files changed, 98 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 8174c0c635..0e2d027060 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -73,6 +73,16 @@ config SBSA_VUART_CONSOLE
>  	  Allows a guest to use SBSA Generic UART as a console. The
>  	  SBSA Generic UART implements a subset of ARM PL011 UART.
>  
> +config ARM_SSBD
> +	bool "Speculative Store Bypass Disable" if EXPERT = "y"
> +	depends on HAS_ALTERNATIVE
> +	default y
> +	help
> +	  This enables mitigation of bypassing of previous stores by speculative
> +	  loads.
> +
> +	  If unsure, say Y.
> +
>  endmenu
>  
>  menu "ARM errata workaround via the alternative framework"
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 1baa20654b..1a6130406c 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -235,6 +235,58 @@ static int enable_ic_inv_hardening(void *data)
>  
>  #endif
>  
> +#ifdef CONFIG_ARM_SSBD
> +
> +/*
> + * Assembly code may use the variable directly, so we need to make sure
> + * it fits in a register.
> + */
> +DEFINE_PER_CPU_READ_MOSTLY(register_t, ssbd_callback_required);
> +
> +static bool has_ssbd_mitigation(const struct arm_cpu_capabilities *entry)
> +{
> +    struct arm_smccc_res res;
> +    bool required;
> +
> +    if ( smccc_ver < SMCCC_VERSION(1, 1) )
> +        return false;
> +
> +    /*
> +     * The probe function return value is either negative (unsupported
> +     * or mitigated), positive (unaffected), or zero (requires
> +     * mitigation). We only need to do anything in the last case.
> +     */
> +    arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FID,
> +                      ARM_SMCCC_ARCH_WORKAROUND_2_FID, &res);
> +
> +    switch ( (int)res.a0 )
> +    {
> +    case ARM_SMCCC_NOT_SUPPORTED:
> +        return false;
> +
> +    case ARM_SMCCC_NOT_REQUIRED:
> +        return false;
> +
> +    case ARM_SMCCC_SUCCESS:
> +        required = true;
> +        break;
> +
> +    case 1: /* Mitigation not required on this CPU. */
> +        required = false;
> +        break;
> +
> +    default:
> +        ASSERT_UNREACHABLE();
> +        return false;
> +    }
> +
> +    if ( required )
> +        this_cpu(ssbd_callback_required) = 1;
> +
> +    return required;
> +}
> +#endif
> +
>  #define MIDR_RANGE(model, min, max)     \
>      .matches = is_affected_midr_range,  \
>      .midr_model = model,                \
> @@ -336,6 +388,12 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>          .enable = enable_ic_inv_hardening,
>      },
>  #endif
> +#ifdef CONFIG_ARM_SSBD
> +    {
> +        .capability = ARM_SSBD,
> +        .matches = has_ssbd_mitigation,
> +    },
> +#endif
>      {},
>  };
>  
> diff --git a/xen/include/asm-arm/cpuerrata.h b/xen/include/asm-arm/cpuerrata.h
> index 4e45b237c8..e628d3ff56 100644
> --- a/xen/include/asm-arm/cpuerrata.h
> +++ b/xen/include/asm-arm/cpuerrata.h
> @@ -27,9 +27,30 @@ static inline bool check_workaround_##erratum(void)             \
>  
>  CHECK_WORKAROUND_HELPER(766422, ARM32_WORKAROUND_766422, CONFIG_ARM_32)
>  CHECK_WORKAROUND_HELPER(834220, ARM64_WORKAROUND_834220, CONFIG_ARM_64)
> +CHECK_WORKAROUND_HELPER(ssbd, ARM_SSBD, CONFIG_ARM_SSBD)
>  
>  #undef CHECK_WORKAROUND_HELPER
>  
> +#ifdef CONFIG_ARM_SSBD
> +
> +#include <asm/current.h>
> +
> +DECLARE_PER_CPU(register_t, ssbd_callback_required);
> +
> +static inline bool cpu_require_ssbd_mitigation(void)
> +{
> +    return this_cpu(ssbd_callback_required);
> +}
> +
> +#else
> +
> +static inline bool cpu_require_ssbd_mitigation(void)
> +{
> +    return false;
> +}
> +
> +#endif
> +
>  #endif /* __ARM_CPUERRATA_H__ */
>  /*
>   * Local variables:
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index e557a095af..2a5c075d3b 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -43,8 +43,9 @@
>  #define SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT 5
>  #define SKIP_CTXT_SWITCH_SERROR_SYNC 6
>  #define ARM_HARDEN_BRANCH_PREDICTOR 7
> +#define ARM_SSBD 8
>  
> -#define ARM_NCAPS           8
> +#define ARM_NCAPS           9
>  
>  #ifndef __ASSEMBLY__
>  
> diff --git a/xen/include/asm-arm/smccc.h b/xen/include/asm-arm/smccc.h
> index 8342cc33fe..a6804cec99 100644
> --- a/xen/include/asm-arm/smccc.h
> +++ b/xen/include/asm-arm/smccc.h
> @@ -258,7 +258,14 @@ struct arm_smccc_res {
>                        ARM_SMCCC_OWNER_ARCH,         \
>                        0x8000)
>  
> +#define ARM_SMCCC_ARCH_WORKAROUND_2_FID             \
> +    ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,         \
> +                       ARM_SMCCC_CONV_32,           \
> +                       ARM_SMCCC_OWNER_ARCH,        \
> +                       0x7FFF)
> +
>  /* SMCCC error codes */
> +#define ARM_SMCCC_NOT_REQUIRED          (-2)
>  #define ARM_SMCCC_ERR_UNKNOWN_FUNCTION  (-1)
>  #define ARM_SMCCC_NOT_SUPPORTED         (-1)
>  #define ARM_SMCCC_SUCCESS               (0)
> -- 
> 2.11.0
> 



* Re: [PATCH v3 05/13] xen/arm: Add command line option to control SSBD mitigation
  2018-06-12 11:36 ` [PATCH v3 05/13] xen/arm: Add command line option to control SSBD mitigation Julien Grall
@ 2018-06-12 16:24   ` Stefano Stabellini
  0 siblings, 0 replies; 21+ messages in thread
From: Stefano Stabellini @ 2018-06-12 16:24 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, sstabellini, andre.przywara

On Tue, 12 Jun 2018, Julien Grall wrote:
> On a system where the firmware implements ARCH_WORKAROUND_2, it may be
> useful to either permanently enable or disable the workaround for cases
> where the user decides that they'd rather not incur the trap overhead, and
> keep the mitigation permanently on or off instead of switching it on
> exception entry/exit. In any case, default to the mitigation being enabled.
> 
> The new command-line option is implemented as a list of one option, to
> follow the x86 option and also to allow extending it more easily in the
> future.
> 
> Note that for convenience, the full implementation of the workaround is
> done in the .matches callback.
> 
> Lastly, an accessor is provided to query the state of the mitigation.
> 
> After this patch, there are 3 methods complementing each other to find the
> state of the mitigation:
>     - The capability ARM_SSBD indicates the platform is affected by the
>       vulnerability. This will also return false if the user decides to
>       force-disable the mitigation (spec-ctrl="ssbd=force-disable"). The
>       capability is useful for putting shortcuts in place using alternatives.
>     - ssbd_state indicates the global state of the mitigation (e.g.
>       unknown, force enable...). The global state is required to report
>       the state to a guest.
>     - The per-cpu ssbd_callback_required indicates whether a pCPU
>       requires calling the SMC. This allows shortcutting the SMC call
>       and saving an entry/exit to EL3.
> 
> This is part of XSA-263.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>     Changes in v3:
>         - Fix typo in the name
> 
>     Changes in v2:
>         - Move out some code to the previous patch.
>         - Update the commit message with more background
> ---
>  docs/misc/xen-command-line.markdown | 18 ++++++++
>  xen/arch/arm/cpuerrata.c            | 88 ++++++++++++++++++++++++++++++++++---
>  xen/include/asm-arm/cpuerrata.h     | 21 +++++++++
>  3 files changed, 120 insertions(+), 7 deletions(-)
> 
> diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
> index 8712a833a2..962028b6ed 100644
> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -1756,6 +1756,24 @@ enforces the maximum theoretically necessary timeout of 670ms. Any number
>  is being interpreted as a custom timeout in milliseconds. Zero or boolean
>  false disable the quirk workaround, which is also the default.
>  
> +### spec-ctrl (Arm)
> +> `= List of [ ssbd=force-disable|runtime|force-enable ]`
> +
> +Controls for speculative execution sidechannel mitigations.
> +
> +The option `ssbd=` is used to control the state of Speculative Store
> +Bypass Disable (SSBD) mitigation.
> +
> +* `ssbd=force-disable` will keep the mitigation permanently off. The guest
> +will not be able to control the state of the mitigation.
> +* `ssbd=runtime` will always turn on the mitigation when running in the
> +hypervisor context. The guest will be able to turn on/off the mitigation for
> +itself by using the firmware interface ARCH\_WORKAROUND\_2.
> +* `ssbd=force-enable` will keep the mitigation permanently on. The guest will
> +not be able to control the state of the mitigation.
> +
> +By default SSBD will be mitigated at runtime (i.e `ssbd=runtime`).
> +
>  ### spec-ctrl (x86)
>  > `= List of [ <bool>, xen=<bool>, {pv,hvm,msr-sc,rsb}=<bool>,
>  >              bti-thunk=retpoline|lfence|jmp, {ibrs,ibpb,ssbd}=<bool> ]`
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 1a6130406c..4292008692 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -237,6 +237,41 @@ static int enable_ic_inv_hardening(void *data)
>  
>  #ifdef CONFIG_ARM_SSBD
>  
> +enum ssbd_state ssbd_state = ARM_SSBD_RUNTIME;
> +
> +static int __init parse_spec_ctrl(const char *s)
> +{
> +    const char *ss;
> +    int rc = 0;
> +
> +    do {
> +        ss = strchr(s, ',');
> +        if ( !ss )
> +            ss = strchr(s, '\0');
> +
> +        if ( !strncmp(s, "ssbd=", 5) )
> +        {
> +            s += 5;
> +
> +            if ( !strncmp(s, "force-disable", ss - s) )
> +                ssbd_state = ARM_SSBD_FORCE_DISABLE;
> +            else if ( !strncmp(s, "runtime", ss - s) )
> +                ssbd_state = ARM_SSBD_RUNTIME;
> +            else if ( !strncmp(s, "force-enable", ss - s) )
> +                ssbd_state = ARM_SSBD_FORCE_ENABLE;
> +            else
> +                rc = -EINVAL;
> +        }
> +        else
> +            rc = -EINVAL;
> +
> +        s = ss + 1;
> +    } while ( *ss );
> +
> +    return rc;
> +}
> +custom_param("spec-ctrl", parse_spec_ctrl);
> +
>  /*
>   * Assembly code may use the variable directly, so we need to make sure
>   * it fits in a register.
> @@ -251,20 +286,17 @@ static bool has_ssbd_mitigation(const struct arm_cpu_capabilities *entry)
>      if ( smccc_ver < SMCCC_VERSION(1, 1) )
>          return false;
>  
> -    /*
> -     * The probe function return value is either negative (unsupported
> -     * or mitigated), positive (unaffected), or zero (requires
> -     * mitigation). We only need to do anything in the last case.
> -     */
>      arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FID,
>                        ARM_SMCCC_ARCH_WORKAROUND_2_FID, &res);
>  
>      switch ( (int)res.a0 )
>      {
>      case ARM_SMCCC_NOT_SUPPORTED:
> +        ssbd_state = ARM_SSBD_UNKNOWN;
>          return false;
>  
>      case ARM_SMCCC_NOT_REQUIRED:
> +        ssbd_state = ARM_SSBD_MITIGATED;
>          return false;
>  
>      case ARM_SMCCC_SUCCESS:
> @@ -280,8 +312,49 @@ static bool has_ssbd_mitigation(const struct arm_cpu_capabilities *entry)
>          return false;
>      }
>  
> -    if ( required )
> -        this_cpu(ssbd_callback_required) = 1;
> +    switch ( ssbd_state )
> +    {
> +    case ARM_SSBD_FORCE_DISABLE:
> +    {
> +        static bool once = true;
> +
> +        if ( once )
> +            printk("%s disabled from command-line\n", entry->desc);
> +        once = false;
> +
> +        arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 0, NULL);
> +        required = false;
> +
> +        break;
> +    }
> +
> +    case ARM_SSBD_RUNTIME:
> +        if ( required )
> +        {
> +            this_cpu(ssbd_callback_required) = 1;
> +            arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 1, NULL);
> +        }
> +
> +        break;
> +
> +    case ARM_SSBD_FORCE_ENABLE:
> +    {
> +        static bool once = true;
> +
> +        if ( once )
> +            printk("%s forced from command-line\n", entry->desc);
> +        once = false;
> +
> +        arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 1, NULL);
> +        required = true;
> +
> +        break;
> +    }
> +
> +    default:
> +        ASSERT_UNREACHABLE();
> +        return false;
> +    }
>  
>      return required;
>  }
> @@ -390,6 +463,7 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>  #endif
>  #ifdef CONFIG_ARM_SSBD
>      {
> +        .desc = "Speculative Store Bypass Disabled",
>          .capability = ARM_SSBD,
>          .matches = has_ssbd_mitigation,
>      },
> diff --git a/xen/include/asm-arm/cpuerrata.h b/xen/include/asm-arm/cpuerrata.h
> index e628d3ff56..55ddfda272 100644
> --- a/xen/include/asm-arm/cpuerrata.h
> +++ b/xen/include/asm-arm/cpuerrata.h
> @@ -31,10 +31,26 @@ CHECK_WORKAROUND_HELPER(ssbd, ARM_SSBD, CONFIG_ARM_SSBD)
>  
>  #undef CHECK_WORKAROUND_HELPER
>  
> +enum ssbd_state
> +{
> +    ARM_SSBD_UNKNOWN,
> +    ARM_SSBD_FORCE_DISABLE,
> +    ARM_SSBD_RUNTIME,
> +    ARM_SSBD_FORCE_ENABLE,
> +    ARM_SSBD_MITIGATED,
> +};
> +
>  #ifdef CONFIG_ARM_SSBD
>  
>  #include <asm/current.h>
>  
> +extern enum ssbd_state ssbd_state;
> +
> +static inline enum ssbd_state get_ssbd_state(void)
> +{
> +    return ssbd_state;
> +}
> +
>  DECLARE_PER_CPU(register_t, ssbd_callback_required);
>  
>  static inline bool cpu_require_ssbd_mitigation(void)
> @@ -49,6 +65,11 @@ static inline bool cpu_require_ssbd_mitigation(void)
>      return false;
>  }
>  
> +static inline enum ssbd_state get_ssbd_state(void)
> +{
> +    return ARM_SSBD_UNKNOWN;
> +}
> +
>  #endif
>  
>  #endif /* __ARM_CPUERRATA_H__ */
> -- 
> 2.11.0
> 



* Re: [PATCH v3 07/13] xen/arm: Simplify alternative patching of non-writable region
  2018-06-12 11:36 ` [PATCH v3 07/13] xen/arm: Simplify alternative patching of non-writable region Julien Grall
@ 2018-06-12 21:17   ` Konrad Rzeszutek Wilk
  2018-06-12 22:06     ` Julien Grall
  0 siblings, 1 reply; 21+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-06-12 21:17 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, sstabellini, andre.przywara

On Tue, Jun 12, 2018 at 12:36:37PM +0100, Julien Grall wrote:
> During the MMU setup process, Xen will set the SCTLR_EL2.WXN
> (Write-implies-eXecute-Never) bit. Because of that, the alternative code
> needs to re-map the region in a different place in order to modify the
> text section.
> 
> At the moment, the function patching the code is only aware of the
> re-mapped region. This requires the caller to mess with Xen internals in
> order to have functions such as is_active_kernel_text() working.
> 
> All the interactions with Xen internals can be removed by specifying the
> offset between the patched region and the writable region used for
> updating the instructions.
> 
> This simplification will also make it easier to integrate dynamic patching
> in a follow-up patch. Indeed, the callback address should be in the
> original region and not the re-mapped one, which is writable but non-executable.
> 
> This is part of XSA-263.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> ---
> 
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
>     Changes in v3:
>         - Add stefano's reviewed-by
> 
>     Changes in v2:
>         - Add commit message
>         - Remove comment in the code that does not make sense anymore
> ---
>  xen/arch/arm/alternative.c | 42 +++++++++++++-----------------------------
>  1 file changed, 13 insertions(+), 29 deletions(-)
> 
> diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
> index 9ffdc475d6..936cf04956 100644
> --- a/xen/arch/arm/alternative.c
> +++ b/xen/arch/arm/alternative.c
> @@ -97,12 +97,16 @@ static u32 get_alt_insn(const struct alt_instr *alt,
>  /*
>   * The region patched should be read-write to allow __apply_alternatives
>   * to replacing the instructions when necessary.
> + *
> + * @update_offset: Offset between the region patched and the writable
> + * region for the update. 0 if the patched region is writable.
>   */
> -static int __apply_alternatives(const struct alt_region *region)
> +static int __apply_alternatives(const struct alt_region *region,
> +                                paddr_t update_offset)
>  {
>      const struct alt_instr *alt;
> -    const u32 *replptr;
> -    u32 *origptr;
> +    const u32 *replptr, *origptr;
> +    u32 *updptr;
>  
>      printk(XENLOG_INFO "alternatives: Patching with alt table %p -> %p\n",
>             region->begin, region->end);
> @@ -118,6 +122,7 @@ static int __apply_alternatives(const struct alt_region *region)
>          BUG_ON(alt->alt_len != alt->orig_len);
>  
>          origptr = ALT_ORIG_PTR(alt);
> +        updptr = (void *)origptr + update_offset;
>          replptr = ALT_REPL_PTR(alt);
>  
>          nr_inst = alt->alt_len / sizeof(insn);
> @@ -125,7 +130,7 @@ static int __apply_alternatives(const struct alt_region *region)
>          for ( i = 0; i < nr_inst; i++ )
>          {
>              insn = get_alt_insn(alt, origptr + i, replptr + i);
> -            *(origptr + i) = cpu_to_le32(insn);
> +            *(updptr + i) = cpu_to_le32(insn);
>          }
>  
>          /* Ensure the new instructions reached the memory and nuke */
> @@ -162,9 +167,6 @@ static int __apply_alternatives_multi_stop(void *unused)
>          paddr_t xen_size = _end - _start;
>          unsigned int xen_order = get_order_from_bytes(xen_size);
>          void *xenmap;
> -        struct virtual_region patch_region = {
> -            .list = LIST_HEAD_INIT(patch_region.list),
> -        };
>  
>          BUG_ON(patched);
>  
> @@ -177,31 +179,13 @@ static int __apply_alternatives_multi_stop(void *unused)
>          /* Re-mapping Xen is not expected to fail during boot. */
>          BUG_ON(!xenmap);
>  
> -        /*
> -         * If we generate a new branch instruction, the target will be
> -         * calculated in this re-mapped Xen region. So we have to register
> -         * this re-mapped Xen region as a virtual region temporarily.

What about this?

Won't this mean the traps (if there are any) won't be recognized at all
during this patching?

> -         */
> -        patch_region.start = xenmap;
> -        patch_region.end = xenmap + xen_size;
> -        register_virtual_region(&patch_region);
> +        region.begin = __alt_instructions;
> +        region.end = __alt_instructions_end;
>  
> -        /*
> -         * Find the virtual address of the alternative region in the new
> -         * mapping.
> -         * alt_instr contains relative offset, so the function
> -         * __apply_alternatives will patch in the re-mapped version of
> -         * Xen.
> -         */
> -        region.begin = (void *)__alt_instructions - (void *)_start + xenmap;
> -        region.end = (void *)__alt_instructions_end - (void *)_start + xenmap;
> -
> -        ret = __apply_alternatives(&region);
> +        ret = __apply_alternatives(&region, xenmap - (void *)_start);
>          /* The patching is not expected to fail during boot. */
>          BUG_ON(ret != 0);
>  
> -        unregister_virtual_region(&patch_region);
> -
>          vunmap(xenmap);
>  
>          /* Barriers provided by the cache flushing */
> @@ -235,7 +219,7 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en
>          .end = end,
>      };
>  
> -    return __apply_alternatives(&region);
> +    return __apply_alternatives(&region, 0);
>  }
>  
>  /*
> -- 
> 2.11.0
> 



* Re: [PATCH v3 07/13] xen/arm: Simplify alternative patching of non-writable region
  2018-06-12 21:17   ` Konrad Rzeszutek Wilk
@ 2018-06-12 22:06     ` Julien Grall
  2018-06-22 13:33       ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 21+ messages in thread
From: Julien Grall @ 2018-06-12 22:06 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, nd, sstabellini, andre.przywara

Hi Konrad,

On 12/06/2018 22:17, Konrad Rzeszutek Wilk wrote:
> On Tue, Jun 12, 2018 at 12:36:37PM +0100, Julien Grall wrote:
>> During the MMU setup process, Xen will set the SCTLR_EL2.WXN
>> (Write-implies-eXecute-Never) bit. Because of that, the alternative code
>> needs to re-map the region in a different place in order to modify the
>> text section.
>>
>> At the moment, the function patching the code is only aware of the
>> re-mapped region. This requires the caller to mess with Xen internals in
>> order to have functions such as is_active_kernel_text() working.
>>
>> All the interactions with Xen internals can be removed by specifying the
>> offset between the patched region and the writable region used for
>> updating the instructions.
>>
>> This simplification will also make it easier to integrate dynamic patching
>> in a follow-up patch. Indeed, the callback address should be in the
>> original region and not the re-mapped one, which is writable but non-executable.
>>
>> This is part of XSA-263.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>
>> ---
>>
>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>
>>      Changes in v3:
>>          - Add stefano's reviewed-by
>>
>>      Changes in v2:
>>          - Add commit message
>>          - Remove comment in the code that does not make sense anymore
>> ---
>>   xen/arch/arm/alternative.c | 42 +++++++++++++-----------------------------
>>   1 file changed, 13 insertions(+), 29 deletions(-)
>>
>> diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
>> index 9ffdc475d6..936cf04956 100644
>> --- a/xen/arch/arm/alternative.c
>> +++ b/xen/arch/arm/alternative.c
>> @@ -97,12 +97,16 @@ static u32 get_alt_insn(const struct alt_instr *alt,
>>   /*
>>    * The region patched should be read-write to allow __apply_alternatives
>>    * to replacing the instructions when necessary.
>> + *
>> + * @update_offset: Offset between the region patched and the writable
>> + * region for the update. 0 if the patched region is writable.
>>    */
>> -static int __apply_alternatives(const struct alt_region *region)
>> +static int __apply_alternatives(const struct alt_region *region,
>> +                                paddr_t update_offset)
>>   {
>>       const struct alt_instr *alt;
>> -    const u32 *replptr;
>> -    u32 *origptr;
>> +    const u32 *replptr, *origptr;
>> +    u32 *updptr;
>>   
>>       printk(XENLOG_INFO "alternatives: Patching with alt table %p -> %p\n",
>>              region->begin, region->end);
>> @@ -118,6 +122,7 @@ static int __apply_alternatives(const struct alt_region *region)
>>           BUG_ON(alt->alt_len != alt->orig_len);
>>   
>>           origptr = ALT_ORIG_PTR(alt);
>> +        updptr = (void *)origptr + update_offset;
>>           replptr = ALT_REPL_PTR(alt);
>>   
>>           nr_inst = alt->alt_len / sizeof(insn);
>> @@ -125,7 +130,7 @@ static int __apply_alternatives(const struct alt_region *region)
>>           for ( i = 0; i < nr_inst; i++ )
>>           {
>>               insn = get_alt_insn(alt, origptr + i, replptr + i);
>> -            *(origptr + i) = cpu_to_le32(insn);
>> +            *(updptr + i) = cpu_to_le32(insn);
>>           }
>>   
>>           /* Ensure the new instructions reached the memory and nuke */
>> @@ -162,9 +167,6 @@ static int __apply_alternatives_multi_stop(void *unused)
>>           paddr_t xen_size = _end - _start;
>>           unsigned int xen_order = get_order_from_bytes(xen_size);
>>           void *xenmap;
>> -        struct virtual_region patch_region = {
>> -            .list = LIST_HEAD_INIT(patch_region.list),
>> -        };
>>   
>>           BUG_ON(patched);
>>   
>> @@ -177,31 +179,13 @@ static int __apply_alternatives_multi_stop(void *unused)
>>           /* Re-mapping Xen is not expected to fail during boot. */
>>           BUG_ON(!xenmap);
>>   
>> -        /*
>> -         * If we generate a new branch instruction, the target will be
>> -         * calculated in this re-mapped Xen region. So we have to register
>> -         * this re-mapped Xen region as a virtual region temporarily.
> 
> What about this?
> 
> Won't this mean the traps (if there are any) won't be recognized at all
> during this patching?

What do you mean by recognized? This new region will only be accessed to 
write instructions. The only potential fault on that region is a data abort.

So I am not sure why we would need to register it as a virtual region here.

Cheers,

-- 
Julien Grall



* Re: [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263)
  2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
                   ` (12 preceding siblings ...)
  2018-06-12 11:36 ` [PATCH v3 13/13] xen/arm: Avoid to use current everywhere in enter_hypervisor_head Julien Grall
@ 2018-06-22  2:01 ` Julien Grall
  13 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-22  2:01 UTC (permalink / raw)
  To: xen-devel; +Cc: andre.przywara, sstabellini



On 06/12/2018 12:36 PM, Julien Grall wrote:
> Hi all,
> 
> This patch series implement the Xen hypervisor side of the "Spectre-v4"
> (CVE-2018-3639) mitigation known as "Speculative Store Bypass Disable"
> (SSBD).
> 
> More information can be found at:
>    https://bugs.chromium.org/p/project-zero/issues/detail?id=1528
>    https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability
> 
> For all released Arm Cortex-A CPUs that are affected by this issue, the
> preferred mitigation is simply to set a chicken bit in the firmware during
> CPU initialization, and therefore no change to Xen is required. Other CPUs
> may require the chicken bit to be toggled dynamically (for example, when
> switching between kernel-mode and hypervisor-mode) and this is achieved by
> calling into EL3 via an SMC which has been published as part of the latest
> SMCCC specification:
>    https://developer.arm.com/cache-speculation-vulnerability-firmware-specification
> 
> as well as an ATF update for the released ARM cores affected by SSBD:
>    https://github.com/ARM-software/arm-trusted-firmware/pull/1392
> 
> These patches provide the following:
>    1. Safe probing of firmware to establish which CPUs in the system
>       require calling into EL3 as part of the mitigation
>    2. A command-line option to force SSBD mitigation to be always on,
>       always off, or dynamically toggled (default) for CPUs that require
>       the EL3 call.
>    3. An initial implementation of the call via Xen, which exposes the
>       mitigation to the guest via an HVC interface.
> 
> This series also provides bug fixes and new infrastructure required to
> implement the mitigation:
>    1. Zeroed each vCPU stack
>    2. Provide generic assembly macros
>    3. Provide alternative callback (RFC)
> 
> A branch can be found with all the patches at:
>    https://xenbits.xen.org/git-http/people/julieng/xen-unstable.git
>    branch ssbd/v3

I have merged the series into my next branch. I will commit it once the
tree is re-opened.

Cheers,

-- 
Julien Grall



* Re: [PATCH v3 07/13] xen/arm: Simplify alternative patching of non-writable region
  2018-06-12 22:06     ` Julien Grall
@ 2018-06-22 13:33       ` Konrad Rzeszutek Wilk
  2018-06-25  9:07         ` Julien Grall
  0 siblings, 1 reply; 21+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-06-22 13:33 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, nd, sstabellini, andre.przywara

On Tue, Jun 12, 2018 at 11:06:16PM +0100, Julien Grall wrote:
> Hi Konrad,
> 
> On 12/06/2018 22:17, Konrad Rzeszutek Wilk wrote:
> > On Tue, Jun 12, 2018 at 12:36:37PM +0100, Julien Grall wrote:
> > > During the MMU setup process, Xen will set the SCTLR_EL2.WXN
> > > (Write-implies-eXecute-Never) bit. Because of that, the alternative code
> > > needs to re-map the region in a different place in order to modify the
> > > text section.
> > > 
> > > At the moment, the function patching the code is only aware of the
> > > re-mapped region. This requires the caller to mess with Xen internals in
> > > order to have functions such as is_active_kernel_text() working.
> > > 
> > > All the interactions with Xen internals can be removed by specifying the
> > > offset between the patched region and the writable region used for
> > > updating the instructions.
> > > 
> > > This simplification will also make it easier to integrate dynamic patching
> > > in a follow-up patch. Indeed, the callback address should be in the
> > > original region and not the re-mapped one, which is writable but non-executable.
> > > 
> > > This is part of XSA-263.
> > > 
> > > Signed-off-by: Julien Grall <julien.grall@arm.com>
> > > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > > 
> > > ---
> > > 
> > > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > 
> > >      Changes in v3:
> > >          - Add Stefano's reviewed-by
> > > 
> > >      Changes in v2:
> > >          - Add commit message
> > >          - Remove comment in the code that does not make sense anymore
> > > ---
> > >   xen/arch/arm/alternative.c | 42 +++++++++++++-----------------------------
> > >   1 file changed, 13 insertions(+), 29 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
> > > index 9ffdc475d6..936cf04956 100644
> > > --- a/xen/arch/arm/alternative.c
> > > +++ b/xen/arch/arm/alternative.c
> > > @@ -97,12 +97,16 @@ static u32 get_alt_insn(const struct alt_instr *alt,
> > >   /*
> > >    * The region patched should be read-write to allow __apply_alternatives
> > >    * to replacing the instructions when necessary.
> > > + *
> > > + * @update_offset: Offset between the region patched and the writable
> > > + * region for the update. 0 if the patched region is writable.
> > >    */
> > > -static int __apply_alternatives(const struct alt_region *region)
> > > +static int __apply_alternatives(const struct alt_region *region,
> > > +                                paddr_t update_offset)
> > >   {
> > >       const struct alt_instr *alt;
> > > -    const u32 *replptr;
> > > -    u32 *origptr;
> > > +    const u32 *replptr, *origptr;
> > > +    u32 *updptr;
> > >       printk(XENLOG_INFO "alternatives: Patching with alt table %p -> %p\n",
> > >              region->begin, region->end);
> > > @@ -118,6 +122,7 @@ static int __apply_alternatives(const struct alt_region *region)
> > >           BUG_ON(alt->alt_len != alt->orig_len);
> > >           origptr = ALT_ORIG_PTR(alt);
> > > +        updptr = (void *)origptr + update_offset;
> > >           replptr = ALT_REPL_PTR(alt);
> > >           nr_inst = alt->alt_len / sizeof(insn);
> > > @@ -125,7 +130,7 @@ static int __apply_alternatives(const struct alt_region *region)
> > >           for ( i = 0; i < nr_inst; i++ )
> > >           {
> > >               insn = get_alt_insn(alt, origptr + i, replptr + i);
> > > -            *(origptr + i) = cpu_to_le32(insn);
> > > +            *(updptr + i) = cpu_to_le32(insn);
> > >           }
> > >           /* Ensure the new instructions reached the memory and nuke */
> > > @@ -162,9 +167,6 @@ static int __apply_alternatives_multi_stop(void *unused)
> > >           paddr_t xen_size = _end - _start;
> > >           unsigned int xen_order = get_order_from_bytes(xen_size);
> > >           void *xenmap;
> > > -        struct virtual_region patch_region = {
> > > -            .list = LIST_HEAD_INIT(patch_region.list),
> > > -        };
> > >           BUG_ON(patched);
> > > @@ -177,31 +179,13 @@ static int __apply_alternatives_multi_stop(void *unused)
> > >           /* Re-mapping Xen is not expected to fail during boot. */
> > >           BUG_ON(!xenmap);
> > > -        /*
> > > -         * If we generate a new branch instruction, the target will be
> > > -         * calculated in this re-mapped Xen region. So we have to register
> > > -         * this re-mapped Xen region as a virtual region temporarily.
> > 
> > What about this?
> > 
> > Won't this mean the traps (if there are any) won't be recognized at all
> > during this patching?
> 
> What do you mean by recognized? This new region will only be accessed to
> write instructions. The only potential fault on that region is a data abort.

Exactly the data abort.

My recollection is that a data abort would print a stack trace, and the 
stack trace needs symbol information, which it gets from the registered 
virtual_region list.

But if the re-mapped region is not registered as a virtual_region, the 
trace won't contain the name of the function that took the data abort 
while the patching is done.



* Re: [PATCH v3 07/13] xen/arm: Simplify alternative patching of non-writable region
  2018-06-22 13:33       ` Konrad Rzeszutek Wilk
@ 2018-06-25  9:07         ` Julien Grall
  0 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2018-06-25  9:07 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, nd, sstabellini, andre.przywara

Hi Konrad,

On 22/06/18 14:33, Konrad Rzeszutek Wilk wrote:
> On Tue, Jun 12, 2018 at 11:06:16PM +0100, Julien Grall wrote:
>> On 12/06/2018 22:17, Konrad Rzeszutek Wilk wrote:
>>> On Tue, Jun 12, 2018 at 12:36:37PM +0100, Julien Grall wrote:
>>> Won't this mean the traps (if there are any) won't be recognized at all
>>> during this patching?
>>
>> What do you mean by recognized? This new region will only be accessed to
>> write instructions. The only potential fault on that region is a data abort.
> 
> Exactly the data abort.
> 
> My recollection is that a data abort would print a stack trace, and
> the stack trace needs symbol information, which it gets from the
> registered virtual_region list.
> 
> But if the re-mapped region is not registered as a virtual_region, the
> trace won't contain the name of the function that took the data abort
> while the patching is done.

I am not sure I understand what you mean by "the name of the function 
that took the data abort". Could you give a bit more detail?

Cheers,

-- 
Julien Grall


end of thread, other threads:[~2018-06-25  9:07 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-12 11:36 [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
2018-06-12 11:36 ` [PATCH v3 01/13] xen/arm: domain: Zero the per-vCPU cpu_info Julien Grall
2018-06-12 11:36 ` [PATCH v3 02/13] xen/arm64: entry: Use named label in guest_sync Julien Grall
2018-06-12 11:36 ` [PATCH v3 03/13] xen/arm: setup: Check errata for boot CPU later on Julien Grall
2018-06-12 11:36 ` [PATCH v3 04/13] xen/arm: Add ARCH_WORKAROUND_2 probing Julien Grall
2018-06-12 16:23   ` Stefano Stabellini
2018-06-12 11:36 ` [PATCH v3 05/13] xen/arm: Add command line option to control SSBD mitigation Julien Grall
2018-06-12 16:24   ` Stefano Stabellini
2018-06-12 11:36 ` [PATCH v3 06/13] xen/arm: Add ARCH_WORKAROUND_2 support for guests Julien Grall
2018-06-12 11:36 ` [PATCH v3 07/13] xen/arm: Simplify alternative patching of non-writable region Julien Grall
2018-06-12 21:17   ` Konrad Rzeszutek Wilk
2018-06-12 22:06     ` Julien Grall
2018-06-22 13:33       ` Konrad Rzeszutek Wilk
2018-06-25  9:07         ` Julien Grall
2018-06-12 11:36 ` [PATCH v3 08/13] xen/arm: alternatives: Add dynamic patching feature Julien Grall
2018-06-12 11:36 ` [PATCH v3 09/13] xen/arm64: Add generic assembly macros Julien Grall
2018-06-12 11:36 ` [PATCH v3 10/13] xen/arm64: Implement a fast path for handling SMCCC_ARCH_WORKAROUND_2 Julien Grall
2018-06-12 11:36 ` [PATCH v3 11/13] xen/arm: Kconfig: Move HARDEN_BRANCH_PREDICTOR under "Architecture features" Julien Grall
2018-06-12 11:36 ` [PATCH v3 12/13] xen/arm: smccc: Fix indentation in ARM_SMCCC_ARCH_WORKAROUND_1_FID Julien Grall
2018-06-12 11:36 ` [PATCH v3 13/13] xen/arm: Avoid to use current everywhere in enter_hypervisor_head Julien Grall
2018-06-22  2:01 ` [PATCH v3 00/13] xen/arm: SSBD (aka Spectre-v4) mitigation (XSA-263) Julien Grall
