* [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors
@ 2017-03-31 13:07 Wei Chen
  2017-03-31 13:07 ` [PATCH v3 01/19] xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome check Wei Chen
                   ` (18 more replies)
  0 siblings, 19 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

From XSA-201, we know that a guest could trigger SErrors when accessing
memory-mapped HW in a non-conventional way. In the patches for XSA-201,
we crash the guest when we capture such asynchronous aborts to avoid data
corruption.

In order to distinguish guest-generated SErrors from hypervisor-generated
SErrors, we have to place SError checking code in every EL1 -> EL2 path.
The dsb/isb this requires adds overhead to the entries.

But not all platforms want to categorize the SErrors. For example, on a
host that is running only trusted guests, the administrator can confirm
that none of the guests running on the host will trigger such SErrors. In
this use case, we should provide some options to the administrator to avoid
categorizing the SErrors, and thereby reduce the overhead of dsb/isb.

We provide the following 3 options to the administrator to determine how to
handle the SErrors:

* `diverse`:
  The hypervisor will distinguish guest SErrors from hypervisor SErrors.
  The guest-generated SErrors will be forwarded to guests; the
  hypervisor-generated SErrors will cause the whole system to crash.
  It requires:
  1. Place dsb/isb on all EL1 -> EL2 trap entries to categorize SErrors
     correctly.
  2. Place dsb/isb on EL2 -> EL1 return paths to prevent slipping hypervisor
     SErrors to guests.
  3. Place dsb/isb in context switch to isolate the SErrors between 2 vCPUs.

* `forward`:
  The hypervisor will not distinguish guest SErrors from hypervisor SErrors.
  All SErrors will be forwarded to guests, except the SErrors generated when
  the idle vCPU is running. The idle domain doesn't have the ability to handle
  the SErrors, so we have to crash the whole system when we get SErrors with
  the idle vCPU. This option will avoid most of the overhead of the dsb/isb,
  except the dsb/isb in the context switch, which is used to isolate the
  SErrors between 2 vCPUs.

* `panic`:
  The hypervisor will not distinguish guest SErrors from hypervisor SErrors.
  All SErrors will crash the whole system. This option will avoid all overhead
  of the dsb/isb.
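The three behaviours above boil down to a string-to-enum mapping at boot time.
As a rough host-side illustration (the enum, function, and value names below
are hypothetical, not the identifiers this series actually introduces), the
parsing could look like:

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical sketch of mapping a "serrors=" command line value to one
 * of the three behaviours described above. Names are illustrative only.
 */
enum serrors_op {
    SERRORS_DIVERSE, /* categorize SErrors; full dsb/isb cost */
    SERRORS_FORWARD, /* forward all SErrors to guests (crash on idle vCPU) */
    SERRORS_PANIC,   /* crash the whole system on any SError */
};

static enum serrors_op parse_serrors(const char *s)
{
    if ( strcmp(s, "forward") == 0 )
        return SERRORS_FORWARD;
    if ( strcmp(s, "panic") == 0 )
        return SERRORS_PANIC;
    /* "diverse" is the conservative default: full categorization. */
    return SERRORS_DIVERSE;
}
```

In real Xen such a handler would be registered via custom_param(), like the
parse_vwfi handler visible in the context of patch 02.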

---
The v2->v3 changes have been noted in the separate patches.

Wei Chen (19):
  xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome
    check
  xen/arm: Introduce a helper to get default HCR_EL2 flags
  xen/arm: Set and restore HCR_EL2 register for each vCPU separately
  xen/arm: Avoid setting/clearing HCR_RW at every context switch
  xen/arm: Save HCR_EL2 when a guest took the SError
  xen/arm: Introduce a virtual abort injection helper
  xen/arm: Introduce a command line parameter for SErrors/Aborts
  xen/arm: Introduce a initcall to update cpu_hwcaps by serror_op
  xen/arm64: Use alternative to skip the check of pending serrors
  xen/arm32: Use alternative to skip the check of pending serrors
  xen/arm: Move macro VABORT_GEN_BY_GUEST to common header
  xen/arm: Introduce new helpers to handle guest/hyp SErrors
  xen/arm: Replace do_trap_guest_serror with new helpers
  xen/arm: Unmask the Abort/SError bit in the exception entries
  xen/arm: Introduce a helper to check local abort is enabled
  xen/arm: Introduce a macro to synchronize SError
  xen/arm: Isolate the SError between the context switch of 2 vCPUs
  xen/arm: Prevent slipping hypervisor SError to guest
  xen/arm: Handle guest external abort as guest SError

 docs/misc/xen-command-line.markdown   |  44 ++++++++
 xen/arch/arm/arm32/asm-offsets.c      |   1 +
 xen/arch/arm/arm32/entry.S            |  28 ++++-
 xen/arch/arm/arm32/traps.c            |   5 +-
 xen/arch/arm/arm64/asm-offsets.c      |   1 +
 xen/arch/arm/arm64/domctl.c           |   6 ++
 xen/arch/arm/arm64/entry.S            | 105 +++++++++----------
 xen/arch/arm/arm64/traps.c            |   2 +-
 xen/arch/arm/domain.c                 |  19 ++++
 xen/arch/arm/domain_build.c           |   7 ++
 xen/arch/arm/p2m.c                    |  10 +-
 xen/arch/arm/traps.c                  | 187 +++++++++++++++++++++++++++++-----
 xen/include/asm-arm/arm32/processor.h |  12 +--
 xen/include/asm-arm/arm32/system.h    |   7 ++
 xen/include/asm-arm/arm64/processor.h |   3 +-
 xen/include/asm-arm/arm64/system.h    |   7 ++
 xen/include/asm-arm/cpufeature.h      |   4 +-
 xen/include/asm-arm/domain.h          |   4 +
 xen/include/asm-arm/processor.h       |  28 ++++-
 19 files changed, 369 insertions(+), 111 deletions(-)

-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v3 01/19] xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome check
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 14:08   ` Julien Grall
  2017-03-31 18:26   ` Stefano Stabellini
  2017-03-31 13:07 ` [PATCH v3 02/19] xen/arm: Introduce a helper to get default HCR_EL2 flags Wei Chen
                   ` (17 subsequent siblings)
  18 siblings, 2 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

Xen does an exception syndrome check when some types of exception are
taken in EL2. The syndrome check code reads the ESR_EL2 register
directly, but in some situations this register may be overwritten by a
nested exception.

For example, if we re-enable IRQs before reading ESR_EL2, Xen may enter
IRQ exception mode and return to the processor with a clobbered ESR_EL2
(see ARM ARM DDI 0487A.j D7.2.25).

In this case the guest exception syndrome has been overwritten, and we
would check the syndrome of a guest sync exception with an incorrect
ESR_EL2 value. So we want to save ESR_EL2 to cpu_user_regs as soon as
the exception is taken in EL2, to avoid using an incorrect syndrome value.

In order to save ESR_EL2, we add a 32-bit member hsr to cpu_user_regs.
But when saving registers in the trap entry, we use stp to save ELR and
CPSR at the same time through 64-bit general registers. If we kept this
code, hsr would be clobbered by the upper 32 bits of the CPSR store. So
adjust the code to use str to save ELR in a separate instruction and use
stp to save CPSR and HSR at the same time through 32-bit general
registers. This change also affects the register restore on trap exit:
we can't use ldp to restore ELR and CPSR from the stack at the same
time, so we use ldr to restore them separately.
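The layout constraint can be seen with a small host-side model (not Xen
code: the struct below is a cut-down stand-in for cpu_user_regs, and
little-endian layout is assumed, as Xen on ARM uses). A single 64-bit store
at the offset of cpsr necessarily spills into the adjacent 32-bit hsr
field, which is why the one stp of ELR and SPSR had to be split:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Cut-down model of the tail of cpu_user_regs after this patch:
 * cpsr and hsr are adjacent 32-bit fields. */
struct frame {
    uint64_t pc;   /* ELR_EL2 */
    uint32_t cpsr; /* SPSR_EL2 */
    uint32_t hsr;  /* ESR_EL2 */
};

/* What the old "stp x22, x23" effectively did for the SPSR half:
 * a 64-bit store starting at the cpsr offset. */
static void store64_at_cpsr(struct frame *f, uint64_t spsr64)
{
    memcpy(&f->cpsr, &spsr64, sizeof(spsr64)); /* spills into hsr */
}

/* The fixed sequence, "stp w22, w23": a pair of 32-bit stores. */
static void store32_pair(struct frame *f, uint32_t spsr, uint32_t esr)
{
    f->cpsr = spsr;
    f->hsr  = esr;
}
```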

Signed-off-by: Wei Chen <Wei.Chen@arm.com>

---
Note:
This patch is a bug fix; the bug affects the 4.8 and 4.7 source trees
too.

v2->v3:
1. Add note to the commit message.
2. Read the ESR_EL2 value from the vCPU context instead of reading the
   register directly in the places where the ESR_EL2 value is used.
---
 xen/arch/arm/arm32/asm-offsets.c      |  1 +
 xen/arch/arm/arm32/entry.S            |  3 +++
 xen/arch/arm/arm64/asm-offsets.c      |  1 +
 xen/arch/arm/arm64/entry.S            | 13 +++++++++----
 xen/arch/arm/arm64/traps.c            |  2 +-
 xen/arch/arm/traps.c                  |  4 ++--
 xen/include/asm-arm/arm32/processor.h |  2 +-
 xen/include/asm-arm/arm64/processor.h |  3 +--
 8 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/arm32/asm-offsets.c b/xen/arch/arm/arm32/asm-offsets.c
index f8e6b53..5b543ab 100644
--- a/xen/arch/arm/arm32/asm-offsets.c
+++ b/xen/arch/arm/arm32/asm-offsets.c
@@ -26,6 +26,7 @@ void __dummy__(void)
    OFFSET(UREGS_lr, struct cpu_user_regs, lr);
    OFFSET(UREGS_pc, struct cpu_user_regs, pc);
    OFFSET(UREGS_cpsr, struct cpu_user_regs, cpsr);
+   OFFSET(UREGS_hsr, struct cpu_user_regs, hsr);
 
    OFFSET(UREGS_LR_usr, struct cpu_user_regs, lr_usr);
    OFFSET(UREGS_SP_usr, struct cpu_user_regs, sp_usr);
diff --git a/xen/arch/arm/arm32/entry.S b/xen/arch/arm/arm32/entry.S
index 2a6f4f0..2187226 100644
--- a/xen/arch/arm/arm32/entry.S
+++ b/xen/arch/arm/arm32/entry.S
@@ -23,6 +23,9 @@
         add r11, sp, #UREGS_kernel_sizeof+4;                            \
         str r11, [sp, #UREGS_sp];                                       \
                                                                         \
+        mrc CP32(r11, HSR);             /* Save exception syndrome */   \
+        str r11, [sp, #UREGS_hsr];                                      \
+                                                                        \
         mrs r11, SPSR_hyp;                                              \
         str r11, [sp, #UREGS_cpsr];                                     \
         and r11, #PSR_MODE_MASK;                                        \
diff --git a/xen/arch/arm/arm64/asm-offsets.c b/xen/arch/arm/arm64/asm-offsets.c
index 69ea92a..ce24e44 100644
--- a/xen/arch/arm/arm64/asm-offsets.c
+++ b/xen/arch/arm/arm64/asm-offsets.c
@@ -27,6 +27,7 @@ void __dummy__(void)
    OFFSET(UREGS_SP, struct cpu_user_regs, sp);
    OFFSET(UREGS_PC, struct cpu_user_regs, pc);
    OFFSET(UREGS_CPSR, struct cpu_user_regs, cpsr);
+   OFFSET(UREGS_ESR_el2, struct cpu_user_regs, hsr);
 
    OFFSET(UREGS_SPSR_el1, struct cpu_user_regs, spsr_el1);
 
diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index c181b5e..02802c0 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -121,9 +121,13 @@ lr      .req    x30             // link register
 
         stp     lr, x21, [sp, #UREGS_LR]
 
-        mrs     x22, elr_el2
-        mrs     x23, spsr_el2
-        stp     x22, x23, [sp, #UREGS_PC]
+        mrs     x21, elr_el2
+        str     x21, [sp, #UREGS_PC]
+
+        add     x21, sp, #UREGS_CPSR
+        mrs     x22, spsr_el2
+        mrs     x23, esr_el2
+        stp     w22, w23, [x21]
 
         .endm
 
@@ -307,7 +311,8 @@ ENTRY(return_to_new_vcpu64)
 return_from_trap:
         msr     daifset, #2 /* Mask interrupts */
 
-        ldp     x21, x22, [sp, #UREGS_PC]       // load ELR, SPSR
+        ldr     x21, [sp, #UREGS_PC]            // load ELR
+        ldr     w22, [sp, #UREGS_CPSR]          // load SPSR
 
         pop     x0, x1
         pop     x2, x3
diff --git a/xen/arch/arm/arm64/traps.c b/xen/arch/arm/arm64/traps.c
index 8e89376..36b3a30 100644
--- a/xen/arch/arm/arm64/traps.c
+++ b/xen/arch/arm/arm64/traps.c
@@ -32,7 +32,7 @@ static const char *handler[]= {
 
 asmlinkage void do_bad_mode(struct cpu_user_regs *regs, int reason)
 {
-    union hsr hsr = { .bits = READ_SYSREG32(ESR_EL2) };
+    union hsr hsr = { .bits = regs->hsr };
 
     printk("Bad mode in %s handler detected\n", handler[reason]);
     printk("ESR=0x%08"PRIx32":  EC=%"PRIx32", IL=%"PRIx32", ISS=%"PRIx32"\n",
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 614501f..6137272 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -843,7 +843,7 @@ static void _show_registers(struct cpu_user_regs *regs,
     printk("   HCR_EL2: %016"PRIregister"\n", READ_SYSREG(HCR_EL2));
     printk(" TTBR0_EL2: %016"PRIx64"\n", READ_SYSREG64(TTBR0_EL2));
     printk("\n");
-    printk("   ESR_EL2: %08"PRIx32"\n", READ_SYSREG32(ESR_EL2));
+    printk("   ESR_EL2: %08"PRIx32"\n", regs->hsr);
     printk(" HPFAR_EL2: %016"PRIregister"\n", READ_SYSREG(HPFAR_EL2));
 
 #ifdef CONFIG_ARM_32
@@ -2641,7 +2641,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
 
 asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
 {
-    const union hsr hsr = { .bits = READ_SYSREG32(ESR_EL2) };
+    const union hsr hsr = { .bits = regs->hsr };
 
     enter_hypervisor_head(regs);
 
diff --git a/xen/include/asm-arm/arm32/processor.h b/xen/include/asm-arm/arm32/processor.h
index db3b17b..f6d5df3 100644
--- a/xen/include/asm-arm/arm32/processor.h
+++ b/xen/include/asm-arm/arm32/processor.h
@@ -37,7 +37,7 @@ struct cpu_user_regs
         uint32_t pc, pc32;
     };
     uint32_t cpsr; /* Return mode */
-    uint32_t pad0; /* Doubleword-align the kernel half of the frame */
+    uint32_t hsr;  /* Exception Syndrome */
 
     /* Outer guest frame only from here on... */
 
diff --git a/xen/include/asm-arm/arm64/processor.h b/xen/include/asm-arm/arm64/processor.h
index b0726ff..24f836b 100644
--- a/xen/include/asm-arm/arm64/processor.h
+++ b/xen/include/asm-arm/arm64/processor.h
@@ -66,8 +66,7 @@ struct cpu_user_regs
     /* Return address and mode */
     __DECL_REG(pc,           pc32);             /* ELR_EL2 */
     uint32_t cpsr;                              /* SPSR_EL2 */
-
-    uint32_t pad0; /* Align end of kernel frame. */
+    uint32_t hsr;                               /* ESR_EL2 */
 
     /* Outer guest frame only from here on... */
 
-- 
2.7.4




* [PATCH v3 02/19] xen/arm: Introduce a helper to get default HCR_EL2 flags
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
  2017-03-31 13:07 ` [PATCH v3 01/19] xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome check Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 14:10   ` Julien Grall
  2017-03-31 18:29   ` Stefano Stabellini
  2017-03-31 13:07 ` [PATCH v3 03/19] xen/arm: Set and restore HCR_EL2 register for each vCPU separately Wei Chen
                   ` (16 subsequent siblings)
  18 siblings, 2 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

We want to add the HCR_EL2 register to the Xen context switch, and each
copy of HCR_EL2 in the vcpu structure will be initialized with the same
set of trap flags as the HCR_EL2 register. Introduce a helper here to
represent these flags so they can be reused easily.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
---
 xen/arch/arm/traps.c            | 11 ++++++++---
 xen/include/asm-arm/processor.h |  2 ++
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 6137272..e6d88f2 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -115,6 +115,13 @@ static void __init parse_vwfi(const char *s)
 }
 custom_param("vwfi", parse_vwfi);
 
+register_t get_default_hcr_flags(void)
+{
+    return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
+             (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
+             HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB);
+}
+
 void init_traps(void)
 {
     /* Setup Hyp vector base */
@@ -139,9 +146,7 @@ void init_traps(void)
                  CPTR_EL2);
 
     /* Setup hypervisor traps */
-    WRITE_SYSREG(HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
-                 (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
-                 HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB,HCR_EL2);
+    WRITE_SYSREG(get_default_hcr_flags(), HCR_EL2);
     isb();
 }
 
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index afc0e9a..4b6338b 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -708,6 +708,8 @@ int call_smc(register_t function_id, register_t arg0, register_t arg1,
 
 void do_trap_guest_error(struct cpu_user_regs *regs);
 
+register_t get_default_hcr_flags(void);
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_ARM_PROCESSOR_H */
 /*
-- 
2.7.4




* [PATCH v3 03/19] xen/arm: Set and restore HCR_EL2 register for each vCPU separately
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
  2017-03-31 13:07 ` [PATCH v3 01/19] xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome check Wei Chen
  2017-03-31 13:07 ` [PATCH v3 02/19] xen/arm: Introduce a helper to get default HCR_EL2 flags Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 14:11   ` Julien Grall
  2017-03-31 18:28   ` Stefano Stabellini
  2017-03-31 13:07 ` [PATCH v3 04/19] xen/arm: Avoid setting/clearing HCR_RW at every context switch Wei Chen
                   ` (15 subsequent siblings)
  18 siblings, 2 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

Different domains may have different HCR_EL2 flags. For example, a
64-bit domain needs the HCR_RW flag but a 32-bit domain does not. So we
give each domain a default HCR_EL2 value and save it in the vCPU's
context.

The HCR_EL2 register has only one bit that can be updated by hardware
without an explicit write (HCR_VSE). Since we don't use this bit
currently, we can consider that the HCR_EL2 register is not modified
while the guest is running, so saving HCR_EL2 when the guest exits to
the hypervisor is not necessary. We just have to restore this register
for each vCPU while context switching.

p2m_restore_state, which is invoked during the context switch, already
includes a write of HCR_EL2. It updates the HCR_EL2.RW bit to tell the
hardware how to interpret the stage-1 page table, as the encodings are
different between AArch64 and AArch32. We can reuse this write to
restore HCR_EL2 for each vCPU. Of course, the value of each vCPU's
HCR_EL2 has to be adjusted to hold the proper HCR_EL2.RW bit in this
function. In a later patch of this series, we will set HCR_EL2.RW for
each vCPU while the domain is being created.

Signed-off-by: wei chen <Wei.Chen@arm.com>
---
v2->v3:
1. Squash patches #2 and #3 of v2 into this patch.
2. Remove the HCR_EL2 initialization of the idle vCPU, as it wouldn't
   be used.
3. Revert the change of vwfi.
4. Use the helper introduced in the previous patch to initialize
   HCR_EL2 for each vCPU.
5. Restore the initialization of HCR_EL2 in init_traps.
---
 xen/arch/arm/domain.c        | 2 ++
 xen/arch/arm/p2m.c           | 9 +++------
 xen/include/asm-arm/domain.h | 3 +++
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index bb327da..5d18bb0 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -513,6 +513,8 @@ int vcpu_initialise(struct vcpu *v)
 
     v->arch.actlr = READ_SYSREG32(ACTLR_EL1);
 
+    v->arch.hcr_el2 = get_default_hcr_flags();
+
     processor_vcpu_initialise(v);
 
     if ( (rc = vcpu_vgic_init(v)) != 0 )
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6263760..83c4b7d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -128,27 +128,24 @@ void p2m_save_state(struct vcpu *p)
 
 void p2m_restore_state(struct vcpu *n)
 {
-    register_t hcr;
     struct p2m_domain *p2m = &n->domain->arch.p2m;
     uint8_t *last_vcpu_ran;
 
     if ( is_idle_vcpu(n) )
         return;
 
-    hcr = READ_SYSREG(HCR_EL2);
-
     WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
     isb();
 
     if ( is_32bit_domain(n->domain) )
-        hcr &= ~HCR_RW;
+        n->arch.hcr_el2 &= ~HCR_RW;
     else
-        hcr |= HCR_RW;
+        n->arch.hcr_el2 |= HCR_RW;
 
     WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1);
     isb();
 
-    WRITE_SYSREG(hcr, HCR_EL2);
+    WRITE_SYSREG(n->arch.hcr_el2, HCR_EL2);
     isb();
 
     last_vcpu_ran = &p2m->last_vcpu_ran[smp_processor_id()];
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 09fe502..7b1dacc 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -204,6 +204,9 @@ struct arch_vcpu
     register_t tpidr_el1;
     register_t tpidrro_el0;
 
+    /* HYP configuration */
+    register_t hcr_el2;
+
     uint32_t teecr, teehbr; /* ThumbEE, 32-bit guests only */
 #ifdef CONFIG_ARM_32
     /*
-- 
2.7.4




* [PATCH v3 04/19] xen/arm: Avoid setting/clearing HCR_RW at every context switch
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (2 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 03/19] xen/arm: Set and restore HCR_EL2 register for each vCPU separately Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 13:07 ` [PATCH v3 05/19] xen/arm: Save HCR_EL2 when a guest took the SError Wei Chen
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

The HCR_EL2 flags for 64-bit and 32-bit domains are different. But when
we initialize HCR_EL2 for vcpu0 of Dom0 and all vcpus of a DomU in
vcpu_initialise, we don't yet know the domain's address size. We had to
use compatible flags to initialize HCR_EL2, and then set HCR_RW for
64-bit domains or clear it for 32-bit domains at every context switch.

But now that we have added HCR_EL2 to the vCPU's context, this
behaviour seems a little fussy. We can update the HCR_RW bit in the
vCPU's context as soon as we know the domain's address size, to avoid
setting/clearing HCR_RW at every context switch.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Acked-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/arm64/domctl.c  | 6 ++++++
 xen/arch/arm/domain.c        | 5 +++++
 xen/arch/arm/domain_build.c  | 7 +++++++
 xen/arch/arm/p2m.c           | 5 -----
 xen/include/asm-arm/domain.h | 1 +
 5 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
index 44e1e7b..ab8781f 100644
--- a/xen/arch/arm/arm64/domctl.c
+++ b/xen/arch/arm/arm64/domctl.c
@@ -14,6 +14,8 @@
 
 static long switch_mode(struct domain *d, enum domain_type type)
 {
+    struct vcpu *v;
+
     if ( d == NULL )
         return -EINVAL;
     if ( d->tot_pages != 0 )
@@ -23,6 +25,10 @@ static long switch_mode(struct domain *d, enum domain_type type)
 
     d->arch.type = type;
 
+    if ( is_64bit_domain(d) )
+        for_each_vcpu(d, v)
+            vcpu_switch_to_aarch64_mode(v);
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 5d18bb0..69c2854 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -537,6 +537,11 @@ void vcpu_destroy(struct vcpu *v)
     free_xenheap_pages(v->arch.stack, STACK_ORDER);
 }
 
+void vcpu_switch_to_aarch64_mode(struct vcpu *v)
+{
+    v->arch.hcr_el2 |= HCR_RW;
+}
+
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config)
 {
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index de59e5f..3abacc0 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2148,6 +2148,10 @@ int construct_dom0(struct domain *d)
         return -EINVAL;
     }
     d->arch.type = kinfo.type;
+
+    if ( is_64bit_domain(d) )
+        vcpu_switch_to_aarch64_mode(v);
+
 #endif
 
     allocate_memory(d, &kinfo);
@@ -2240,6 +2244,9 @@ int construct_dom0(struct domain *d)
             printk("Failed to allocate dom0 vcpu %d on pcpu %d\n", i, cpu);
             break;
         }
+
+        if ( is_64bit_domain(d) )
+            vcpu_switch_to_aarch64_mode(d->vcpu[i]);
     }
 
     return 0;
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 83c4b7d..34d5776 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -137,11 +137,6 @@ void p2m_restore_state(struct vcpu *n)
     WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
     isb();
 
-    if ( is_32bit_domain(n->domain) )
-        n->arch.hcr_el2 &= ~HCR_RW;
-    else
-        n->arch.hcr_el2 |= HCR_RW;
-
     WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1);
     isb();
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7b1dacc..68185e2 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -268,6 +268,7 @@ struct arch_vcpu
 
 void vcpu_show_execution_state(struct vcpu *);
 void vcpu_show_registers(const struct vcpu *);
+void vcpu_switch_to_aarch64_mode(struct vcpu *);
 
 unsigned int domain_max_vcpus(const struct domain *);
 
-- 
2.7.4




* [PATCH v3 05/19] xen/arm: Save HCR_EL2 when a guest took the SError
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (3 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 04/19] xen/arm: Avoid setting/clearing HCR_RW at every context switch Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 13:07 ` [PATCH v3 06/19] xen/arm: Introduce a virtual abort injection helper Wei Chen
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

The HCR_EL2.VSE bit (HCR.VA for aarch32) can be used to generate a
virtual abort to the guest. The HCR_EL2.VSE bit has the peculiar
feature of getting cleared when the guest has taken the abort (it is
the only bit in the HCR_EL2 register that behaves this way).

This means that if we set the HCR_EL2.VSE bit to signal such an abort,
we must preserve it in the guest context until it disappears from
HCR_EL2, at which point it must be cleared from the context. This is
achieved by reading back from HCR_EL2 until the guest takes the fault.

If we preserved a pending VSE in the guest context, we have to restore
it to HCR_EL2 when context switching to this guest. This is achieved by
writing the saved HCR_EL2 value from the guest context back to the
HCR_EL2 register before returning to the guest, which was done by the
earlier patch "Set and restore HCR_EL2 register for each vCPU
separately".
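The preserve-until-cleared bookkeeping can be modelled in a few lines of
plain C. This is a sketch, not Xen code: the helper names are invented for
illustration, though HCR_VA's bit position matches the ARMv8 definition of
HCR_EL2.VSE:

```c
#include <assert.h>
#include <stdint.h>

#define HCR_VA (1u << 8)      /* HCR_EL2.VSE / HCR.VA */

static uint32_t hw_hcr;       /* stands in for the HCR_EL2 register */

/* Context switch in: restore the saved guest copy to the register. */
static void ctx_restore(const uint32_t *saved) { hw_hcr = *saved; }

/* Trap entry from the guest: if a virtual abort is still pending in
 * the saved copy, re-read the register so the copy drops the bit once
 * hardware has cleared it. */
static void enter_hyp(uint32_t *saved)
{
    if ( *saved & HCR_VA )
        *saved = hw_hcr;
}

/* Hardware clears VSE when the guest actually takes the abort. */
static void guest_takes_abort(void) { hw_hcr &= ~HCR_VA; }
```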

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/traps.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index e6d88f2..5f758e1 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2641,7 +2641,18 @@ static void do_trap_smc(struct cpu_user_regs *regs, const union hsr hsr)
 static void enter_hypervisor_head(struct cpu_user_regs *regs)
 {
     if ( guest_mode(regs) )
+    {
+        /*
+         * If we pended a virtual abort, preserve it until it gets cleared.
+         * See ARM ARM DDI 0487A.j D1.14.3 (Virtual Interrupts) for details,
+         * but the crucial bit is "On taking a vSError interrupt, HCR_EL2.VSE
+         * (alias of HCR.VA) is cleared to 0."
+         */
+        if ( current->arch.hcr_el2 & HCR_VA )
+            current->arch.hcr_el2 = READ_SYSREG(HCR_EL2);
+
         gic_clear_lrs(current);
+    }
 }
 
 asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
-- 
2.7.4




* [PATCH v3 06/19] xen/arm: Introduce a virtual abort injection helper
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (4 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 05/19] xen/arm: Save HCR_EL2 when a guest took the SError Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 14:13   ` Julien Grall
  2017-03-31 13:07 ` [PATCH v3 07/19] xen/arm: Introduce a command line parameter for SErrors/Aborts Wei Chen
                   ` (12 subsequent siblings)
  18 siblings, 1 reply; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

When a guest triggers asynchronous aborts, on most platforms such
aborts will be routed to the hypervisor. But we don't want the
hypervisor to handle such aborts, so we have to route them back to
the guest.

This helper uses the HCR_EL2.VSE bit (HCR.VA for aarch32) to route
such aborts back to the guest. After updating the HCR_EL2.VSE bit in
the vCPU context, we write the value to HCR_EL2 immediately. In this
case we don't need to move the restoration of HCR_EL2 elsewhere, and
it works regardless of whether we get preempted.

If the guest PC had been advanced past SVC/HVC/SMC instructions before
we caught the SError in the hypervisor, we have to adjust the guest PC
back to the exact address at which the SError was generated.

About HSR_EC_SVC32/64: even though we don't trap SVC32/64 today, we
would like them to be handled here. This will be useful when VM
introspection gains support for SVC32/64 trapping.

This helper will be used by later patches in this series; we use #if 0
to disable it in this patch temporarily, to avoid the compiler warning
about an unused function.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
v2->v3:
1. After updating HCR_EL2.VSE bit of vCPU HCR_EL2, write the value
   to HCR_EL2 immediately.
2. Explain in commit message why we write the value to HCR_EL2 after
   updating the HCR_EL2.VSE bit in vCPU context.
3. Add Stefano's Reviewed-by tag.
---
 xen/arch/arm/traps.c            | 33 +++++++++++++++++++++++++++++++++
 xen/include/asm-arm/processor.h |  1 +
 2 files changed, 34 insertions(+)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 5f758e1..b15923a 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -618,6 +618,39 @@ static void inject_dabt_exception(struct cpu_user_regs *regs,
 #endif
 }
 
+#if 0
+/* Inject a virtual Abort/SError into the guest. */
+static void inject_vabt_exception(struct cpu_user_regs *regs)
+{
+    const union hsr hsr = { .bits = regs->hsr };
+
+    /*
+     * SVC/HVC/SMC already have an adjusted PC (See ARM ARM DDI 0487A.j
+     * D1.10.1 for more details), which we need to correct in order to
+     * return to after having injected the SError.
+     */
+    switch ( hsr.ec )
+    {
+    case HSR_EC_SVC32:
+    case HSR_EC_HVC32:
+    case HSR_EC_SMC32:
+#ifdef CONFIG_ARM_64
+    case HSR_EC_SVC64:
+    case HSR_EC_HVC64:
+    case HSR_EC_SMC64:
+#endif
+        regs->pc -= hsr.len ? 4 : 2;
+        break;
+
+    default:
+        break;
+    }
+
+    current->arch.hcr_el2 |= HCR_VA;
+    WRITE_SYSREG(current->arch.hcr_el2, HCR_EL2);
+}
+#endif
+
 struct reg_ctxt {
     /* Guest-side state */
     uint32_t sctlr_el1;
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 4b6338b..d7b0711 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -252,6 +252,7 @@
 #define HSR_EC_HVC32                0x12
 #define HSR_EC_SMC32                0x13
 #ifdef CONFIG_ARM_64
+#define HSR_EC_SVC64                0x15
 #define HSR_EC_HVC64                0x16
 #define HSR_EC_SMC64                0x17
 #define HSR_EC_SYSREG               0x18
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v3 07/19] xen/arm: Introduce a command line parameter for SErrors/Aborts
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (5 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 06/19] xen/arm: Introduce a virtual abort injection helper Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 13:07 ` [PATCH v3 08/19] xen/arm: Introduce an initcall to update cpu_hwcaps by serror_op Wei Chen
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

In order to distinguish guest-generated SErrors from hypervisor-generated
SErrors, we have to place SError checking code on every EL1 <-> EL2 path.
That will cause overhead on entries and exits due to dsb/isb.

However, not all platforms want to categorize SErrors. For example, on a
host that runs only trusted guests, the administrator can confirm that
none of the guests will trigger such SErrors. For this use-case, we
should provide some options to administrators to avoid categorizing
SErrors and thereby reduce the overhead of dsb/isb.

We provide the following 3 options for administrators to determine how
the hypervisor handles SErrors:

* `diverse`:
  The hypervisor will distinguish guest SErrors from hypervisor SErrors.
  Guest-generated SErrors will be forwarded to guests, while
  hypervisor-generated SErrors will cause the whole system to crash.
  It requires:
  1. dsb/isb on all EL1 -> EL2 trap entries to categorize SErrors
     correctly.
  2. dsb/isb on EL2 -> EL1 return paths to prevent slipping hypervisor
     SErrors to guests.
  3. dsb/isb in context switch to isolate SErrors between 2 vCPUs.

* `forward`:
  The hypervisor will not distinguish guest SErrors from hypervisor
  SErrors. All SErrors will be forwarded to guests, except the SErrors
  generated while the idle vCPU is running. The idle domain doesn't have
  the ability to handle SErrors, so we have to crash the whole system when
  we get an SError with the idle vCPU. This option avoids most of the
  dsb/isb overhead, except for the dsb/isb in the context switch, which is
  used to isolate SErrors between 2 vCPUs.

* `panic`:
  The hypervisor will not distinguish guest SErrors from hypervisor SErrors.
  All SErrors will crash the whole system. This option will avoid all
  overhead of the dsb/isb pairs.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
v2->v3:
1. Replace "entries" with "entries and exits" in the commit message and doc,
   because all options take effect on entries and exits.
2. Fix a typo in commit message.
---
 docs/misc/xen-command-line.markdown | 44 +++++++++++++++++++++++++++++++++++++
 xen/arch/arm/traps.c                | 19 ++++++++++++++++
 2 files changed, 63 insertions(+)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 9eb85d6..9d42b6a 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1470,6 +1470,50 @@ enabling more sockets and cores to go into deeper sleep states.
 
 Set the serial transmit buffer size.
 
+### serrors (ARM)
+> `= diverse | forward | panic`
+
+> Default: `diverse`
+
+This parameter is provided to administrators to determine how the
+hypervisor handles SErrors.
+
+In order to distinguish guest-generated SErrors from hypervisor-generated
+SErrors, we have to place SError checking code on every EL1 <-> EL2 path.
+That will cause overhead on entries and exits due to dsb/isb. However, not all
+platforms need to categorize SErrors. For example, on a host that runs only
+trusted guests, the administrator can confirm that none of the guests will
+trigger such SErrors. In this case, the administrator can
+use this parameter to skip categorizing SErrors and reduce the overhead of
+dsb/isb.
+
+We provide the following 3 options for administrators to determine how the
+hypervisor handles SErrors:
+
+* `diverse`:
+  The hypervisor will distinguish guest SErrors from hypervisor SErrors.
+  Guest-generated SErrors will be forwarded to guests, while
+  hypervisor-generated SErrors will cause the whole system to crash.
+  It requires:
+  1. dsb/isb on all EL1 -> EL2 trap entries to categorize SErrors correctly.
+  2. dsb/isb on EL2 -> EL1 return paths to prevent slipping hypervisor
+     SErrors to guests.
+  3. dsb/isb in context switch to isolate SErrors between 2 vCPUs.
+
+* `forward`:
+  The hypervisor will not distinguish guest SErrors from hypervisor SErrors.
+  All SErrors will be forwarded to guests, except the SErrors generated when
+  the idle vCPU is running. The idle domain doesn't have the ability to handle
+  SErrors, so we have to crash the whole system when we get SErrors with the
+  idle vCPU. This option will avoid most overhead of the dsb/isb, except the
+  dsb/isb in context switch which is used to isolate the SErrors between 2
+  vCPUs.
+
+* `panic`:
+  The hypervisor will not distinguish guest SErrors from hypervisor SErrors.
+  All SErrors will crash the whole system. This option will avoid all overhead
+  of the dsb/isb pairs.
+
 ### smap
 > `= <boolean> | hvm`
 
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b15923a..76cda59 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -122,6 +122,25 @@ register_t get_default_hcr_flags(void)
              HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB);
 }
 
+static enum {
+    SERRORS_DIVERSE,
+    SERRORS_FORWARD,
+    SERRORS_PANIC,
+} serrors_op;
+
+static void __init parse_serrors_behavior(const char *str)
+{
+    if ( !strcmp(str, "forward") )
+        serrors_op = SERRORS_FORWARD;
+    else if ( !strcmp(str, "panic") )
+        serrors_op = SERRORS_PANIC;
+    else
+        serrors_op = SERRORS_DIVERSE;
+
+    return;
+}
+custom_param("serrors", parse_serrors_behavior);
+
 void init_traps(void)
 {
     /* Setup Hyp vector base */
-- 
2.7.4



* [PATCH v3 08/19] xen/arm: Introduce an initcall to update cpu_hwcaps by serror_op
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (6 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 07/19] xen/arm: Introduce a command line parameter for SErrors/Aborts Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 14:48   ` Julien Grall
  2017-03-31 13:07 ` [PATCH v3 09/19] xen/arm64: Use alternative to skip the check of pending serrors Wei Chen
                   ` (10 subsequent siblings)
  18 siblings, 1 reply; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

In the later patches of this series, we want to use the alternative
patching framework to avoid checking serror_op on every entry. So we
define a new cpu feature "SKIP_CHECK_PENDING_VSERROR" for serror_op.
When serror_op is not equal to SERRORS_DIVERSE, this feature will be
set in cpu_hwcaps.

Currently, the default serror_op is SERRORS_DIVERSE; if we want to
change the serror_op value, we have to pass the serrors parameter on
the command line. So it might seem fine to update cpu_hwcaps directly
in the serrors parameter parsing function.

While the default option is diverse today, this may change in the
future. So we introduce this initcall to guarantee that cpu_hwcaps is
updated whether or not the serrors parameter is passed on the command
line.
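The pattern this patch relies on — a boot-time initcall publishing the parsed option as a capability bit — can be sketched in userspace, assuming simplified stand-ins for Xen's cpu_hwcaps bitmap and cpus_set_cap():

```c
/*
 * Simplified model: cpu_hwcaps is a bitmap of capability numbers, and
 * the initcall sets SKIP_CHECK_PENDING_VSERROR whenever the configured
 * option is anything other than "diverse". The helper names mirror the
 * Xen ones, but the implementations here are illustrative only.
 */
enum serrors_mode { SERRORS_DIVERSE, SERRORS_FORWARD, SERRORS_PANIC };
enum { SKIP_CHECK_PENDING_VSERROR = 5 };

static unsigned long cpu_hwcaps;

static void cpus_set_cap(unsigned int num)
{
    cpu_hwcaps |= 1UL << num;
}

static int cpus_have_cap(unsigned int num)
{
    return !!(cpu_hwcaps & (1UL << num));
}

/* Runs once at boot, whether or not "serrors=" was on the command line. */
static int update_serrors_cpu_caps(enum serrors_mode serrors_op)
{
    if ( serrors_op != SERRORS_DIVERSE )
        cpus_set_cap(SKIP_CHECK_PENDING_VSERROR);
    return 0;
}
```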

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Acked-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v2->v3
1. Rewrite the commit message to make it easier to understand.
2. Add Julien's Acked-by tag and Stefano's Reviewed-by tag.
---
 xen/arch/arm/traps.c             | 9 +++++++++
 xen/include/asm-arm/cpufeature.h | 3 ++-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 76cda59..9d4ee39 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -141,6 +141,15 @@ static void __init parse_serrors_behavior(const char *str)
 }
 custom_param("serrors", parse_serrors_behavior);
 
+static int __init update_serrors_cpu_caps(void)
+{
+    if ( serrors_op != SERRORS_DIVERSE )
+        cpus_set_cap(SKIP_CHECK_PENDING_VSERROR);
+
+    return 0;
+}
+__initcall(update_serrors_cpu_caps);
+
 void init_traps(void)
 {
     /* Setup Hyp vector base */
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index c0a25ae..ec3f9e5 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -40,8 +40,9 @@
 #define ARM32_WORKAROUND_766422 2
 #define ARM64_WORKAROUND_834220 3
 #define LIVEPATCH_FEATURE   4
+#define SKIP_CHECK_PENDING_VSERROR 5
 
-#define ARM_NCAPS           5
+#define ARM_NCAPS           6
 
 #ifndef __ASSEMBLY__
 
-- 
2.7.4



* [PATCH v3 09/19] xen/arm64: Use alternative to skip the check of pending serrors
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (7 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 08/19] xen/arm: Introduce an initcall to update cpu_hwcaps by serror_op Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 13:07 ` [PATCH v3 10/19] xen/arm32: " Wei Chen
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

We have provided an option to administrators to determine how to
handle SErrors. To decide whether the check of a pending SError can
be skipped, the conventional way is to read the option every time
before checking for the pending SError. That adds the overhead of
checking the option at every trap.

ARM64 supports the alternative runtime patching feature. We can use an
ALTERNATIVE to avoid checking the option at every trap. We added a new
cpufeature named "SKIP_CHECK_PENDING_VSERROR". This feature will be
enabled when the option is not diverse.
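Conceptually, the ALTERNATIVE turns a per-trap decision into a one-time boot decision. C cannot patch instructions, but the effect can be modelled with a function pointer selected once when the "alternatives" are applied (an illustrative sketch, not the actual patching mechanism):

```c
#include <stdbool.h>

/* Model: pretend a vSError is always pending, so the two paths differ. */
static bool check_pending_vserror(void)
{
    return true;
}

/* Patched-in replacement: the check is skipped entirely. */
static bool skip_pending_vserror_check(void)
{
    return false;
}

static bool (*vserror_hook)(void) = check_pending_vserror;

/*
 * Applied once at boot, mirroring how the alternative is patched based
 * on the SKIP_CHECK_PENDING_VSERROR capability; afterwards no per-trap
 * option read is needed.
 */
static void apply_alternatives(bool skip_cap_set)
{
    vserror_hook = skip_cap_set ? skip_pending_vserror_check
                                : check_pending_vserror;
}
```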

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/arm64/entry.S | 41 +++++++++++++++++++++++++----------------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index 02802c0..4baa3cb 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -1,5 +1,6 @@
 #include <asm/asm_defns.h>
 #include <asm/regs.h>
+#include <asm/alternative.h>
 #include <public/xen.h>
 
 /*
@@ -229,12 +230,14 @@ hyp_irq:
 
 guest_sync:
         entry   hyp=0, compat=0
-        bl      check_pending_vserror
         /*
-         * If x0 is Non-zero, a vSError took place, the initial exception
-         * doesn't have any significance to be handled. Exit ASAP
+         * The vSError will only be checked when SKIP_CHECK_PENDING_VSERROR
+         * is not set. If a vSError took place, the initial exception will be
+         * skipped. Exit ASAP.
          */
-        cbnz    x0, 1f
+        ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f",
+                    "nop; nop",
+                    SKIP_CHECK_PENDING_VSERROR)
         msr     daifclr, #2
         mov     x0, sp
         bl      do_trap_hypervisor
@@ -243,12 +246,14 @@ guest_sync:
 
 guest_irq:
         entry   hyp=0, compat=0
-        bl      check_pending_vserror
         /*
-         * If x0 is Non-zero, a vSError took place, the initial exception
-         * doesn't have any significance to be handled. Exit ASAP
+         * The vSError will only be checked when SKIP_CHECK_PENDING_VSERROR
+         * is not set. If a vSError took place, the initial exception will be
+         * skipped. Exit ASAP.
          */
-        cbnz    x0, 1f
+        ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f",
+                    "nop; nop",
+                    SKIP_CHECK_PENDING_VSERROR)
         mov     x0, sp
         bl      do_trap_irq
 1:
@@ -267,12 +272,14 @@ guest_error:
 
 guest_sync_compat:
         entry   hyp=0, compat=1
-        bl      check_pending_vserror
         /*
-         * If x0 is Non-zero, a vSError took place, the initial exception
-         * doesn't have any significance to be handled. Exit ASAP
+         * The vSError will only be checked when SKIP_CHECK_PENDING_VSERROR
+         * is not set. If a vSError took place, the initial exception will be
+         * skipped. Exit ASAP.
          */
-        cbnz    x0, 1f
+        ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f",
+                    "nop; nop",
+                    SKIP_CHECK_PENDING_VSERROR)
         msr     daifclr, #2
         mov     x0, sp
         bl      do_trap_hypervisor
@@ -281,12 +288,14 @@ guest_sync_compat:
 
 guest_irq_compat:
         entry   hyp=0, compat=1
-        bl      check_pending_vserror
         /*
-         * If x0 is Non-zero, a vSError took place, the initial exception
-         * doesn't have any significance to be handled. Exit ASAP
+         * The vSError will only be checked when SKIP_CHECK_PENDING_VSERROR
+         * is not set. If a vSError took place, the initial exception will be
+         * skipped. Exit ASAP.
          */
-        cbnz    x0, 1f
+        ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f",
+                    "nop; nop",
+                    SKIP_CHECK_PENDING_VSERROR)
         mov     x0, sp
         bl      do_trap_irq
 1:
-- 
2.7.4



* [PATCH v3 10/19] xen/arm32: Use alternative to skip the check of pending serrors
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (8 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 09/19] xen/arm64: Use alternative to skip the check of pending serrors Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 13:07 ` [PATCH v3 11/19] xen/arm: Move macro VABORT_GEN_BY_GUEST to common header Wei Chen
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

We have provided an option to administrators to determine how to
handle SErrors. To decide whether the check of a pending SError can
be skipped, the conventional way is to read the option every time
before checking for the pending SError. That adds the overhead of
checking the option at every trap.

ARM32 supports the alternative runtime patching feature. We can use an
ALTERNATIVE to avoid checking the option at every trap. We added a new
cpufeature named "SKIP_CHECK_PENDING_VSERROR". This feature will be
enabled when the option is not diverse.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Reviewed-by: Julien Grall <julien.grall@arm.com>

---
Note:
This patch depends on the "xen/arm32: Introduce alternative runtime
patching" [1] which is still in review.

[1]https://lists.xenproject.org/archives/html/xen-devel/2017-03/msg04333.html
---
 xen/arch/arm/arm32/entry.S | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/xen/arch/arm/arm32/entry.S b/xen/arch/arm/arm32/entry.S
index 2187226..105cae8 100644
--- a/xen/arch/arm/arm32/entry.S
+++ b/xen/arch/arm/arm32/entry.S
@@ -1,5 +1,6 @@
 #include <asm/asm_defns.h>
 #include <asm/regs.h>
+#include <asm/alternative.h>
 #include <public/xen.h>
 
 #define SAVE_ONE_BANKED(reg)    mrs r11, reg; str r11, [sp, #UREGS_##reg]
@@ -44,6 +45,14 @@ save_guest_regs:
         SAVE_BANKED(fiq)
         SAVE_ONE_BANKED(R8_fiq); SAVE_ONE_BANKED(R9_fiq); SAVE_ONE_BANKED(R10_fiq)
         SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
+
+        /*
+         * If the SKIP_CHECK_PENDING_VSERROR cpu feature has been set,
+         * the check for pending SErrors will be skipped.
+         */
+        ALTERNATIVE("nop",
+                    "b skip_check",
+                    SKIP_CHECK_PENDING_VSERROR)
         /*
          * Start to check pending virtual abort in the gap of Guest -> HYP
          * world switch.
@@ -99,6 +108,7 @@ abort_guest_exit_end:
          */
         bne return_from_trap
 
+skip_check:
         mov pc, lr
 
 #define DEFINE_TRAP_ENTRY(trap)                                         \
-- 
2.7.4



* [PATCH v3 11/19] xen/arm: Move macro VABORT_GEN_BY_GUEST to common header
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (9 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 10/19] xen/arm32: " Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 13:07 ` [PATCH v3 12/19] xen/arm: Introduce new helpers to handle guest/hyp SErrors Wei Chen
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

We want to move part of the SError checking code from the hyp_error
assembly code into a function. This new function will use this macro to
distinguish guest SErrors from hypervisor SErrors. So we have to move
the macro to a common header.

The VABORT_GEN_BY_GUEST macro uses the symbols abort_guest_exit_start
and abort_guest_exit_end. After we move this macro to a common header,
we need to make sure that the two symbols are visible to other source
files. Currently, they are declared .global in arm32/entry.S, but not
arm64/entry.S. Fix that.
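The macro's semantics can be illustrated with a small C sketch: an SError is attributed to the guest exactly when the saved PC lands on one of the two labels that bracket the checking window. The addresses below are arbitrary stand-ins for the abort_guest_exit_start/abort_guest_exit_end symbols:

```c
/* Arbitrary stand-in addresses for the two assembly labels. */
#define ABORT_GUEST_EXIT_START 0x1000UL
#define ABORT_GUEST_EXIT_END   0x1008UL

struct cpu_user_regs {
    unsigned long pc;   /* the only field the macro looks at */
};

/* Function form of the VABORT_GEN_BY_GUEST(r) macro. */
static int vabort_gen_by_guest(const struct cpu_user_regs *r)
{
    return r->pc == ABORT_GUEST_EXIT_START ||
           r->pc == ABORT_GUEST_EXIT_END;
}
```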

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
v2->v3:
1. Modify the commit message as per Stefano's suggestion.
2. Add Stefano's Reviewed-by tag
---
 xen/arch/arm/arm64/entry.S            |  2 ++
 xen/include/asm-arm/arm32/processor.h | 10 ----------
 xen/include/asm-arm/processor.h       | 10 ++++++++++
 3 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index 4baa3cb..113e1c3 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -380,10 +380,12 @@ check_pending_vserror:
          * exception handler, and the elr_el2 will be set to
          * abort_guest_exit_start or abort_guest_exit_end.
          */
+        .global abort_guest_exit_start
 abort_guest_exit_start:
 
         isb
 
+        .global abort_guest_exit_end
 abort_guest_exit_end:
         /* Mask PSTATE asynchronous abort bit, close the checking window. */
         msr     daifset, #4
diff --git a/xen/include/asm-arm/arm32/processor.h b/xen/include/asm-arm/arm32/processor.h
index f6d5df3..68cc821 100644
--- a/xen/include/asm-arm/arm32/processor.h
+++ b/xen/include/asm-arm/arm32/processor.h
@@ -56,16 +56,6 @@ struct cpu_user_regs
     uint32_t pad1; /* Doubleword-align the user half of the frame */
 };
 
-/* Functions for pending virtual abort checking window. */
-void abort_guest_exit_start(void);
-void abort_guest_exit_end(void);
-
-#define VABORT_GEN_BY_GUEST(r)  \
-( \
-    ( (unsigned long)abort_guest_exit_start == (r)->pc ) || \
-    ( (unsigned long)abort_guest_exit_end == (r)->pc ) \
-)
-
 #endif
 
 /* Layout as used in assembly, with src/dest registers mixed in */
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index d7b0711..163c39c 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -711,6 +711,16 @@ void do_trap_guest_error(struct cpu_user_regs *regs);
 
 register_t get_default_hcr_flags(void);
 
+/* Functions for pending virtual abort checking window. */
+void abort_guest_exit_start(void);
+void abort_guest_exit_end(void);
+
+#define VABORT_GEN_BY_GUEST(r)  \
+( \
+    ( (unsigned long)abort_guest_exit_start == (r)->pc ) || \
+    ( (unsigned long)abort_guest_exit_end == (r)->pc ) \
+)
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_ARM_PROCESSOR_H */
 /*
-- 
2.7.4



* [PATCH v3 12/19] xen/arm: Introduce new helpers to handle guest/hyp SErrors
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (10 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 11/19] xen/arm: Move macro VABORT_GEN_BY_GUEST to common header Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 13:07 ` [PATCH v3 13/19] xen/arm: Replace do_trap_guest_serror with new helpers Wei Chen
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

Currently, ARM32 and ARM64 have different SError exception handlers.
These handlers include lots of code to check the SError handling
options and code to distinguish guest-generated SErrors from
hypervisor SErrors.

The new helpers, do_trap_guest_serror and do_trap_hyp_serror, are
wrappers around __do_trap_serror with constant guest/hyp parameters.
__do_trap_serror moves the option checking code and the SError checking
code from assembly into C. This makes the code more readable and avoids
placing checking code in too many places.

These two helpers only handle the following 3 types of SErrors:
1) Guest-generated SErrors that have been delivered in EL1 and then
   forwarded to EL2.
2) Guest-generated SErrors that hadn't been delivered in EL1 before
   trapping to EL2. Such an SError is caught in EL2 as soon as we
   unmask the PSTATE.A bit.
3) Hypervisor-generated native SErrors; those are a bug.

The new helpers use the function "inject_vabt_exception", which was
previously disabled by "#if 0". Now we can remove the "#if 0" to make
this function available.
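The routing performed by __do_trap_serror can be summarized as a pure decision function (a sketch: the SERRORS_* values mirror the patch, while the ACTION_* results are illustrative names for "inject a virtual abort" and "crash the system"):

```c
#include <stdbool.h>

enum serrors_mode  { SERRORS_DIVERSE, SERRORS_FORWARD, SERRORS_PANIC };
enum serror_action { ACTION_INJECT_VABT, ACTION_CRASH };

static enum serror_action route_serror(enum serrors_mode op,
                                       bool guest, bool idle_vcpu)
{
    /* Only "diverse" distinguishes guest SErrors from hypervisor ones. */
    if ( op == SERRORS_DIVERSE )
        return guest ? ACTION_INJECT_VABT : ACTION_CRASH;

    /*
     * "forward": everything goes to the guest, unless the idle vCPU
     * (which cannot handle SErrors) was running.
     */
    if ( op == SERRORS_FORWARD )
        return idle_vcpu ? ACTION_CRASH : ACTION_INJECT_VABT;

    /* "panic": every SError takes the whole system down. */
    return ACTION_CRASH;
}
```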

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/traps.c            | 69 +++++++++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/processor.h |  4 +++
 2 files changed, 71 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 9d4ee39..4b53b7e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -646,7 +646,6 @@ static void inject_dabt_exception(struct cpu_user_regs *regs,
 #endif
 }
 
-#if 0
 /* Inject a virtual Abort/SError into the guest. */
 static void inject_vabt_exception(struct cpu_user_regs *regs)
 {
@@ -677,7 +676,59 @@ static void inject_vabt_exception(struct cpu_user_regs *regs)
     current->arch.hcr_el2 |= HCR_VA;
     WRITE_SYSREG(current->arch.hcr_el2, HCR_EL2);
 }
-#endif
+
+/*
+ * SError exception handler. We only handle the following 3 types of SErrors:
+ * 1) Guest-generated SErrors that have been delivered in EL1 and then
+ *    forwarded to EL2.
+ * 2) Guest-generated SErrors that hadn't been delivered in EL1 before
+ *    trapping to EL2. Such an SError is caught in EL2 as soon as we
+ *    unmask the PSTATE.A bit.
+ * 3) Hypervisor-generated native SErrors; those are a bug.
+ *
+ * A true parameter "guest" means that the SError is type#1 or type#2.
+ */
+static void __do_trap_serror(struct cpu_user_regs *regs, bool guest)
+{
+    /*
+     * Only "DIVERSE" option needs to distinguish the guest-generated SErrors
+     * from hypervisor SErrors.
+     */
+    if ( serrors_op == SERRORS_DIVERSE )
+    {
+        /* Forward the type#1 and type#2 SErrors to guests. */
+        if ( guest )
+            return inject_vabt_exception(regs);
+
+        /* Type#3 SErrors will panic the whole system */
+        goto crash_system;
+    }
+
+    /*
+     * The "FORWARD" option will forward all SErrors to the guests, except
+     * idle domain generated SErrors.
+     */
+    if ( serrors_op == SERRORS_FORWARD )
+    {
+        /*
+         * Because the idle domain doesn't have the ability to handle
+         * SErrors, we have to crash the whole system when we get an SError
+         * generated by the idle domain.
+         */
+        if ( is_idle_vcpu(current) )
+            goto crash_system;
+
+        return inject_vabt_exception(regs);
+    }
+
+crash_system:
+    /* Three possibilities to crash the whole system:
+     * 1) "DIVERSE" option with Hypervisor generated SErrors.
+     * 2) "FORWARD" option with Idle Domain generated SErrors.
+     * 3) "PANIC" option with all SErrors.
+     */
+    do_unexpected_trap("SError", regs);
+}
 
 struct reg_ctxt {
     /* Guest-side state */
@@ -2864,6 +2915,20 @@ asmlinkage void do_trap_guest_error(struct cpu_user_regs *regs)
     domain_crash_synchronous();
 }
 
+asmlinkage void do_trap_hyp_serror(struct cpu_user_regs *regs)
+{
+    enter_hypervisor_head(regs);
+
+    __do_trap_serror(regs, VABORT_GEN_BY_GUEST(regs));
+}
+
+asmlinkage void do_trap_guest_serror(struct cpu_user_regs *regs)
+{
+    enter_hypervisor_head(regs);
+
+    __do_trap_serror(regs, true);
+}
+
 asmlinkage void do_trap_irq(struct cpu_user_regs *regs)
 {
     enter_hypervisor_head(regs);
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 163c39c..81227aa 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -709,6 +709,10 @@ int call_smc(register_t function_id, register_t arg0, register_t arg1,
 
 void do_trap_guest_error(struct cpu_user_regs *regs);
 
+void do_trap_hyp_serror(struct cpu_user_regs *regs);
+
+void do_trap_guest_serror(struct cpu_user_regs *regs);
+
 register_t get_default_hcr_flags(void);
 
 /* Functions for pending virtual abort checking window. */
-- 
2.7.4



* [PATCH v3 13/19] xen/arm: Replace do_trap_guest_serror with new helpers
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (11 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 12/19] xen/arm: Introduce new helpers to handle guest/hyp SErrors Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 13:07 ` [PATCH v3 14/19] xen/arm: Unmask the Abort/SError bit in the exception entries Wei Chen
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

We have introduced two helpers to handle guest/hyp SErrors:
do_trap_guest_serror and do_trap_hyp_serror. These helpers can take
over the role of do_trap_guest_error and reduce the assembly code at
the same time. So we use the two helpers to replace it, and drop it
now.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/arm32/traps.c      |  5 +----
 xen/arch/arm/arm64/entry.S      | 36 +++---------------------------------
 xen/arch/arm/traps.c            | 15 ---------------
 xen/include/asm-arm/processor.h |  2 --
 4 files changed, 4 insertions(+), 54 deletions(-)

diff --git a/xen/arch/arm/arm32/traps.c b/xen/arch/arm/arm32/traps.c
index 4176f0e..5bc5f64 100644
--- a/xen/arch/arm/arm32/traps.c
+++ b/xen/arch/arm/arm32/traps.c
@@ -62,10 +62,7 @@ asmlinkage void do_trap_prefetch_abort(struct cpu_user_regs *regs)
 
 asmlinkage void do_trap_data_abort(struct cpu_user_regs *regs)
 {
-    if ( VABORT_GEN_BY_GUEST(regs) )
-        do_trap_guest_error(regs);
-    else
-        do_unexpected_trap("Data Abort", regs);
+    do_trap_hyp_serror(regs);
 }
 
 /*
diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index 113e1c3..8d5a890 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -178,40 +178,10 @@ hyp_error_invalid:
         invalid BAD_ERROR
 
 hyp_error:
-        /*
-         * Only two possibilities:
-         * 1) Either we come from the exit path, having just unmasked
-         *    PSTATE.A: change the return code to an EL2 fault, and
-         *    carry on, as we're already in a sane state to handle it.
-         * 2) Or we come from anywhere else, and that's a bug: we panic.
-         */
         entry   hyp=1
         msr     daifclr, #2
-
-        /*
-         * The ELR_EL2 may be modified by an interrupt, so we have to use the
-         * saved value in cpu_user_regs to check whether we come from 1) or
-         * not.
-         */
-        ldr     x0, [sp, #UREGS_PC]
-        adr     x1, abort_guest_exit_start
-        cmp     x0, x1
-        adr     x1, abort_guest_exit_end
-        ccmp    x0, x1, #4, ne
         mov     x0, sp
-        mov     x1, #BAD_ERROR
-
-        /*
-         * Not equal, the exception come from 2). It's a bug, we have to
-         * panic the hypervisor.
-         */
-        b.ne    do_bad_mode
-
-        /*
-         * Otherwise, the exception come from 1). It happened because of
-         * the guest. Crash this guest.
-         */
-        bl      do_trap_guest_error
+        bl      do_trap_hyp_serror
         exit    hyp=1
 
 /* Traps taken in Current EL with SP_ELx */
@@ -267,7 +237,7 @@ guest_error:
         entry   hyp=0, compat=0
         msr     daifclr, #2
         mov     x0, sp
-        bl      do_trap_guest_error
+        bl      do_trap_guest_serror
         exit    hyp=0, compat=0
 
 guest_sync_compat:
@@ -309,7 +279,7 @@ guest_error_compat:
         entry   hyp=0, compat=1
         msr     daifclr, #2
         mov     x0, sp
-        bl      do_trap_guest_error
+        bl      do_trap_guest_serror
         exit    hyp=0, compat=1
 
 ENTRY(return_to_new_vcpu32)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 4b53b7e..1c5cfb8 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2900,21 +2900,6 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
     }
 }
 
-asmlinkage void do_trap_guest_error(struct cpu_user_regs *regs)
-{
-    enter_hypervisor_head(regs);
-
-    /*
-     * Currently, to ensure hypervisor safety, when we received a
-     * guest-generated vSerror/vAbort, we just crash the guest to protect
-     * the hypervisor. In future we can better handle this by injecting
-     * a vSerror/vAbort to the guest.
-     */
-    gdprintk(XENLOG_WARNING, "Guest(Dom-%u) will be crashed by vSError\n",
-             current->domain->domain_id);
-    domain_crash_synchronous();
-}
-
 asmlinkage void do_trap_hyp_serror(struct cpu_user_regs *regs)
 {
     enter_hypervisor_head(regs);
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 81227aa..bb24bee 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -707,8 +707,6 @@ void vcpu_regs_user_to_hyp(struct vcpu *vcpu,
 int call_smc(register_t function_id, register_t arg0, register_t arg1,
              register_t arg2);
 
-void do_trap_guest_error(struct cpu_user_regs *regs);
-
 void do_trap_hyp_serror(struct cpu_user_regs *regs);
 
 void do_trap_guest_serror(struct cpu_user_regs *regs);
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH v3 14/19] xen/arm: Unmask the Abort/SError bit in the exception entries
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (12 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 13/19] xen/arm: Replace do_trap_guest_serror with new helpers Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 13:07 ` [PATCH v3 15/19] xen/arm: Introduce a helper to check local abort is enabled Wei Chen
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

Currently, we mask the Abort/SError bit in Xen's exception entries,
so Xen cannot capture any Abort/SError while it is running. Now that
Xen has the ability to handle Aborts/SErrors, we should unmask the
Abort/SError bit by default to let Xen capture them while it is
running.

But in order to avoid receiving nested asynchronous aborts, we don't
unmask the Abort/SError bit in hyp_error and trap_data_abort.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/arm32/entry.S | 15 ++++++++++++++-
 xen/arch/arm/arm64/entry.S | 13 ++++++++-----
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/arm32/entry.S b/xen/arch/arm/arm32/entry.S
index 105cae8..592e4a8 100644
--- a/xen/arch/arm/arm32/entry.S
+++ b/xen/arch/arm/arm32/entry.S
@@ -116,6 +116,7 @@ skip_check:
 trap_##trap:                                                            \
         SAVE_ALL;                                                       \
         cpsie i;        /* local_irq_enable */                          \
+        cpsie a;        /* asynchronous abort enable */                 \
         adr lr, return_from_trap;                                       \
         mov r0, sp;                                                     \
         mov r11, sp;                                                    \
@@ -126,6 +127,18 @@ trap_##trap:                                                            \
         ALIGN;                                                          \
 trap_##trap:                                                            \
         SAVE_ALL;                                                       \
+        cpsie a;        /* asynchronous abort enable */                 \
+        adr lr, return_from_trap;                                       \
+        mov r0, sp;                                                     \
+        mov r11, sp;                                                    \
+        bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
+        b do_trap_##trap
+
+#define DEFINE_TRAP_ENTRY_NOABORT(trap)                                 \
+        ALIGN;                                                          \
+trap_##trap:                                                            \
+        SAVE_ALL;                                                       \
+        cpsie i;        /* local_irq_enable */                          \
         adr lr, return_from_trap;                                       \
         mov r0, sp;                                                     \
         mov r11, sp;                                                    \
@@ -146,10 +159,10 @@ GLOBAL(hyp_traps_vector)
 DEFINE_TRAP_ENTRY(undefined_instruction)
 DEFINE_TRAP_ENTRY(supervisor_call)
 DEFINE_TRAP_ENTRY(prefetch_abort)
-DEFINE_TRAP_ENTRY(data_abort)
 DEFINE_TRAP_ENTRY(hypervisor)
 DEFINE_TRAP_ENTRY_NOIRQ(irq)
 DEFINE_TRAP_ENTRY_NOIRQ(fiq)
+DEFINE_TRAP_ENTRY_NOABORT(data_abort)
 
 return_from_trap:
         mov sp, r11
diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index 8d5a890..0401a41 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -187,13 +187,14 @@ hyp_error:
 /* Traps taken in Current EL with SP_ELx */
 hyp_sync:
         entry   hyp=1
-        msr     daifclr, #2
+        msr     daifclr, #6
         mov     x0, sp
         bl      do_trap_hypervisor
         exit    hyp=1
 
 hyp_irq:
         entry   hyp=1
+        msr     daifclr, #4
         mov     x0, sp
         bl      do_trap_irq
         exit    hyp=1
@@ -208,7 +209,7 @@ guest_sync:
         ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f",
                     "nop; nop",
                     SKIP_CHECK_PENDING_VSERROR)
-        msr     daifclr, #2
+        msr     daifclr, #6
         mov     x0, sp
         bl      do_trap_hypervisor
 1:
@@ -224,6 +225,7 @@ guest_irq:
         ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f",
                     "nop; nop",
                     SKIP_CHECK_PENDING_VSERROR)
+        msr     daifclr, #4
         mov     x0, sp
         bl      do_trap_irq
 1:
@@ -235,7 +237,7 @@ guest_fiq_invalid:
 
 guest_error:
         entry   hyp=0, compat=0
-        msr     daifclr, #2
+        msr     daifclr, #6
         mov     x0, sp
         bl      do_trap_guest_serror
         exit    hyp=0, compat=0
@@ -250,7 +252,7 @@ guest_sync_compat:
         ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f",
                     "nop; nop",
                     SKIP_CHECK_PENDING_VSERROR)
-        msr     daifclr, #2
+        msr     daifclr, #6
         mov     x0, sp
         bl      do_trap_hypervisor
 1:
@@ -266,6 +268,7 @@ guest_irq_compat:
         ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f",
                     "nop; nop",
                     SKIP_CHECK_PENDING_VSERROR)
+        msr     daifclr, #4
         mov     x0, sp
         bl      do_trap_irq
 1:
@@ -277,7 +280,7 @@ guest_fiq_invalid_compat:
 
 guest_error_compat:
         entry   hyp=0, compat=1
-        msr     daifclr, #2
+        msr     daifclr, #6
         mov     x0, sp
         bl      do_trap_guest_serror
         exit    hyp=0, compat=1
-- 
2.7.4



* [PATCH v3 15/19] xen/arm: Introduce a helper to check local abort is enabled
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (13 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 14/19] xen/arm: Unmask the Abort/SError bit in the exception entries Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 14:25   ` Julien Grall
  2017-03-31 18:43   ` Stefano Stabellini
  2017-03-31 13:07 ` [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError Wei Chen
                   ` (3 subsequent siblings)
  18 siblings, 2 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

In the previous patch, we unmasked the Abort/SError bit for Xen
during most of its running time. So in some use cases, we have to
check whether aborts are enabled in the current context. For example,
when we want to synchronize SErrors, we have to confirm that aborts
are enabled; otherwise synchronizing SErrors is pointless.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
---
 xen/include/asm-arm/arm32/system.h | 7 +++++++
 xen/include/asm-arm/arm64/system.h | 7 +++++++
 2 files changed, 14 insertions(+)

diff --git a/xen/include/asm-arm/arm32/system.h b/xen/include/asm-arm/arm32/system.h
index c617b40..6c5b9f5 100644
--- a/xen/include/asm-arm/arm32/system.h
+++ b/xen/include/asm-arm/arm32/system.h
@@ -35,6 +35,13 @@ static inline int local_irq_is_enabled(void)
     return !(flags & PSR_IRQ_MASK);
 }
 
+static inline int local_abort_is_enabled(void)
+{
+    unsigned long flags;
+    local_save_flags(flags);
+    return !(flags & PSR_ABT_MASK);
+}
+
 #define local_fiq_enable()  __asm__("cpsie f   @ __stf\n" : : : "memory", "cc")
 #define local_fiq_disable() __asm__("cpsid f   @ __clf\n" : : : "memory", "cc")
 
diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
index 2e2ee21..309485f 100644
--- a/xen/include/asm-arm/arm64/system.h
+++ b/xen/include/asm-arm/arm64/system.h
@@ -58,6 +58,13 @@ static inline int local_fiq_is_enabled(void)
     return !(flags & PSR_FIQ_MASK);
 }
 
+static inline int local_abort_is_enabled(void)
+{
+    unsigned long flags;
+    local_save_flags(flags);
+    return !(flags & PSR_ABT_MASK);
+}
+
 #endif
 /*
  * Local variables:
-- 
2.7.4



* [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (14 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 15/19] xen/arm: Introduce a helper to check local abort is enabled Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 14:33   ` Julien Grall
  2017-03-31 18:36   ` Stefano Stabellini
  2017-03-31 13:07 ` [PATCH v3 17/19] xen/arm: Isolate the SError between the context switch of 2 vCPUs Wei Chen
                   ` (2 subsequent siblings)
  18 siblings, 2 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

In previous patches, we provided the ability to synchronize SErrors
in exception entries. But we haven't synchronized SErrors while
returning to a guest or doing a context switch.

So we still have two risks:
1. Slipping hypervisor SErrors to a guest. For example, the hypervisor
   triggers an SError while returning to a guest, but this SError may
   be delivered after entering the guest. With the "DIVERSE" option,
   this SError would be routed back to the guest and panic the guest,
   but we should actually crash the whole system because of this
   hypervisor SError.
2. Slipping a previous guest's SErrors to the next guest. With the
   "FORWARD" option, if the hypervisor triggers an SError while
   context switching, it may be delivered after switching to the next
   vCPU. In this case, the SError will be forwarded to the next vCPU
   and may panic the wrong guest.

So we have to introduce this macro to synchronize SErrors while
returning to a guest and while doing a context switch.

We also added a barrier to this macro to prevent the compiler from
reordering our asm volatile code.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
---
v2->v3:
1. Use a macro to replace function to synchronize SErrors.
2. Add a barrier to prevent the compiler from reordering our code.

Note:
I had followed Julien's suggestion to add the ASSERT into the macro,
but I found I always hit the ASSERT. When option == DIVERSE, the
SKIP_CHECK_PENDING_VSERROR feature is not set in cpu_hwcaps, and in a
later patch I will use this feature to skip synchronizing SErrors
before returning to a guest. So cpus_have_cap(SKIP_CHECK_PENDING_VSERROR)
is always false, and we hit the ASSERT.

And about the local_abort enabled check: should we disable aborts
before synchronizing SErrors while returning to a guest or doing a
context switch, just as we disable IRQs in these two places?

For this test, I applied this series on top of the latest staging tree.

...
(XEN) Command line: console=dtuart dtuart=serial0 conswitch=x loglvl=all dom0_mem=8G dom0_max_vcpus=8 serrors=diverse
(XEN) Placing Xen at 0x00000083fee00000-0x00000083ff000000
...
(XEN) ----SYNCHRONIZE_SERROR ASSERT 0 1
(XEN) Assertion 'cpus_have_cap(5) && local_abort_is_enabled()' failed at traps.c:2954
(XEN) ----[ Xen-4.9-unstable  arm64  debug=y   Not tainted ]----
(XEN) CPU:    0
(XEN) PC:     000000000025c09c leave_hypervisor_tail+0xa8/0x100
(XEN) LR:     000000000025c078
(XEN) SP:     00008003fac07e80
(XEN) CPSR:   800002c9 MODE:64-bit EL2h (Hypervisor, handler)
(XEN)      X0: 000000000000005a  X1: 00000000ffffffff  X2: 0000000000000000
(XEN)      X3: 0000000000000000  X4: 0000000000000010  X5: 0000000000000000
(XEN)      X6: 00008003ffe50000  X7: 0000000000000001  X8: 00000000fffffffd
(XEN)      X9: 000000000000000a X10: 0000000000000031 X11: 00008003fac07bf8
(XEN)     X12: 0000000000000001 X13: 000000000026c370 X14: 0000000000000020
(XEN)     X15: 0000000000000000 X16: 00000083fff42fc0 X17: 00000000fffffffe
(XEN)     X18: 0000000000000000 X19: 0000000000292c58 X20: 0000000000290028
(XEN)     X21: 00000000002ea000 X22: 0000000000000000 X23: 0000000000000000
(XEN)     X24: 0000000000000000 X25: 0000000000000000 X26: 0000000000000000
(XEN)     X27: 0000000000000000 X28: 0000000000000000  FP: 00008003fac07e80
(XEN)
(XEN)   VTCR_EL2: 80043594
(XEN)  VTTBR_EL2: 00010083fd036000
(XEN)
(XEN)  SCTLR_EL2: 30cd183d
(XEN)    HCR_EL2: 000000008038663f
(XEN)  TTBR0_EL2: 00000083fef0e000
(XEN)
(XEN)    ESR_EL2: f2000001
(XEN)  HPFAR_EL2: 0000000000000000
(XEN)    FAR_EL2: 0000000000000000
(XEN)
(XEN) Xen stack trace from sp=00008003fac07e80:
(XEN)    00008003fac07ea0 0000000000262934 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000249cac 0000008048000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000008040080000 00000000000001c5
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<000000000025c09c>] leave_hypervisor_tail+0xa8/0x100 (PC)
(XEN)    [<000000000025c078>] leave_hypervisor_tail+0x84/0x100 (LR)
(XEN)    [<0000000000262934>] return_to_new_vcpu64+0x4/0x30
(XEN)    [<0000000000249cac>] domain.c#continue_new_vcpu+0/0xa4
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'cpus_have_cap(5) && local_abort_is_enabled()' failed at traps.c:2954
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
---
 xen/include/asm-arm/processor.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index bb24bee..a787d1b 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -723,6 +723,17 @@ void abort_guest_exit_end(void);
     ( (unsigned long)abort_guest_exit_end == (r)->pc ) \
 )
 
+/*
+ * Synchronize SError unless the feature is selected.
+ * This is relying on the SErrors are currently unmasked.
+ */
+#define SYNCHRONIZE_SERROR(feat)                                 \
+    do {                                                         \
+        asm volatile(ALTERNATIVE("dsb sy; isb",                  \
+                                 "nop; nop", feat)               \
+                                 : : : "memory");                \
+    } while (0)
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_ARM_PROCESSOR_H */
 /*
-- 
2.7.4



* [PATCH v3 17/19] xen/arm: Isolate the SError between the context switch of 2 vCPUs
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (15 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 14:38   ` Julien Grall
  2017-03-31 13:07 ` [PATCH v3 18/19] xen/arm: Prevent slipping hypervisor SError to guest Wei Chen
  2017-03-31 13:07 ` [PATCH v3 19/19] xen/arm: Handle guest external abort as guest SError Wei Chen
  18 siblings, 1 reply; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

If there is a pending SError while we are doing a context switch and
the SError handling option is "FORWARD", we have to guarantee that
this SError is caught by the current vCPU; otherwise it will be caught
by the next vCPU and forwarded to that wrong vCPU.

So we have to synchronize SErrors before switching to the next vCPU.
But this is only required by the "FORWARD" option, so we add a new
flag, SKIP_CTXT_SWITCH_SERROR_SYNC, to cpu_hwcaps to skip synchronizing
SErrors in the context switch for the other options. In the meantime,
we don't need to export serror_op access to other source files.

Because we unmasked the Abort/SError bit in a previous patch, we have
to disable Abort/SError before doing the context switch, as we have
done for IRQs.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>

---
v2->v3:
1. Use the macro instead of the function to synchronize SErrors.
2. Disable Abort/SError before doing context switch.
---
 xen/arch/arm/domain.c            | 12 ++++++++++++
 xen/arch/arm/traps.c             |  3 +++
 xen/include/asm-arm/cpufeature.h |  3 ++-
 3 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 69c2854..e1a620a 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -29,6 +29,7 @@
 #include <asm/cpufeature.h>
 #include <asm/vfp.h>
 #include <asm/procinfo.h>
+#include <asm/alternative.h>
 
 #include <asm/gic.h>
 #include <asm/vgic.h>
@@ -312,6 +313,17 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 
     local_irq_disable();
 
+    /*
+     * If the SErrors option is "FORWARD", we have to prevent forwarding
+     * serror to wrong vCPU. So before context switch, we have to use the
+     * synchronize_serror to guarantee that the pending serror would be
+     * caught by current vCPU.
+     *
+     * The SKIP_CTXT_SWITCH_SERROR_SYNC will be set to cpu_hwcaps when the
+     * SErrors option is NOT "FORWARD".
+     */
+    SYNCHRONIZE_SERROR(SKIP_CTXT_SWITCH_SERROR_SYNC);
+
     set_current(next);
 
     prev = __context_switch(prev, next);
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 1c5cfb8..2c610c4 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -146,6 +146,9 @@ static int __init update_serrors_cpu_caps(void)
     if ( serrors_op != SERRORS_DIVERSE )
         cpus_set_cap(SKIP_CHECK_PENDING_VSERROR);
 
+    if ( serrors_op != SERRORS_FORWARD )
+        cpus_set_cap(SKIP_CTXT_SWITCH_SERROR_SYNC);
+
     return 0;
 }
 __initcall(update_serrors_cpu_caps);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index ec3f9e5..99ecb2b 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -41,8 +41,9 @@
 #define ARM64_WORKAROUND_834220 3
 #define LIVEPATCH_FEATURE   4
 #define SKIP_CHECK_PENDING_VSERROR 5
+#define SKIP_CTXT_SWITCH_SERROR_SYNC 6
 
-#define ARM_NCAPS           6
+#define ARM_NCAPS           7
 
 #ifndef __ASSEMBLY__
 
-- 
2.7.4



* [PATCH v3 18/19] xen/arm: Prevent slipping hypervisor SError to guest
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (16 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 17/19] xen/arm: Isolate the SError between the context switch of 2 vCPUs Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  2017-03-31 14:46   ` Julien Grall
  2017-03-31 13:07 ` [PATCH v3 19/19] xen/arm: Handle guest external abort as guest SError Wei Chen
  18 siblings, 1 reply; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

If there is a pending SError while we're returning from a trap and the
SError handling option is "DIVERSE", we have to prevent slipping this
hypervisor SError to the guest. So we use dsb/isb to guarantee that
the pending hypervisor SError is caught in the hypervisor before we
return to the guest.

In a previous patch, we set SKIP_CHECK_PENDING_VSERROR in cpu_hwcaps
when the option is NOT "DIVERSE". This means we can use the alternative
framework to skip synchronizing SErrors for the other SError handling
options.

Because we unmasked the Abort/SError bit in a previous patch, we have
to disable Abort/SError before returning to the guest, as we have done
for IRQs.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>

---
v2->v3:
1. Use an alternative instead of checking serror_op to skip synchronizing
   SErrors while the option is NOT "DIVERSE".
2. Disable Abort/SError before returning to guest.
---
 xen/arch/arm/traps.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 2c610c4..8f1a0cd 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2936,6 +2936,19 @@ asmlinkage void leave_hypervisor_tail(void)
         local_irq_disable();
         if (!softirq_pending(smp_processor_id())) {
             gic_inject();
+
+            /*
+             * If the SErrors handle option is "DIVERSE", we have to prevent
+             * slipping the hypervisor SError to guest. In this option, before
+             * returning from trap, we have to synchronize SErrors to guarantee
+             * that the pending SError would be caught in hypervisor.
+             *
+             * If option is NOT "DIVERSE", SKIP_CHECK_PENDING_VSERROR will be
+             * set to cpu_hwcaps. This means we can use the alternative to skip
+             * synchronizing SErrors for other SErrors handle options.
+             */
+            SYNCHRONIZE_SERROR(SKIP_CHECK_PENDING_VSERROR);
+
             return;
         }
         local_irq_enable();
-- 
2.7.4



* [PATCH v3 19/19] xen/arm: Handle guest external abort as guest SError
  2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
                   ` (17 preceding siblings ...)
  2017-03-31 13:07 ` [PATCH v3 18/19] xen/arm: Prevent slipping hypervisor SError to guest Wei Chen
@ 2017-03-31 13:07 ` Wei Chen
  18 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-03-31 13:07 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, wei.chen, steve.capper, Kaly.Xin, julien.grall, nd

Guest-generated external data/instruction aborts can be treated as
guest SErrors. We already have a handler for SErrors, so we can
reuse it to handle guest external aborts.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/traps.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 8f1a0cd..9e8719c 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2562,12 +2562,12 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
 
     /*
      * If this bit has been set, it means that this instruction abort is caused
-     * by a guest external abort. Currently we crash the guest to protect the
-     * hypervisor. In future one can better handle this by injecting a virtual
-     * abort to the guest.
+     * by a guest external abort. We can handle this instruction abort as guest
+     * SError.
      */
     if ( hsr.iabt.eat )
-        domain_crash_synchronous();
+        return __do_trap_serror(regs, true);
+
 
     if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
         gpa = get_faulting_ipa(gva);
@@ -2665,12 +2665,10 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 
     /*
      * If this bit has been set, it means that this data abort is caused
-     * by a guest external abort. Currently we crash the guest to protect the
-     * hypervisor. In future one can better handle this by injecting a virtual
-     * abort to the guest.
+     * by a guest external abort. We treat this data abort as guest SError.
      */
     if ( dabt.eat )
-        domain_crash_synchronous();
+        return __do_trap_serror(regs, true);
 
     info.dabt = dabt;
 #ifdef CONFIG_ARM_32
-- 
2.7.4



* Re: [PATCH v3 01/19] xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome check
  2017-03-31 13:07 ` [PATCH v3 01/19] xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome check Wei Chen
@ 2017-03-31 14:08   ` Julien Grall
  2017-03-31 18:26   ` Stefano Stabellini
  1 sibling, 0 replies; 46+ messages in thread
From: Julien Grall @ 2017-03-31 14:08 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Wei,

On 31/03/17 14:07, Wei Chen wrote:
> Xen performs an exception syndrome check when some types of
> exception are taken in EL2. The syndrome check code reads the
> ESR_EL2 register directly, but in some situations this register may
> be overridden by a nested exception.
>
> For example, if we re-enable IRQ before reading ESR_EL2 which means
> Xen may enter in IRQ exception mode and return the processor with
> clobbered ESR_EL2 (See ARM ARM DDI 0487A.j D7.2.25)
>
> In this case the guest exception syndrome has been overridden, we will
> check the syndrome for guest sync exception with an incorrect ESR_EL2
> value. So we want to save ESR_EL2 to cpu_user_regs as soon as the
> exception takes place in EL2 to avoid using an incorrect syndrome value.
>
> In order to save ESR_EL2, we added a 32-bit member hsr to cpu_user_regs.
> But while saving registers in trap entry, we use stp to save ELR and
> CPSR at the same time through 64-bit general registers. If we keep this
> code, the hsr will be overridden by upper 32-bit of CPSR. So adjust the
> code to use str to save ELR in a separate instruction and use stp to
> save CPSR and HSR at the same time through 32-bit general registers.
> This change affects the registers restore in trap exit, we can't use the
> ldp to restore ELR and CPSR from stack at the same time. We have to use
> ldr to restore them separately.
>
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 02/19] xen/arm: Introduce a helper to get default HCR_EL2 flags
  2017-03-31 13:07 ` [PATCH v3 02/19] xen/arm: Introduce a helper to get default HCR_EL2 flags Wei Chen
@ 2017-03-31 14:10   ` Julien Grall
  2017-03-31 18:29   ` Stefano Stabellini
  1 sibling, 0 replies; 46+ messages in thread
From: Julien Grall @ 2017-03-31 14:10 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Wei,

On 31/03/17 14:07, Wei Chen wrote:
> We want to add HCR_EL2 register to Xen context switch. And each copy
> of HCR_EL2 in vcpu structure will be initialized with the same set
> of trap flags as the HCR_EL2 register. We introduce a helper here to
> represent these flags to be reused easily.
>
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 03/19] xen/arm: Set and restore HCR_EL2 register for each vCPU separately
  2017-03-31 13:07 ` [PATCH v3 03/19] xen/arm: Set and restore HCR_EL2 register for each vCPU separately Wei Chen
@ 2017-03-31 14:11   ` Julien Grall
  2017-03-31 18:28   ` Stefano Stabellini
  1 sibling, 0 replies; 46+ messages in thread
From: Julien Grall @ 2017-03-31 14:11 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Wei,

On 31/03/17 14:07, Wei Chen wrote:
> Different domains may have different HCR_EL2 flags. For example, the
> 64-bit domain needs HCR_RW flag but the 32-bit does not need it. So
> we give each domain a default HCR_EL2 value and save it in the vCPU's
> context.
>
> HCR_EL2 register has only one bit can be updated automatically without
> explicit write (HCR_VSE). But we haven't used this bit currently, so
> we can consider that the HCR_EL2 register will not be modified while
> the guest is running. So save the HCR_EL2 while guest exiting to
> hypervisor is not neccessary. We just have to restore this register for
> each vCPU while context switching.
>
> The p2m_restore_state which will be invoked in context switch progress
> has included the writing of HCR_EL2 already. It updates the HCR_EL2.RW
> bit to tell the hardware how to interpret the stage-1 page table as the
> encodings are different between AArch64 and AArch32. We can reuse this
> write to restore the HCR_EL2 for each vCPU. Of course, the value of each
> vCPU's HCR_EL2 should be adjusted to have proper HCR_EL2.RW bit in this
> function. In the later patch of this series, we will set the HCR_EL2.RW
> for each vCPU while the domain is creating.
>
> Signed-off-by: wei chen <Wei.Chen@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 06/19] xen/arm: Introduce a virtual abort injection helper
  2017-03-31 13:07 ` [PATCH v3 06/19] xen/arm: Introduce a virtual abort injection helper Wei Chen
@ 2017-03-31 14:13   ` Julien Grall
  0 siblings, 0 replies; 46+ messages in thread
From: Julien Grall @ 2017-03-31 14:13 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Wei,

On 31/03/17 14:07, Wei Chen wrote:
> When a guest triggers async aborts, on most platforms such aborts
> will be routed to the hypervisor. But we don't want the hypervisor
> to handle such aborts, so we have to route them back to the guest.
>
> This helper uses the HCR_EL2.VSE (HCR.VA for aarch32) bit to
> route such aborts back to the guest. After updating the HCR_EL2.VSE
> bit in the vCPU context, we write the value to HCR_EL2 immediately.
> In this case we don't need to move the restoration of HCR_EL2
> elsewhere, and it works regardless of whether we get preempted.
>
> If the guest PC had been advanced by SVC/HVC/SMC instructions before
> we caught the SError in the hypervisor, we have to adjust the guest PC
> back to the exact address at which the SError was generated.
>
> About HSR_EC_SVC32/64: even though we don't trap SVC32/64 today,
> we would like them to be handled here. This would be useful when
> VM introspection gains support for SVC32/64 trapping.
>
> After updating the HCR_EL2.VSE bit of the vCPU's HCR_EL2, we write
> the value to HCR_EL2 immediately. In this case we don't need to move
> the restoration of HCR_EL2 to leave_hypervisor_tail, and it works
> regardless of whether we get preempted.
>
> This helper will be used by later patches in this series; we use
> #if 0 to disable it in this patch temporarily to avoid the compiler's
> unused-function warning.
>
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

NIT: Reviewed-by implies Acked-by so you don't need to keep both :).

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 15/19] xen/arm: Introduce a helper to check local abort is enabled
  2017-03-31 13:07 ` [PATCH v3 15/19] xen/arm: Introduce a helper to check local abort is enabled Wei Chen
@ 2017-03-31 14:25   ` Julien Grall
  2017-03-31 18:43   ` Stefano Stabellini
  1 sibling, 0 replies; 46+ messages in thread
From: Julien Grall @ 2017-03-31 14:25 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Wei,

On 31/03/17 14:07, Wei Chen wrote:
> In a previous patch, we unmasked the Abort/SError bit for Xen
> during most of its running time. So in some use cases, we have to
> check whether aborts are enabled in the current context. For example,
> when we want to synchronize SErrors, we have to confirm that aborts
> are enabled; otherwise synchronizing SErrors is pointless.
>
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError
  2017-03-31 13:07 ` [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError Wei Chen
@ 2017-03-31 14:33   ` Julien Grall
  2017-04-05  7:14     ` Wei Chen
  2017-03-31 18:36   ` Stefano Stabellini
  1 sibling, 1 reply; 46+ messages in thread
From: Julien Grall @ 2017-03-31 14:33 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Wei,

On 31/03/17 14:07, Wei Chen wrote:
> In previous patches, we have provided the ability to synchronize
> SErrors on exception entries. But we haven't synchronized SErrors
> while returning to the guest or doing a context switch.
>
> So we still have two risks:
> 1. Slipping hypervisor SErrors to the guest. For example, the
>    hypervisor triggers an SError while returning to the guest, but
>    this SError may be delivered after entering the guest. With the
>    "DIVERSE" option, this SError would be routed back to the guest
>    and panic the guest. But actually, we should crash the whole
>    system due to this hypervisor SError.
> 2. Slipping a previous guest's SErrors to the next guest. With the
>    "FORWARD" option, if the hypervisor triggers an SError while
>    context switching, this SError may be delivered after switching to
>    the next vCPU. In this case, this SError will be forwarded to the
>    next vCPU and may panic an incorrect guest.
>
> So we have to introduce this macro to synchronize SErrors while
> returning to the guest and doing a context switch.
>
> We also added a barrier to this macro to prevent the compiler from
> reordering our asm volatile code.
>
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>
> ---
> v2->v3:
> 1. Use a macro instead of a function to synchronize SErrors.
> 2. Add a barrier to avoid the compiler reordering our code.
>
> Note:
> I followed Julien's suggestion to add the ASSERT into the macro,
> but I found I always hit the ASSERT. When option == DIVERSE,
> the SKIP_CHECK_PENDING_VSERROR feature is not set in cpu_hwcaps.
> And in a later patch, I will use this feature to skip synchronizing
> SErrors before returning to the guest.
> So cpus_has_cap(SKIP_CHECK_PENDING_VSERROR) will always be false,
> and we hit the ASSERT.
>
> And about the local_abort enabled check: should we disable aborts
> before synchronizing SErrors while returning to the guest or doing a
> context switch, just like we have disabled IRQs in these two places?
>
> For this testing, I have applied this series to the latest staging tree.

Because the ASSERT I suggested was wrong, sorry for that. It should have 
been:

ASSERT(!cpus_have_cap(feat) && local_abort_is_enabled());

This is because we want abort enabled when the "feature" is not present.

This series looks good so far, so I would be happy if you send a 
follow-up patch to add the ASSERT rather than modifying this patch.

For this patch:

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 17/19] xen/arm: Isolate the SError between the context switch of 2 vCPUs
  2017-03-31 13:07 ` [PATCH v3 17/19] xen/arm: Isolate the SError between the context switch of 2 vCPUs Wei Chen
@ 2017-03-31 14:38   ` Julien Grall
  2017-03-31 18:37     ` Stefano Stabellini
  0 siblings, 1 reply; 46+ messages in thread
From: Julien Grall @ 2017-03-31 14:38 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Wei,

On 31/03/17 14:07, Wei Chen wrote:
> If there is a pending SError while we are doing context switch, if the
> SError handle option is "FORWARD", We have to guranatee this serror to

NIT: s/guranatee/guarantee/

s/serror/Serror/

> be caught by current vCPU, otherwise it will be caught by next vCPU and
> be forwarded to this wrong vCPU.
>
> So we have to synchronize SErrors before switching to the next vCPU.
> But this is only required by the "FORWARD" option. Therefore we added
> a new flag, SKIP_CTXT_SWITCH_SERROR_SYNC, to cpu_hwcaps to skip
> synchronizing SErrors during context switch for the other options. In
> the meantime, we no longer need to export serror_op access to other
> source files.
>
> Because we unmasked the Abort/SError bit in a previous patch, we have
> to disable Abort/SError before doing the context switch, as we have
> done for IRQs.
>
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>
>
> ---
> v2->v3:
> 1. Use the macro instead of the function to synchronize SErrors.
> 2. Disable Abort/SError before doing context switch.
> ---
>  xen/arch/arm/domain.c            | 12 ++++++++++++
>  xen/arch/arm/traps.c             |  3 +++
>  xen/include/asm-arm/cpufeature.h |  3 ++-
>  3 files changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 69c2854..e1a620a 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -29,6 +29,7 @@
>  #include <asm/cpufeature.h>
>  #include <asm/vfp.h>
>  #include <asm/procinfo.h>
> +#include <asm/alternative.h>
>
>  #include <asm/gic.h>
>  #include <asm/vgic.h>
> @@ -312,6 +313,17 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>
>      local_irq_disable();
>
> +    /*
> +     * If the SErrors option is "FORWARD", we have to prevent forwarding

SErrors is the option. So it should be in lowercase.

> +     * serror to wrong vCPU. So before context switch, we have to use the

NIT : s/serror/Serror/

> +     * synchronize_serror to guarantee that the pending serror would be

s/synchronize_serror/macro SYNCRONIZE_SERROR/

With the grammar nits fixed:

Reviewed-by: Julien Grall <julien.grall@arm.com>

Unless there are more comments from Stefano, this should be fixable 
whilst committing.

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 18/19] xen/arm: Prevent slipping hypervisor SError to guest
  2017-03-31 13:07 ` [PATCH v3 18/19] xen/arm: Prevent slipping hypervisor SError to guest Wei Chen
@ 2017-03-31 14:46   ` Julien Grall
  2017-03-31 18:42     ` Stefano Stabellini
  0 siblings, 1 reply; 46+ messages in thread
From: Julien Grall @ 2017-03-31 14:46 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Wei,

On 31/03/17 14:07, Wei Chen wrote:
> If there is a pending SError while we're returning from a trap and
> the SError handling option is "DIVERSE", we have to prevent this
> hypervisor SError from slipping to the guest. So we have to use
> dsb/isb to guarantee that the pending hypervisor SError will be
> caught in the hypervisor before returning to the guest.
>
> In a previous patch, we set SKIP_CHECK_PENDING_VSERROR in cpu_hwcaps
> when the option is NOT "DIVERSE". This means we can use the
> alternative framework to skip synchronizing SErrors for the other
> SError handling options.
>
> Because we unmasked the Abort/SError bit in a previous patch, we have
> to disable Abort/SError before returning to the guest, as we have
> done for IRQs.
>
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>
>
> ---
> v2->v3:
> 1. Use an alternative instead of checking serror_op to skip synchronizing
>    SErrors when the option is NOT "DIVERSE".
> 2. Disable Abort/SError before returning to guest.
> ---
>  xen/arch/arm/traps.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 2c610c4..8f1a0cd 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2936,6 +2936,19 @@ asmlinkage void leave_hypervisor_tail(void)
>          local_irq_disable();
>          if (!softirq_pending(smp_processor_id())) {
>              gic_inject();
> +
> +            /*
> +             * If the SErrors handle option is "DIVERSE", we have to prevent
> +             * slipping the hypervisor SError to guest. In this option, before
> +             * returning from trap, we have to synchronize SErrors to guarantee
> +             * that the pending SError would be caught in hypervisor.
> +             *
> +             * If option is NOT "DIVERSE", SKIP_CHECK_PENDING_VSERROR will be
> +             * set to cpu_hwcaps. This means we can use the alternative to skip
> +             * synchronizing SErrors for other SErrors handle options.
> +             */
> +            SYNCHRONIZE_SERROR(SKIP_CHECK_PENDING_VSERROR);

The feature name SKIP_CHECK_PENDING_VSERROR does not make sense here 
because we are not synchronizing guest SError.

I am happy to re-use the same feature, but it would need to be renamed. 
Maybe SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT?

> +
>              return;
>          }
>          local_irq_enable();
>

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 08/19] xen/arm: Introduce a initcall to update cpu_hwcaps by serror_op
  2017-03-31 13:07 ` [PATCH v3 08/19] xen/arm: Introduce a initcall to update cpu_hwcaps by serror_op Wei Chen
@ 2017-03-31 14:48   ` Julien Grall
  2017-04-05  6:36     ` Wei Chen
  0 siblings, 1 reply; 46+ messages in thread
From: Julien Grall @ 2017-03-31 14:48 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Wei,

On 31/03/17 14:07, Wei Chen wrote:
> In the later patches of this series, we want to use the alternative
> patching framework to avoid checking serror_op on every entry.
> So we define a new cpu feature "SKIP_CHECK_PENDING_VSERROR" for
> serror_op. When serror_op is not equal to SERROR_DIVERSE, this
> feature will be set in cpu_hwcaps.
>
> Currently, the default serror_op is SERROR_DIVERSE; if we want to
> change the serror_op value we have to place the serrors parameter
> on the command line. There seems to be no problem with updating
> cpu_hwcaps directly in the serrors parameter parsing function.
>
> While the default option is diverse today, this may change in the
> future. So we introduce this initcall to guarantee that cpu_hwcaps
> gets updated whether or not the serrors parameter is present on the
> command line.
>
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>
> Acked-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> v2->v3
> 1. Rewrite the commit message to make it easier to understand.
> 2. Add Julien's Acked-by tag and Stefano's Reviewed-by tag.
> ---
>  xen/arch/arm/traps.c             | 9 +++++++++
>  xen/include/asm-arm/cpufeature.h | 3 ++-
>  2 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 76cda59..9d4ee39 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -141,6 +141,15 @@ static void __init parse_serrors_behavior(const char *str)
>  }
>  custom_param("serrors", parse_serrors_behavior);
>
> +static int __init update_serrors_cpu_caps(void)
> +{
> +    if ( serrors_op != SERRORS_DIVERSE )
> +        cpus_set_cap(SKIP_CHECK_PENDING_VSERROR);

I have some comment regarding the name of the feature (see patch #18).

Cheers,


-- 
Julien Grall


* Re: [PATCH v3 01/19] xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome check
  2017-03-31 13:07 ` [PATCH v3 01/19] xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome check Wei Chen
  2017-03-31 14:08   ` Julien Grall
@ 2017-03-31 18:26   ` Stefano Stabellini
  1 sibling, 0 replies; 46+ messages in thread
From: Stefano Stabellini @ 2017-03-31 18:26 UTC (permalink / raw)
  To: Wei Chen; +Cc: sstabellini, steve.capper, xen-devel, Kaly.Xin, julien.grall, nd

On Fri, 31 Mar 2017, Wei Chen wrote:
> Xen does an exception syndrome check when some types of exception
> take place in EL2. The syndrome check code reads the ESR_EL2 register
> directly, but in some situations this register may be overridden by a
> nested exception.
> 
> For example, if we re-enable IRQs before reading ESR_EL2, Xen may
> enter IRQ exception mode and return to the processor with a clobbered
> ESR_EL2 (see ARM ARM DDI 0487A.j D7.2.25).
> 
> In this case the guest exception syndrome has been overridden, and we
> would check the syndrome of a guest sync exception with an incorrect
> ESR_EL2 value. So we want to save ESR_EL2 to cpu_user_regs as soon as
> the exception takes place in EL2 to avoid using an incorrect syndrome
> value.
> 
> In order to save ESR_EL2, we added a 32-bit member hsr to cpu_user_regs.
> But while saving registers on trap entry, we used stp to save ELR and
> CPSR at the same time through 64-bit general registers. If we kept this
> code, hsr would be overwritten by the upper 32 bits of CPSR. So adjust
> the code to use str to save ELR in a separate instruction and use stp
> to save CPSR and HSR at the same time through 32-bit general registers.
> This change affects the register restore on trap exit: we can't use
> ldp to restore ELR and CPSR from the stack at the same time, so we
> have to use ldr to restore them separately.
> 
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Note:
> This patch is a bug fix, this bug affects the 4.8 and 4.7 source trees
> too.
> 
> v2->v3:
> 1. Add note to the commit message.
> 2. Read ESR_EL2 value from vCPU context instead of reading from register
>    directly in the places where use the ESR_EL2 value.
> ---
>  xen/arch/arm/arm32/asm-offsets.c      |  1 +
>  xen/arch/arm/arm32/entry.S            |  3 +++
>  xen/arch/arm/arm64/asm-offsets.c      |  1 +
>  xen/arch/arm/arm64/entry.S            | 13 +++++++++----
>  xen/arch/arm/arm64/traps.c            |  2 +-
>  xen/arch/arm/traps.c                  |  4 ++--
>  xen/include/asm-arm/arm32/processor.h |  2 +-
>  xen/include/asm-arm/arm64/processor.h |  3 +--
>  8 files changed, 19 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/asm-offsets.c b/xen/arch/arm/arm32/asm-offsets.c
> index f8e6b53..5b543ab 100644
> --- a/xen/arch/arm/arm32/asm-offsets.c
> +++ b/xen/arch/arm/arm32/asm-offsets.c
> @@ -26,6 +26,7 @@ void __dummy__(void)
>     OFFSET(UREGS_lr, struct cpu_user_regs, lr);
>     OFFSET(UREGS_pc, struct cpu_user_regs, pc);
>     OFFSET(UREGS_cpsr, struct cpu_user_regs, cpsr);
> +   OFFSET(UREGS_hsr, struct cpu_user_regs, hsr);
>  
>     OFFSET(UREGS_LR_usr, struct cpu_user_regs, lr_usr);
>     OFFSET(UREGS_SP_usr, struct cpu_user_regs, sp_usr);
> diff --git a/xen/arch/arm/arm32/entry.S b/xen/arch/arm/arm32/entry.S
> index 2a6f4f0..2187226 100644
> --- a/xen/arch/arm/arm32/entry.S
> +++ b/xen/arch/arm/arm32/entry.S
> @@ -23,6 +23,9 @@
>          add r11, sp, #UREGS_kernel_sizeof+4;                            \
>          str r11, [sp, #UREGS_sp];                                       \
>                                                                          \
> +        mrc CP32(r11, HSR);             /* Save exception syndrome */   \
> +        str r11, [sp, #UREGS_hsr];                                      \
> +                                                                        \
>          mrs r11, SPSR_hyp;                                              \
>          str r11, [sp, #UREGS_cpsr];                                     \
>          and r11, #PSR_MODE_MASK;                                        \
> diff --git a/xen/arch/arm/arm64/asm-offsets.c b/xen/arch/arm/arm64/asm-offsets.c
> index 69ea92a..ce24e44 100644
> --- a/xen/arch/arm/arm64/asm-offsets.c
> +++ b/xen/arch/arm/arm64/asm-offsets.c
> @@ -27,6 +27,7 @@ void __dummy__(void)
>     OFFSET(UREGS_SP, struct cpu_user_regs, sp);
>     OFFSET(UREGS_PC, struct cpu_user_regs, pc);
>     OFFSET(UREGS_CPSR, struct cpu_user_regs, cpsr);
> +   OFFSET(UREGS_ESR_el2, struct cpu_user_regs, hsr);
>  
>     OFFSET(UREGS_SPSR_el1, struct cpu_user_regs, spsr_el1);
>  
> diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
> index c181b5e..02802c0 100644
> --- a/xen/arch/arm/arm64/entry.S
> +++ b/xen/arch/arm/arm64/entry.S
> @@ -121,9 +121,13 @@ lr      .req    x30             // link register
>  
>          stp     lr, x21, [sp, #UREGS_LR]
>  
> -        mrs     x22, elr_el2
> -        mrs     x23, spsr_el2
> -        stp     x22, x23, [sp, #UREGS_PC]
> +        mrs     x21, elr_el2
> +        str     x21, [sp, #UREGS_PC]
> +
> +        add     x21, sp, #UREGS_CPSR
> +        mrs     x22, spsr_el2
> +        mrs     x23, esr_el2
> +        stp     w22, w23, [x21]
>  
>          .endm
>  
> @@ -307,7 +311,8 @@ ENTRY(return_to_new_vcpu64)
>  return_from_trap:
>          msr     daifset, #2 /* Mask interrupts */
>  
> -        ldp     x21, x22, [sp, #UREGS_PC]       // load ELR, SPSR
> +        ldr     x21, [sp, #UREGS_PC]            // load ELR
> +        ldr     w22, [sp, #UREGS_CPSR]          // load SPSR
>  
>          pop     x0, x1
>          pop     x2, x3
> diff --git a/xen/arch/arm/arm64/traps.c b/xen/arch/arm/arm64/traps.c
> index 8e89376..36b3a30 100644
> --- a/xen/arch/arm/arm64/traps.c
> +++ b/xen/arch/arm/arm64/traps.c
> @@ -32,7 +32,7 @@ static const char *handler[]= {
>  
>  asmlinkage void do_bad_mode(struct cpu_user_regs *regs, int reason)
>  {
> -    union hsr hsr = { .bits = READ_SYSREG32(ESR_EL2) };
> +    union hsr hsr = { .bits = regs->hsr };
>  
>      printk("Bad mode in %s handler detected\n", handler[reason]);
>      printk("ESR=0x%08"PRIx32":  EC=%"PRIx32", IL=%"PRIx32", ISS=%"PRIx32"\n",
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 614501f..6137272 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -843,7 +843,7 @@ static void _show_registers(struct cpu_user_regs *regs,
>      printk("   HCR_EL2: %016"PRIregister"\n", READ_SYSREG(HCR_EL2));
>      printk(" TTBR0_EL2: %016"PRIx64"\n", READ_SYSREG64(TTBR0_EL2));
>      printk("\n");
> -    printk("   ESR_EL2: %08"PRIx32"\n", READ_SYSREG32(ESR_EL2));
> +    printk("   ESR_EL2: %08"PRIx32"\n", regs->hsr);
>      printk(" HPFAR_EL2: %016"PRIregister"\n", READ_SYSREG(HPFAR_EL2));
>  
>  #ifdef CONFIG_ARM_32
> @@ -2641,7 +2641,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
>  
>  asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
>  {
> -    const union hsr hsr = { .bits = READ_SYSREG32(ESR_EL2) };
> +    const union hsr hsr = { .bits = regs->hsr };
>  
>      enter_hypervisor_head(regs);
>  
> diff --git a/xen/include/asm-arm/arm32/processor.h b/xen/include/asm-arm/arm32/processor.h
> index db3b17b..f6d5df3 100644
> --- a/xen/include/asm-arm/arm32/processor.h
> +++ b/xen/include/asm-arm/arm32/processor.h
> @@ -37,7 +37,7 @@ struct cpu_user_regs
>          uint32_t pc, pc32;
>      };
>      uint32_t cpsr; /* Return mode */
> -    uint32_t pad0; /* Doubleword-align the kernel half of the frame */
> +    uint32_t hsr;  /* Exception Syndrome */
>  
>      /* Outer guest frame only from here on... */
>  
> diff --git a/xen/include/asm-arm/arm64/processor.h b/xen/include/asm-arm/arm64/processor.h
> index b0726ff..24f836b 100644
> --- a/xen/include/asm-arm/arm64/processor.h
> +++ b/xen/include/asm-arm/arm64/processor.h
> @@ -66,8 +66,7 @@ struct cpu_user_regs
>      /* Return address and mode */
>      __DECL_REG(pc,           pc32);             /* ELR_EL2 */
>      uint32_t cpsr;                              /* SPSR_EL2 */
> -
> -    uint32_t pad0; /* Align end of kernel frame. */
> +    uint32_t hsr;                               /* ESR_EL2 */
>  
>      /* Outer guest frame only from here on... */
>  
> -- 
> 2.7.4
> 
> 


* Re: [PATCH v3 03/19] xen/arm: Set and restore HCR_EL2 register for each vCPU separately
  2017-03-31 13:07 ` [PATCH v3 03/19] xen/arm: Set and restore HCR_EL2 register for each vCPU separately Wei Chen
  2017-03-31 14:11   ` Julien Grall
@ 2017-03-31 18:28   ` Stefano Stabellini
  1 sibling, 0 replies; 46+ messages in thread
From: Stefano Stabellini @ 2017-03-31 18:28 UTC (permalink / raw)
  To: Wei Chen; +Cc: sstabellini, steve.capper, xen-devel, Kaly.Xin, julien.grall, nd

On Fri, 31 Mar 2017, Wei Chen wrote:
> Different domains may have different HCR_EL2 flags. For example, a
> 64-bit domain needs the HCR_RW flag but a 32-bit one does not. So
> we give each domain a default HCR_EL2 value and save it in the vCPU's
> context.
> 
> The HCR_EL2 register has only one bit that can be updated
> automatically without an explicit write (HCR_VSE). But we don't use
> this bit currently, so we can consider that the HCR_EL2 register will
> not be modified while the guest is running. Therefore, saving HCR_EL2
> when the guest exits to the hypervisor is not necessary; we just have
> to restore this register for each vCPU while context switching.
> 
> The p2m_restore_state function, which is invoked during the context
> switch, already includes the write of HCR_EL2. It updates the
> HCR_EL2.RW bit to tell the hardware how to interpret the stage-1 page
> table as the encodings are different between AArch64 and AArch32. We
> can reuse this write to restore the HCR_EL2 for each vCPU. Of course,
> the value of each vCPU's HCR_EL2 should be adjusted to have the proper
> HCR_EL2.RW bit in this function. In a later patch of this series, we
> will set the HCR_EL2.RW for each vCPU while the domain is being created.
> 
> Signed-off-by: wei chen <Wei.Chen@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
> v2->v3:
> 1. Squash #2 #3 patchs of v2 into this patch.
> 2. Remove HCR_EL2 initialization of idle vCPU, it would not be used.
> 3. Revert the change of vwfi.
> 4. Use the helper that introduced in previous patch to initialize
>    HCR_EL2 for each vCPU.
> 5. Restore the initialization of HCR_EL2 in init_traps.
> ---
>  xen/arch/arm/domain.c        | 2 ++
>  xen/arch/arm/p2m.c           | 9 +++------
>  xen/include/asm-arm/domain.h | 3 +++
>  3 files changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index bb327da..5d18bb0 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -513,6 +513,8 @@ int vcpu_initialise(struct vcpu *v)
>  
>      v->arch.actlr = READ_SYSREG32(ACTLR_EL1);
>  
> +    v->arch.hcr_el2 = get_default_hcr_flags();
> +
>      processor_vcpu_initialise(v);
>  
>      if ( (rc = vcpu_vgic_init(v)) != 0 )
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 6263760..83c4b7d 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -128,27 +128,24 @@ void p2m_save_state(struct vcpu *p)
>  
>  void p2m_restore_state(struct vcpu *n)
>  {
> -    register_t hcr;
>      struct p2m_domain *p2m = &n->domain->arch.p2m;
>      uint8_t *last_vcpu_ran;
>  
>      if ( is_idle_vcpu(n) )
>          return;
>  
> -    hcr = READ_SYSREG(HCR_EL2);
> -
>      WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>      isb();
>  
>      if ( is_32bit_domain(n->domain) )
> -        hcr &= ~HCR_RW;
> +        n->arch.hcr_el2 &= ~HCR_RW;
>      else
> -        hcr |= HCR_RW;
> +        n->arch.hcr_el2 |= HCR_RW;
>  
>      WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1);
>      isb();
>  
> -    WRITE_SYSREG(hcr, HCR_EL2);
> +    WRITE_SYSREG(n->arch.hcr_el2, HCR_EL2);
>      isb();
>  
>      last_vcpu_ran = &p2m->last_vcpu_ran[smp_processor_id()];
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 09fe502..7b1dacc 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -204,6 +204,9 @@ struct arch_vcpu
>      register_t tpidr_el1;
>      register_t tpidrro_el0;
>  
> +    /* HYP configuration */
> +    register_t hcr_el2;
> +
>      uint32_t teecr, teehbr; /* ThumbEE, 32-bit guests only */
>  #ifdef CONFIG_ARM_32
>      /*
> -- 
> 2.7.4
> 
> 


* Re: [PATCH v3 02/19] xen/arm: Introduce a helper to get default HCR_EL2 flags
  2017-03-31 13:07 ` [PATCH v3 02/19] xen/arm: Introduce a helper to get default HCR_EL2 flags Wei Chen
  2017-03-31 14:10   ` Julien Grall
@ 2017-03-31 18:29   ` Stefano Stabellini
  1 sibling, 0 replies; 46+ messages in thread
From: Stefano Stabellini @ 2017-03-31 18:29 UTC (permalink / raw)
  To: Wei Chen; +Cc: sstabellini, steve.capper, xen-devel, Kaly.Xin, julien.grall, nd

On Fri, 31 Mar 2017, Wei Chen wrote:
> We want to add the HCR_EL2 register to the Xen context switch. Each
> copy of HCR_EL2 in the vcpu structure will be initialized with the
> same set of trap flags as the HCR_EL2 register. We introduce a helper
> here to represent these flags so that they can be reused easily.
> 
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/traps.c            | 11 ++++++++---
>  xen/include/asm-arm/processor.h |  2 ++
>  2 files changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 6137272..e6d88f2 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -115,6 +115,13 @@ static void __init parse_vwfi(const char *s)
>  }
>  custom_param("vwfi", parse_vwfi);
>  
> +register_t get_default_hcr_flags(void)
> +{
> +    return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
> +             (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
> +             HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB);
> +}
> +
>  void init_traps(void)
>  {
>      /* Setup Hyp vector base */
> @@ -139,9 +146,7 @@ void init_traps(void)
>                   CPTR_EL2);
>  
>      /* Setup hypervisor traps */
> -    WRITE_SYSREG(HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
> -                 (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
> -                 HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB,HCR_EL2);
> +    WRITE_SYSREG(get_default_hcr_flags(), HCR_EL2);
>      isb();
>  }
>  
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index afc0e9a..4b6338b 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -708,6 +708,8 @@ int call_smc(register_t function_id, register_t arg0, register_t arg1,
>  
>  void do_trap_guest_error(struct cpu_user_regs *regs);
>  
> +register_t get_default_hcr_flags(void);
> +
>  #endif /* __ASSEMBLY__ */
>  #endif /* __ASM_ARM_PROCESSOR_H */
>  /*
> -- 
> 2.7.4
> 
> 


* Re: [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError
  2017-03-31 13:07 ` [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError Wei Chen
  2017-03-31 14:33   ` Julien Grall
@ 2017-03-31 18:36   ` Stefano Stabellini
  1 sibling, 0 replies; 46+ messages in thread
From: Stefano Stabellini @ 2017-03-31 18:36 UTC (permalink / raw)
  To: Wei Chen; +Cc: sstabellini, steve.capper, xen-devel, Kaly.Xin, julien.grall, nd

On Fri, 31 Mar 2017, Wei Chen wrote:
> In previous patches, we have provided the ability to synchronize
> SErrors on exception entries. But we haven't synchronized SErrors
> while returning to the guest or doing a context switch.
> 
> So we still have two risks:
> 1. Slipping hypervisor SErrors to the guest. For example, the
>    hypervisor triggers an SError while returning to the guest, but
>    this SError may be delivered after entering the guest. With the
>    "DIVERSE" option, this SError would be routed back to the guest
>    and panic the guest. But actually, we should crash the whole
>    system due to this hypervisor SError.
> 2. Slipping a previous guest's SErrors to the next guest. With the
>    "FORWARD" option, if the hypervisor triggers an SError while
>    context switching, this SError may be delivered after switching to
>    the next vCPU. In this case, this SError will be forwarded to the
>    next vCPU and may panic an incorrect guest.
> 
> So we have to introduce this macro to synchronize SErrors while
> returning to the guest and doing a context switch.
> 
> We also added a barrier to this macro to prevent the compiler from
> reordering our asm volatile code.
> 
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> v2->v3:
> 1. Use a macro to replace function to synchronize SErrors.
> 2. Add barrier to avoid compiler reorder our code.
> 
> Note:
> I had followed Julien's suggestion to add the Assert in to macro,
> But I found I always hit the Assert. Because when option == DIVERSE,
> the SKIP_CHECK_PENDING_VSERROR feature would not be set into cpu_hwcaps.
> And in the later patch, I will use this feature to skip synchronize
> SErrors before returning to guest.
> The cpus_has_cap(SKIP_CHECK_PENDING_VSERROR) will always false.
> And hit the ASSERT.
> 
> And about the local_abort enable check: should we disable the abort
> before synchronizing SErrors while returning to guest or doing context
> switch, just as we have disabled the IRQ in these two places?
> 
> For this testing, I have applied this series to the latest staging tree.
> 
> ...
> (XEN) Command line: console=dtuart dtuart=serial0 conswitch=x loglvl=all
> dom0_mem=8G dom0_max_vcpus=8 serrors=diverse
> (XEN) Placing Xen at 0x00000083fee00000-0x00000083ff000000
> ...
> (XEN) ----SYNCHRONIZE_SERROR ASSERT 0 1
> (XEN) Assertion 'cpus_have_cap(5) && local_abort_is_enabled()' failed at traps.c:2954
> (XEN) ----[ Xen-4.9-unstable  arm64  debug=y   Not tainted ]----
> (XEN) CPU:    0
> (XEN) PC:     000000000025c09c leave_hypervisor_tail+0xa8/0x100
> (XEN) LR:     000000000025c078
> (XEN) SP:     00008003fac07e80
> (XEN) CPSR:   800002c9 MODE:64-bit EL2h (Hypervisor, handler)
> (XEN)      X0: 000000000000005a  X1: 00000000ffffffff  X2: 0000000000000000
> (XEN)      X3: 0000000000000000  X4: 0000000000000010  X5: 0000000000000000
> (XEN)      X6: 00008003ffe50000  X7: 0000000000000001  X8: 00000000fffffffd
> (XEN)      X9: 000000000000000a X10: 0000000000000031 X11: 00008003fac07bf8
> (XEN)     X12: 0000000000000001 X13: 000000000026c370 X14: 0000000000000020
> (XEN)     X15: 0000000000000000 X16: 00000083fff42fc0 X17: 00000000fffffffe
> (XEN)     X18: 0000000000000000 X19: 0000000000292c58 X20: 0000000000290028
> (XEN)     X21: 00000000002ea000 X22: 0000000000000000 X23: 0000000000000000
> (XEN)     X24: 0000000000000000 X25: 0000000000000000 X26: 0000000000000000
> (XEN)     X27: 0000000000000000 X28: 0000000000000000  FP: 00008003fac07e80
> (XEN)
> (XEN)   VTCR_EL2: 80043594
> (XEN)  VTTBR_EL2: 00010083fd036000
> (XEN)
> (XEN)  SCTLR_EL2: 30cd183d
> (XEN)    HCR_EL2: 000000008038663f
> (XEN)  TTBR0_EL2: 00000083fef0e000
> (XEN)
> (XEN)    ESR_EL2: f2000001
> (XEN)  HPFAR_EL2: 0000000000000000
> (XEN)    FAR_EL2: 0000000000000000
> (XEN)
> (XEN) Xen stack trace from sp=00008003fac07e80:
> (XEN)    00008003fac07ea0 0000000000262934 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000249cac 0000008048000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000008040080000 00000000000001c5
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN) Xen call trace:
> (XEN)    [<000000000025c09c>] leave_hypervisor_tail+0xa8/0x100 (PC)
> (XEN)    [<000000000025c078>] leave_hypervisor_tail+0x84/0x100 (LR)
> (XEN)    [<0000000000262934>] return_to_new_vcpu64+0x4/0x30
> (XEN)    [<0000000000249cac>] domain.c#continue_new_vcpu+0/0xa4
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Assertion 'cpus_have_cap(5) && local_abort_is_enabled()' failed at traps.c:2954
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
> ---
>  xen/include/asm-arm/processor.h | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index bb24bee..a787d1b 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -723,6 +723,17 @@ void abort_guest_exit_end(void);
>      ( (unsigned long)abort_guest_exit_end == (r)->pc ) \
>  )
>  
> +/*
> + * Synchronize SError unless the feature is selected.
> + * This relies on SErrors being unmasked at this point.
> + */
> +#define SYNCHRONIZE_SERROR(feat)                                 \
> +    do {                                                         \
> +        asm volatile(ALTERNATIVE("dsb sy; isb",                  \
> +                                 "nop; nop", feat)               \
> +                                 : : : "memory");                \
> +    } while (0)
> +
>  #endif /* __ASSEMBLY__ */
>  #endif /* __ASM_ARM_PROCESSOR_H */
>  /*
> -- 
> 2.7.4
> 
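(Aside: for readers unfamiliar with Xen's alternative-patching framework, the observable behaviour of the SYNCHRONIZE_SERROR macro quoted above can be modelled in plain C. The real macro uses ALTERNATIVE() so the choice between the dsb/isb pair and NOPs is patched into the instruction stream at boot; the runtime-dispatch sketch below is only a model, and cpu_hwcaps/cpus_have_cap/dsb_isb are hypothetical stand-ins, not the real Xen definitions.)

```c
#include <stdbool.h>

/* Hypothetical stand-ins for Xen's cpu_hwcaps machinery. */
static bool cpu_hwcaps[8];

static bool cpus_have_cap(unsigned int feat)
{
    return feat < 8 && cpu_hwcaps[feat];
}

/* Counts how many times the expensive barrier pair would execute. */
static unsigned int barriers_executed;

static void dsb_isb(void) { barriers_executed++; }

/*
 * Model of SYNCHRONIZE_SERROR(feat): the barrier runs only when the
 * "skip" capability is NOT set. In real Xen the branch does not exist
 * at runtime -- the alternative framework patches in either the
 * "dsb sy; isb" pair or two NOPs at boot.
 */
#define SYNCHRONIZE_SERROR(feat)        \
    do {                                \
        if ( !cpus_have_cap(feat) )     \
            dsb_isb();                  \
    } while ( 0 )
```

With the capability clear (the "diverse" case) the barrier executes; once the capability is set, the same call site becomes a no-op.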

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v3 17/19] xen/arm: Isolate the SError between the context switch of 2 vCPUs
  2017-03-31 14:38   ` Julien Grall
@ 2017-03-31 18:37     ` Stefano Stabellini
  0 siblings, 0 replies; 46+ messages in thread
From: Stefano Stabellini @ 2017-03-31 18:37 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, Wei Chen, steve.capper, xen-devel, Kaly.Xin, nd

On Fri, 31 Mar 2017, Julien Grall wrote:
> Hi Wei,
> 
> On 31/03/17 14:07, Wei Chen wrote:
> > If there is a pending SError while we are doing context switch, if the
> > SError handle option is "FORWARD", We have to guranatee this serror to
> 
> NIT: s/guranatee/guarantee/
> 
> s/serror/Serror/
> 
> > be caught by current vCPU, otherwise it will be caught by next vCPU and
> > be forwarded to this wrong vCPU.
> > 
> > So we have to synchronize SErrors before switching to the next vCPU. But
> > this is only required by the "FORWARD" option. In this case we added a new
> > flag SKIP_CTXT_SWITCH_SERROR_SYNC in cpu_hwcaps to skip synchronizing
> > SErrors in context switch for other options. In the meantime, we no longer
> > need to export serror_op access to other source files.
> > 
> > Because we have unmasked the Abort/SError bit in a previous patch, we have
> > to disable Abort/SError before doing context switch as we have done for
> > IRQ.
> > 
> > Signed-off-by: Wei Chen <Wei.Chen@arm.com>
> > 
> > ---
> > v2->v3:
> > 1. Use the macro instead of the function to synchronize SErrors.
> > 2. Disable Abort/SError before doing context switch.
> > ---
> >  xen/arch/arm/domain.c            | 12 ++++++++++++
> >  xen/arch/arm/traps.c             |  3 +++
> >  xen/include/asm-arm/cpufeature.h |  3 ++-
> >  3 files changed, 17 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index 69c2854..e1a620a 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -29,6 +29,7 @@
> >  #include <asm/cpufeature.h>
> >  #include <asm/vfp.h>
> >  #include <asm/procinfo.h>
> > +#include <asm/alternative.h>
> > 
> >  #include <asm/gic.h>
> >  #include <asm/vgic.h>
> > @@ -312,6 +313,17 @@ void context_switch(struct vcpu *prev, struct vcpu
> > *next)
> > 
> >      local_irq_disable();
> > 
> > +    /*
> > +     * If the SErrors option is "FORWARD", we have to prevent forwarding
> 
> SErrors is the option. So it should be in lowercase.
> 
> > +     * serror to wrong vCPU. So before context switch, we have to use the
> 
> NIT : s/serror/Serror/
> 
> > +     * synchronize_serror to guarantee that the pending serror would be
> 
> s/synchronize_serror/macro SYNCRONIZE_SERROR/
> 
> With the grammar nits fixed:
> 
> Reviewed-by: Julien Grall <julien.grall@arm.com>
> 
> Unless that are more comments from Stefano, this should be fixable whilst
> committing.

Yes, it is true, but I prefer for Wei to resend the series with all the
Reviewed-by tags and the little remaining nits fixed. Less work for me :-)

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


* Re: [PATCH v3 18/19] xen/arm: Prevent slipping hypervisor SError to guest
  2017-03-31 14:46   ` Julien Grall
@ 2017-03-31 18:42     ` Stefano Stabellini
  2017-03-31 18:43       ` Julien Grall
  0 siblings, 1 reply; 46+ messages in thread
From: Stefano Stabellini @ 2017-03-31 18:42 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, Wei Chen, steve.capper, xen-devel, Kaly.Xin, nd

On Fri, 31 Mar 2017, Julien Grall wrote:
> Hi Wei,
> 
> On 31/03/17 14:07, Wei Chen wrote:
> > If there is a pending SError while we're returning from a trap and the
> > SError handling option is "DIVERSE", we have to prevent slipping this
> > hypervisor SError to guest. So we have to use the dsb/isb to guarantee
> > that the pending hypervisor SError would be caught in hypervisor before
> > return to guest.
> > 
> > In previous patch, we will set SKIP_CHECK_PENDING_VSERROR to cpu_hwcaps
> > when option is NOT "DIVERSE". This means we can use the alternative to
> > skip synchronizing SErrors for other SErrors handle options.
> > 
> > Because we have unmasked the Abort/SError bit in a previous patch, we have
> > to disable the Abort/SError before returning to guest as we have done
> > for IRQ.
> > 
> > Signed-off-by: Wei Chen <Wei.Chen@arm.com>
> > 
> > ---
> > v2->v3:
> > 1. Use alternative instead of checking serror_op to skip synchronizing SErrors
> >    while option is NOT "DIVERSE".
> > 2. Disable Abort/SError before returning to guest.
> > ---
> >  xen/arch/arm/traps.c | 13 +++++++++++++
> >  1 file changed, 13 insertions(+)
> > 
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index 2c610c4..8f1a0cd 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -2936,6 +2936,19 @@ asmlinkage void leave_hypervisor_tail(void)
> >          local_irq_disable();
> >          if (!softirq_pending(smp_processor_id())) {
> >              gic_inject();
> > +
> > +            /*
> > +             * If the SErrors handle option is "DIVERSE", we have to
> > prevent
> > +             * slipping the hypervisor SError to guest. In this option,
> > before
> > +             * returning from trap, we have to synchronize SErrors to
> > guarantee
> > +             * that the pending SError would be caught in hypervisor.
> > +             *
> > +             * If option is NOT "DIVERSE", SKIP_CHECK_PENDING_VSERROR will
> > be
> > +             * set to cpu_hwcaps. This means we can use the alternative to
> > skip
> > +             * synchronizing SErrors for other SErrors handle options.
> > +             */
> > +            SYNCHRONIZE_SERROR(SKIP_CHECK_PENDING_VSERROR);
> 
> The feature name SKIP_CHECK_PENDING_VSERROR does not make sense here because
> we are not synchronizing guest SError.
> 
> I am happy to re-use the same feature, but this would need to rename. Maybe
> SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT?

Just for clarity, you are suggesting to use the same cpu_hwcaps feature
here and in patch #8, as Wei is doing, but with a different name. That's
fine by me.


> > +
> >              return;
> >          }
> >          local_irq_enable();
> > 
> 
> Cheers,
> 
> -- 
> Julien Grall
> 


* Re: [PATCH v3 18/19] xen/arm: Prevent slipping hypervisor SError to guest
  2017-03-31 18:42     ` Stefano Stabellini
@ 2017-03-31 18:43       ` Julien Grall
  2017-04-05  7:15         ` Wei Chen
  0 siblings, 1 reply; 46+ messages in thread
From: Julien Grall @ 2017-03-31 18:43 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Kaly.Xin, nd, steve.capper, Wei Chen, xen-devel

Hi Stefano,

On 03/31/2017 07:42 PM, Stefano Stabellini wrote:
> On Fri, 31 Mar 2017, Julien Grall wrote:
>> Hi Wei,
>>
>> On 31/03/17 14:07, Wei Chen wrote:
>>> If there is a pending SError while we're returning from a trap and the
>>> SError handling option is "DIVERSE", we have to prevent slipping this
>>> hypervisor SError to guest. So we have to use the dsb/isb to guarantee
>>> that the pending hypervisor SError would be caught in hypervisor before
>>> return to guest.
>>>
>>> In previous patch, we will set SKIP_CHECK_PENDING_VSERROR to cpu_hwcaps
>>> when option is NOT "DIVERSE". This means we can use the alternative to
>>> skip synchronizing SErrors for other SErrors handle options.
>>>
>>> Because we have unmasked the Abort/SError bit in a previous patch, we have
>>> to disable the Abort/SError before returning to guest as we have done
>>> for IRQ.
>>>
>>> Signed-off-by: Wei Chen <Wei.Chen@arm.com>
>>>
>>> ---
>>> v2->v3:
>>> 1. Use alternative instead of checking serror_op to skip synchronizing SErrors
>>>    while option is NOT "DIVERSE".
>>> 2. Disable Abort/SError before returning to guest.
>>> ---
>>>  xen/arch/arm/traps.c | 13 +++++++++++++
>>>  1 file changed, 13 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>> index 2c610c4..8f1a0cd 100644
>>> --- a/xen/arch/arm/traps.c
>>> +++ b/xen/arch/arm/traps.c
>>> @@ -2936,6 +2936,19 @@ asmlinkage void leave_hypervisor_tail(void)
>>>          local_irq_disable();
>>>          if (!softirq_pending(smp_processor_id())) {
>>>              gic_inject();
>>> +
>>> +            /*
>>> +             * If the SErrors handle option is "DIVERSE", we have to
>>> prevent
>>> +             * slipping the hypervisor SError to guest. In this option,
>>> before
>>> +             * returning from trap, we have to synchronize SErrors to
>>> guarantee
>>> +             * that the pending SError would be caught in hypervisor.
>>> +             *
>>> +             * If option is NOT "DIVERSE", SKIP_CHECK_PENDING_VSERROR will
>>> be
>>> +             * set to cpu_hwcaps. This means we can use the alternative to
>>> skip
>>> +             * synchronizing SErrors for other SErrors handle options.
>>> +             */
>>> +            SYNCHRONIZE_SERROR(SKIP_CHECK_PENDING_VSERROR);
>>
>> The feature name SKIP_CHECK_PENDING_VSERROR does not make sense here because
>> we are not synchronizing guest SError.
>>
>> I am happy to re-use the same feature, but this would need to rename. Maybe
>> SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT?
>
> Just for clarity, you are suggesting to use the same cpu_hwcaps feature
> here and in patch #8, as Wei is doing, but with a different name. That's
> fine by me.

That's correct. I thought the name "SKIP_CHECK_PENDING_VSERROR" does not 
match the behavior here.

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 15/19] xen/arm: Introduce a helper to check local abort is enabled
  2017-03-31 13:07 ` [PATCH v3 15/19] xen/arm: Introduce a helper to check local abort is enabled Wei Chen
  2017-03-31 14:25   ` Julien Grall
@ 2017-03-31 18:43   ` Stefano Stabellini
  1 sibling, 0 replies; 46+ messages in thread
From: Stefano Stabellini @ 2017-03-31 18:43 UTC (permalink / raw)
  To: Wei Chen; +Cc: sstabellini, steve.capper, xen-devel, Kaly.Xin, julien.grall, nd

On Fri, 31 Mar 2017, Wei Chen wrote:
> In a previous patch, we have unmasked the Abort/SError bit for Xen
> in most of its running time. So in some use cases, we have to
> check whether the abort is enabled in the current context. For example,
> when we want to synchronize SErrors, we have to confirm that the abort
> is enabled; otherwise synchronizing SErrors is pointless.
> 
> Signed-off-by: Wei Chen <Wei.Chen@arm.com>
> ---
>  xen/include/asm-arm/arm32/system.h | 7 +++++++
>  xen/include/asm-arm/arm64/system.h | 7 +++++++
>  2 files changed, 14 insertions(+)
> 
> diff --git a/xen/include/asm-arm/arm32/system.h b/xen/include/asm-arm/arm32/system.h
> index c617b40..6c5b9f5 100644
> --- a/xen/include/asm-arm/arm32/system.h
> +++ b/xen/include/asm-arm/arm32/system.h
> @@ -35,6 +35,13 @@ static inline int local_irq_is_enabled(void)
>      return !(flags & PSR_IRQ_MASK);
>  }
>  
> +static inline int local_abort_is_enabled(void)
> +{
> +    unsigned long flags;
> +    local_save_flags(flags);
> +    return !(flags & PSR_ABT_MASK);
> +}
> +
>  #define local_fiq_enable()  __asm__("cpsie f   @ __stf\n" : : : "memory", "cc")
>  #define local_fiq_disable() __asm__("cpsid f   @ __clf\n" : : : "memory", "cc")
>  
> diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
> index 2e2ee21..309485f 100644
> --- a/xen/include/asm-arm/arm64/system.h
> +++ b/xen/include/asm-arm/arm64/system.h
> @@ -58,6 +58,13 @@ static inline int local_fiq_is_enabled(void)
>      return !(flags & PSR_FIQ_MASK);
>  }
>  
> +static inline int local_abort_is_enabled(void)
> +{
> +    unsigned long flags;
> +    local_save_flags(flags);
> +    return !(flags & PSR_ABT_MASK);
> +}

Given that the two functions are identical, please add just one to
xen/include/asm-arm/system.h, if possible.
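(Aside: the consolidation Stefano asks for is mechanical, since both per-arch helpers read the same ABT bit out of the saved flags. A standalone sketch of the common version follows; `current_cpsr` and the `local_save_flags`/`PSR_ABT_MASK` definitions here are illustrative stand-ins for the real per-architecture ones, which would continue to live in their own headers.)

```c
/* Illustrative stand-ins for the per-architecture definitions. */
static unsigned long current_cpsr;
#define PSR_ABT_MASK (1UL << 8)            /* A (asynchronous abort mask) bit */
#define local_save_flags(x) ((x) = current_cpsr)

/*
 * Common helper: aborts are enabled when the A bit in the saved
 * status register is clear. One copy of this could be shared by
 * arm32 and arm64 via xen/include/asm-arm/system.h.
 */
static inline int local_abort_is_enabled(void)
{
    unsigned long flags;
    local_save_flags(flags);
    return !(flags & PSR_ABT_MASK);
}
```

The body is identical to both versions in the patch; only the location changes.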


* Re: [PATCH v3 08/19] xen/arm: Introduce a initcall to update cpu_hwcaps by serror_op
  2017-03-31 14:48   ` Julien Grall
@ 2017-04-05  6:36     ` Wei Chen
  0 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-04-05  6:36 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Julien,

On 2017/3/31 22:48, Julien Grall wrote:
> Hi Wei,
> 
> On 31/03/17 14:07, Wei Chen wrote:
>> In the later patches of this series, we want to use the alternative
>> patching framework to avoid checking serror_op in every entries.
>> So we define a new cpu feature "SKIP_CHECK_PENDING_VSERROR" for
>> serror_op. When serror_op is not equal to SERROR_DIVERSE, this
>> feature will be set to cpu_hwcaps.
>>
>> Currently, the default serror_op is SERROR_DIVERSE; if we want to
>> change the serror_op value we have to place the serror parameter
>> on the command line. It seems fine to update cpu_hwcaps directly
>> in the serror parameter parsing function.
>>
>> While the default option is diverse today, this may change in the
>> future. So we introduce this initcall to guarantee that cpu_hwcaps is
>> updated whether or not the serror parameter is present on the command
>> line.
>>
>> Signed-off-by: Wei Chen <Wei.Chen@arm.com>
>> Acked-by: Julien Grall <julien.grall@arm.com>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>> ---
>> v2->v3
>> 1. Rewrite the commit message to make it easier to understand.
>> 2. Add Julien's Acked-by tag and Stefano's Reviewed-by tag.
>> ---
>>  xen/arch/arm/traps.c             | 9 +++++++++
>>  xen/include/asm-arm/cpufeature.h | 3 ++-
>>  2 files changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 76cda59..9d4ee39 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -141,6 +141,15 @@ static void __init parse_serrors_behavior(const 
>> char *str)
>>  }
>>  custom_param("serrors", parse_serrors_behavior);
>>
>> +static int __init update_serrors_cpu_caps(void)
>> +{
>> +    if ( serrors_op != SERRORS_DIVERSE )
>> +        cpus_set_cap(SKIP_CHECK_PENDING_VSERROR);
> 
> I have some comment regarding the name of the feature (see patch #18).
> 

I will update it and send a new version.

> Cheers,
> 
> 


* Re: [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError
  2017-03-31 14:33   ` Julien Grall
@ 2017-04-05  7:14     ` Wei Chen
  2017-04-05  7:29       ` Julien Grall
  0 siblings, 1 reply; 46+ messages in thread
From: Wei Chen @ 2017-04-05  7:14 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Julien,

On 2017/3/31 22:33, Julien Grall wrote:
> Hi Wei,
>
> On 31/03/17 14:07, Wei Chen wrote:
>> In previous patches, we have provided the ability to synchronize
>> SErrors in exception entries. But we haven't synchronized SErrors
>> while returning to guest and doing context switch.
>>
>> So we still have two risks:
>> 1. Slipping hypervisor SErrors to guest. For example, hypervisor
>>    triggers a SError while returning to guest, but this SError may be
>>    delivered after entering guest. In "DIVERSE" option, this SError
>>    would be routed back to guest and panic the guest. But actually,
>>    we should crash the whole system due to this hypervisor SError.
>> 2. Slipping previous guest SErrors to the next guest. In "FORWARD"
>>    option, if hypervisor triggers a SError while context switching.
>>    This SError may be delivered after switching to next vCPU. In this
>>    case, this SError will be forwarded to next vCPU and may panic
>>    an incorrect guest.
>>
>> So we have to introduce this macro to synchronize SErrors while
>> returning to guest and doing context switch.
>>
>> We also added a barrier to this macro to prevent the compiler from
>> reordering our asm volatile code.
>>
>> Signed-off-by: Wei Chen <Wei.Chen@arm.com>
>> ---
>> v2->v3:
>> 1. Use a macro to replace function to synchronize SErrors.
>> 2. Add barrier to avoid compiler reorder our code.
>>
>> Note:
>> I had followed Julien's suggestion to add the ASSERT into the macro,
>> but I found I always hit the ASSERT. Because when option == DIVERSE,
>> the SKIP_CHECK_PENDING_VSERROR feature would not be set into cpu_hwcaps.
>> And in a later patch, I will use this feature to skip synchronizing
>> SErrors before returning to guest.
>> So cpus_has_cap(SKIP_CHECK_PENDING_VSERROR) will always be false,
>> and we hit the ASSERT.
>>
>> And about the local_abort enable check: should we disable the abort
>> before synchronizing SErrors while returning to guest or doing context
>> switch, just as we have disabled the IRQ in these two places?
>>
>> For this testing, I have applied this series to the latest staging tree.
>
> Because the ASSERT I suggested was wrong, sorry for that. It should have
> been:
>
> ASSERT(!cpus_have_cap(feat) && local_abort_is_enabled());
>
> This is because we want abort enabled when the "feature" is not present.
>
> This series looks good so far, so I would be happy if you send a
> follow-up patch to add the ASSERT rather than modifying this patch.
>

I will send a follow-up patch after this series has been done.

> For this patch:
>
> Reviewed-by: Julien Grall <julien.grall@arm.com>
>
> Cheers,
>



* Re: [PATCH v3 18/19] xen/arm: Prevent slipping hypervisor SError to guest
  2017-03-31 18:43       ` Julien Grall
@ 2017-04-05  7:15         ` Wei Chen
  0 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-04-05  7:15 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: Kaly.Xin, nd, steve.capper, xen-devel

On 2017/4/1 2:43, Julien Grall wrote:
> Hi Stefano,
>
> On 03/31/2017 07:42 PM, Stefano Stabellini wrote:
>> On Fri, 31 Mar 2017, Julien Grall wrote:
>>> Hi Wei,
>>>
>>> On 31/03/17 14:07, Wei Chen wrote:
>>>> If there is a pending SError while we're returning from a trap and the
>>>> SError handling option is "DIVERSE", we have to prevent slipping this
>>>> hypervisor SError to guest. So we have to use the dsb/isb to guarantee
>>>> that the pending hypervisor SError would be caught in hypervisor before
>>>> return to guest.
>>>>
>>>> In previous patch, we will set SKIP_CHECK_PENDING_VSERROR to cpu_hwcaps
>>>> when option is NOT "DIVERSE". This means we can use the alternative to
>>>> skip synchronizing SErrors for other SErrors handle options.
>>>>
>>>> Because we have unmasked the Abort/SError bit in a previous patch, we have
>>>> to disable the Abort/SError before returning to guest as we have done
>>>> for IRQ.
>>>>
>>>> Signed-off-by: Wei Chen <Wei.Chen@arm.com>
>>>>
>>>> ---
>>>> v2->v3:
>>>> 1. Use alternative instead of checking serror_op to skip synchronizing
>>>> SErrors
>>>>    while option is NOT "DIVERSE".
>>>> 2. Disable Abort/SError before returning to guest.
>>>> ---
>>>>  xen/arch/arm/traps.c | 13 +++++++++++++
>>>>  1 file changed, 13 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>>> index 2c610c4..8f1a0cd 100644
>>>> --- a/xen/arch/arm/traps.c
>>>> +++ b/xen/arch/arm/traps.c
>>>> @@ -2936,6 +2936,19 @@ asmlinkage void leave_hypervisor_tail(void)
>>>>          local_irq_disable();
>>>>          if (!softirq_pending(smp_processor_id())) {
>>>>              gic_inject();
>>>> +
>>>> +            /*
>>>> +             * If the SErrors handle option is "DIVERSE", we have to
>>>> prevent
>>>> +             * slipping the hypervisor SError to guest. In this
>>>> option,
>>>> before
>>>> +             * returning from trap, we have to synchronize SErrors to
>>>> guarantee
>>>> +             * that the pending SError would be caught in hypervisor.
>>>> +             *
>>>> +             * If option is NOT "DIVERSE",
>>>> SKIP_CHECK_PENDING_VSERROR will
>>>> be
>>>> +             * set to cpu_hwcaps. This means we can use the
>>>> alternative to
>>>> skip
>>>> +             * synchronizing SErrors for other SErrors handle options.
>>>> +             */
>>>> +            SYNCHRONIZE_SERROR(SKIP_CHECK_PENDING_VSERROR);
>>>
>>> The feature name SKIP_CHECK_PENDING_VSERROR does not make sense here
>>> because
>>> we are not synchronizing guest SError.
>>>
>>> I am happy to re-use the same feature, but this would need to rename.
>>> Maybe
>>> SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT?
>>
>> Just for clarity, you are suggesting to use the same cpu_hwcaps feature
>> here and in patch #8, as Wei is doing, but with a different name. That's
>> fine by me.
>
> That's correct. I thought the name "SKIP_CHECK_PENDING_VSERROR" does not
> match the behavior here.
>

Yes, I will rename it.

> Cheers,
>



* Re: [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError
  2017-04-05  7:14     ` Wei Chen
@ 2017-04-05  7:29       ` Julien Grall
  2017-04-05  7:35         ` Wei Chen
  2017-04-05  8:08         ` Wei Chen
  0 siblings, 2 replies; 46+ messages in thread
From: Julien Grall @ 2017-04-05  7:29 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper



On 05/04/2017 08:14, Wei Chen wrote:
>> Because the ASSERT I suggested was wrong, sorry for that. It should have
>> been:
>>
>> ASSERT(!cpus_have_cap(feat) && local_abort_is_enabled());
>>
>> This is because we want abort enabled when the "feature" is not present.
>>
>> This series looks good so far, so I would be happy if you send a
>> follow-up patch to add the ASSERT rather than modifying this patch.
>>
>
> I will send a follow-up patch after this series has been done.

Well, you have to resend this series. So why don't you add the ASSERT in 
the new version?

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError
  2017-04-05  7:29       ` Julien Grall
@ 2017-04-05  7:35         ` Wei Chen
  2017-04-05  8:02           ` Julien Grall
  2017-04-05  8:08         ` Wei Chen
  1 sibling, 1 reply; 46+ messages in thread
From: Wei Chen @ 2017-04-05  7:35 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

On 2017/4/5 15:29, Julien Grall wrote:
>
>
> On 05/04/2017 08:14, Wei Chen wrote:
>>> Because the ASSERT I suggested was wrong, sorry for that. It should have
>>> been:
>>>
>>> ASSERT(!cpus_have_cap(feat) && local_abort_is_enabled());
>>>
>>> This is because we want abort enabled when the "feature" is not present.
>>>
>>> This series looks good so far, so I would be happy if you send a
>>> follow-up patch to add the ASSERT rather than modifying this patch.
>>>
>>
>> I will send a follow-up patch after this series has been done.
>
> Well, you have to resend this series. So why don't you add the ASSERT in
> the new version?

I am happy to do that. I misunderstood this comment "rather than 
modifying this patch".

I will add a patch in this series for ASSERT.

>
> Cheers,
>



* Re: [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError
  2017-04-05  7:35         ` Wei Chen
@ 2017-04-05  8:02           ` Julien Grall
  0 siblings, 0 replies; 46+ messages in thread
From: Julien Grall @ 2017-04-05  8:02 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Wei,

On 05/04/2017 08:35, Wei Chen wrote:
> On 2017/4/5 15:29, Julien Grall wrote:
>>
>>
>> On 05/04/2017 08:14, Wei Chen wrote:
>>>> Because the ASSERT I suggested was wrong, sorry for that. It should
>>>> have
>>>> been:
>>>>
>>>> ASSERT(!cpus_have_cap(feat) && local_abort_is_enabled());
>>>>
>>>> This is because we want abort enabled when the "feature" is not
>>>> present.
>>>>
>>>> This series looks good so far, so I would be happy if you send a
>>>> follow-up patch to add the ASSERT rather than modifying this patch.
>>>>
>>>
>>> I will send a follow-up patch after this series has been done.
>>
>> Well, you have to resend this series. So why don't you add the ASSERT in
>> the new version?
>
> I am happy to do that. I misunderstood this comment "rather than
> modifying this patch".

My point was if you didn't need to resend the series, I would have been 
happy to see a follow-up avoiding sending 19 patches again for a small fix.

>
> I will add a patch in this series for ASSERT.

Again, as you resend the series. Why don't you update this patch to add 
the ASSERT?

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError
  2017-04-05  7:29       ` Julien Grall
  2017-04-05  7:35         ` Wei Chen
@ 2017-04-05  8:08         ` Wei Chen
  2017-04-05  8:20           ` Julien Grall
  1 sibling, 1 reply; 46+ messages in thread
From: Wei Chen @ 2017-04-05  8:08 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Julien,

On 2017/4/5 15:29, Julien Grall wrote:
>
>
> On 05/04/2017 08:14, Wei Chen wrote:
>>> Because the ASSERT I suggested was wrong, sorry for that. It should have
>>> been:
>>>
>>> ASSERT(!cpus_have_cap(feat) && local_abort_is_enabled());
>>>
>>> This is because we want abort enabled when the "feature" is not present.
>>>

Thinking more about this assert, I feel confused.

Currently, whether the abort is enabled is not tied to the "feature".
In my testing, the abort is always enabled in the places where we want
to use this macro.

Using "SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT" as an example,
	ASSERT(cpus_have_cap(feat) && local_abort_is_enabled());
will panic when option == DIVERSE, and
	ASSERT(!cpus_have_cap(feat) && local_abort_is_enabled());
will panic when option != DIVERSE.

I can't see the significance of this change.

>>> This series looks good so far, so I would be happy if you send a
>>> follow-up patch to add the ASSERT rather than modifying this patch.
>>>
>>
>> I will send a follow-up patch after this series has been done.
>
> Well, you have to resend this series. So why don't you add the ASSERT in
> the new version?
>
> Cheers,
>



* Re: [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError
  2017-04-05  8:08         ` Wei Chen
@ 2017-04-05  8:20           ` Julien Grall
  2017-04-05  8:32             ` Wei Chen
  0 siblings, 1 reply; 46+ messages in thread
From: Julien Grall @ 2017-04-05  8:20 UTC (permalink / raw)
  To: Wei Chen, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper



On 05/04/2017 09:08, Wei Chen wrote:
> Hi Julien,
>
> On 2017/4/5 15:29, Julien Grall wrote:
>>
>>
>> On 05/04/2017 08:14, Wei Chen wrote:
>>>> Because the ASSERT I suggested was wrong, sorry for that. It should
>>>> have
>>>> been:
>>>>
>>>> ASSERT(!cpus_have_cap(feat) && local_abort_is_enabled());
>>>>
>>>> This is because we want abort enabled when the "feature" is not
>>>> present.
>>>>
>
> Thinking more about this assert, I feel confused.
>
> Currently, whether the abort is enabled is not tied to the "feature".
> In my testing, the abort is always enabled in the places where we want
> to use this macro.
>
> Take "SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT" as an example:
>     ASSERT(cpus_have_cap(feat) && local_abort_is_enabled());
> will panic when option == DIVERSE, and
>     ASSERT(!cpus_have_cap(feat) && local_abort_is_enabled());
> will panic when option != DIVERSE.

Because I keep making mistakes in the ASSERT, and I am surprised that 
nobody spotted it and was able to fix it...

This should be ASSERT(!cpus_have_cap(feat) || local_abort_is_enabled());

Thinking a bit more, we can also do an ASSERT(local_abort_is_enabled()) 
unconditionally.

> I can't see the significance of this change.

As I said earlier, the main purpose of the ASSERT is to ensure your 
assumption is correct. The abort has been unmasked in the entry but you 
don't know if someone will mask the abort later.

And yes, nobody is playing with the abort mask so far. But are you able 
to predict what will be done in the future? I am not, so the ASSERT is 
here to be sure the abort is unmasked.

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError
  2017-04-05  8:20           ` Julien Grall
@ 2017-04-05  8:32             ` Wei Chen
  0 siblings, 0 replies; 46+ messages in thread
From: Wei Chen @ 2017-04-05  8:32 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Kaly.Xin, nd, sstabellini, steve.capper

Hi Julien,

On 2017/4/5 16:20, Julien Grall wrote:
>
>
> On 05/04/2017 09:08, Wei Chen wrote:
>> Hi Julien,
>>
>> On 2017/4/5 15:29, Julien Grall wrote:
>>>
>>>
>>> On 05/04/2017 08:14, Wei Chen wrote:
>>>>> Because the ASSERT I suggested was wrong, sorry for that. It should
>>>>> have
>>>>> been:
>>>>>
>>>>> ASSERT(!cpus_have_cap(feat) && local_abort_is_enabled());
>>>>>
>>>>> This is because we want abort enabled when the "feature" is not
>>>>> present.
>>>>>
>>
>> Thinking more about this assert, I feel confused.
>>
>> Currently, whether the abort is enabled is not tied to the "feature".
>> In my testing, the abort is always enabled in the places where we want
>> to use this macro.
>>
>> Take "SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT" as an example:
>>     ASSERT(cpus_have_cap(feat) && local_abort_is_enabled());
>> will panic when option == DIVERSE, and
>>     ASSERT(!cpus_have_cap(feat) && local_abort_is_enabled());
>> will panic when option != DIVERSE.
>
> Because I keep making mistakes in the ASSERT, and I am surprised that
> nobody spotted it and was able to fix it...
>

I think maybe I am still in holiday mode and my head is not clear
:)

> This should be ASSERT(!cpus_have_cap(feat) || local_abort_is_enabled());
>
> Thinking a bit more, we can also do an ASSERT(local_abort_is_enabled())
> unconditionally.
>
>> I can't see the significance of this change.
>
> As I said earlier, the main purpose of the ASSERT is to ensure your
> assumption is correct. The abort has been unmasked in the entry but you
> don't know if someone will mask the abort later.
>
> And yes, nobody is playing with the abort mask so far. But are you able
> to predict what will be done in the future? I am not, so the ASSERT is
> here to be sure the abort is unmasked.

I meant the change from cpus_have_cap(feat) to !cpus_have_cap(feat) :)

>
> Cheers,
>



end of thread, other threads:[~2017-04-05  8:32 UTC | newest]

Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-03-31 13:07 [PATCH v3 00/19] Provide a command line option to choose how to handle SErrors Wei Chen
2017-03-31 13:07 ` [PATCH v3 01/19] xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome check Wei Chen
2017-03-31 14:08   ` Julien Grall
2017-03-31 18:26   ` Stefano Stabellini
2017-03-31 13:07 ` [PATCH v3 02/19] xen/arm: Introduce a helper to get default HCR_EL2 flags Wei Chen
2017-03-31 14:10   ` Julien Grall
2017-03-31 18:29   ` Stefano Stabellini
2017-03-31 13:07 ` [PATCH v3 03/19] xen/arm: Set and restore HCR_EL2 register for each vCPU separately Wei Chen
2017-03-31 14:11   ` Julien Grall
2017-03-31 18:28   ` Stefano Stabellini
2017-03-31 13:07 ` [PATCH v3 04/19] xen/arm: Avoid setting/clearing HCR_RW at every context switch Wei Chen
2017-03-31 13:07 ` [PATCH v3 05/19] xen/arm: Save HCR_EL2 when a guest took the SError Wei Chen
2017-03-31 13:07 ` [PATCH v3 06/19] xen/arm: Introduce a virtual abort injection helper Wei Chen
2017-03-31 14:13   ` Julien Grall
2017-03-31 13:07 ` [PATCH v3 07/19] xen/arm: Introduce a command line parameter for SErrors/Aborts Wei Chen
2017-03-31 13:07 ` [PATCH v3 08/19] xen/arm: Introduce a initcall to update cpu_hwcaps by serror_op Wei Chen
2017-03-31 14:48   ` Julien Grall
2017-04-05  6:36     ` Wei Chen
2017-03-31 13:07 ` [PATCH v3 09/19] xen/arm64: Use alternative to skip the check of pending serrors Wei Chen
2017-03-31 13:07 ` [PATCH v3 10/19] xen/arm32: " Wei Chen
2017-03-31 13:07 ` [PATCH v3 11/19] xen/arm: Move macro VABORT_GEN_BY_GUEST to common header Wei Chen
2017-03-31 13:07 ` [PATCH v3 12/19] xen/arm: Introduce new helpers to handle guest/hyp SErrors Wei Chen
2017-03-31 13:07 ` [PATCH v3 13/19] xen/arm: Replace do_trap_guest_serror with new helpers Wei Chen
2017-03-31 13:07 ` [PATCH v3 14/19] xen/arm: Unmask the Abort/SError bit in the exception entries Wei Chen
2017-03-31 13:07 ` [PATCH v3 15/19] xen/arm: Introduce a helper to check local abort is enabled Wei Chen
2017-03-31 14:25   ` Julien Grall
2017-03-31 18:43   ` Stefano Stabellini
2017-03-31 13:07 ` [PATCH v3 16/19] xen/arm: Introduce a macro to synchronize SError Wei Chen
2017-03-31 14:33   ` Julien Grall
2017-04-05  7:14     ` Wei Chen
2017-04-05  7:29       ` Julien Grall
2017-04-05  7:35         ` Wei Chen
2017-04-05  8:02           ` Julien Grall
2017-04-05  8:08         ` Wei Chen
2017-04-05  8:20           ` Julien Grall
2017-04-05  8:32             ` Wei Chen
2017-03-31 18:36   ` Stefano Stabellini
2017-03-31 13:07 ` [PATCH v3 17/19] xen/arm: Isolate the SError between the context switch of 2 vCPUs Wei Chen
2017-03-31 14:38   ` Julien Grall
2017-03-31 18:37     ` Stefano Stabellini
2017-03-31 13:07 ` [PATCH v3 18/19] xen/arm: Prevent slipping hypervisor SError to guest Wei Chen
2017-03-31 14:46   ` Julien Grall
2017-03-31 18:42     ` Stefano Stabellini
2017-03-31 18:43       ` Julien Grall
2017-04-05  7:15         ` Wei Chen
2017-03-31 13:07 ` [PATCH v3 19/19] xen/arm: Handle guest external abort as guest SError Wei Chen
