qemu-devel.nongnu.org archive mirror
* [PATCH v4 0/3] hw/arm/virt: Simulate NMI Injection
@ 2020-02-18  2:04 Gavin Shan
  2020-02-18  2:04 ` [PATCH v4 1/3] target/arm: Support SError injection Gavin Shan
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Gavin Shan @ 2020-02-18  2:04 UTC (permalink / raw)
  To: qemu-devel, qemu-arm
  Cc: peter.maydell, drjones, jthierry, aik, maz, richard.henderson,
	eric.auger, shan.gavin, pbonzini

This series simulates the behavior of receiving an NMI for the "virt"
board. First of all, a new interrupt (SError) is supported for each CPU.
The backend either sends error events through the KVM module or emulates
the behavior when TCG is enabled. The outcome is that an SError or data
abort is raised to crash the guest. Meanwhile, the virtual SError
interrupt is also supported, although it has no users yet.

For GICv2 or GICv3, a new IRQ line is added for each CPU and connected to
the SError interrupt introduced above. The IRQ line of CPU#0 is raised when
the HMP/QMP "nmi" command is issued, to crash the guest.

Testing
=======

After the HMP/QMP "nmi" command is issued in the following 4 environments, the
guest crashes as expected (an example invocation is shown below the table).

   Accel     Mode                  Crashed    Parameter
   ------------------------------------------------------------------------
   kvm       aarch64               yes        -machine virt -cpu host
   kvm       aarch32(cortex-a15)   yes        -machine virt -cpu host,aarch64=off
   tcg       aarch64               yes        -machine virt -cpu max
   tcg       aarch32(cortex-a15)   yes        -machine virt -cpu cortex-a15
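
For reference, a minimal way to exercise this from the HMP monitor is shown
below. The kernel image path and memory size are illustrative only and not
part of this series; any bootable guest works:

   $ qemu-system-aarch64 -machine virt -cpu max -m 1024 \
         -kernel /path/to/Image -display none -monitor stdio
   (qemu) nmi

The equivalent QMP command is "inject-nmi":

   { "execute": "inject-nmi" }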

Changelog
=========
v4:
   * Correct the flag in arm_cpu_has_work()               (Richard Henderson)
   * Check CPU_INTERRUPT_SERROR in arm_cpu_exec_interrupt()
     and arm_v7m_cpu_exec_interrupt()                     (Richard Henderson)
   * Introduce ARM_CPU_NUM_IRQ to make the code of initializing
     the CPU's inbound IRQ lines atomic                   (Richard Henderson)
   * Correct comments about ARM_CPU_IRQ                   (Richard Henderson)
   * Update ISR_EL1 with SError state                     (Gavin Shan)
   * Include SError state during migration                (Gavin Shan)
   * Added PATCH[2/3] to support VSError injection        (Marc Zyngier)
v3:
   * Support SError injection for aarch32                 (Richard Henderson)
   * Export the SError injection through IRQ line         (Peter Maydell)
   * Removed RFC tag as it seems on the right track       (Gavin Shan)
v2:
   * Redesigned to fully exploit SError interrupt

Gavin Shan (3):
  target/arm: Support SError injection
  target/arm: Support VSError injection
  hw/arm/virt: Simulate NMI injection

 hw/arm/virt.c                      |  34 ++++++++-
 hw/intc/arm_gic_common.c           |   3 +
 hw/intc/arm_gicv3_common.c         |   3 +
 include/hw/intc/arm_gic_common.h   |   1 +
 include/hw/intc/arm_gicv3_common.h |   1 +
 target/arm/cpu.c                   | 113 ++++++++++++++++++++++++++---
 target/arm/cpu.h                   |  23 ++++--
 target/arm/helper.c                |  30 ++++++++
 target/arm/internals.h             |  10 +++
 target/arm/m_helper.c              |   8 ++
 target/arm/machine.c               |   3 +-
 11 files changed, 208 insertions(+), 21 deletions(-)

-- 
2.23.0



^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v4 1/3] target/arm: Support SError injection
  2020-02-18  2:04 [PATCH v4 0/3] hw/arm/virt: Simulate NMI Injection Gavin Shan
@ 2020-02-18  2:04 ` Gavin Shan
  2020-02-18 16:28   ` Marc Zyngier
  2020-02-18  2:04 ` [PATCH v4 2/3] target/arm: Support VSError injection Gavin Shan
  2020-02-18  2:04 ` [PATCH v4 3/3] hw/arm/virt: Simulate NMI injection Gavin Shan
  2 siblings, 1 reply; 7+ messages in thread
From: Gavin Shan @ 2020-02-18  2:04 UTC (permalink / raw)
  To: qemu-devel, qemu-arm
  Cc: peter.maydell, drjones, jthierry, aik, maz, richard.henderson,
	eric.auger, shan.gavin, pbonzini

This supports SError injection, which will be used by "virt" board to
simulating the behavior of NMI injection in next patch. As Peter Maydell
suggested, this adds a new interrupt (ARM_CPU_SERROR), which is parallel
to CPU_INTERRUPT_HARD. The backend depends on if kvm is enabled or not.
kvm_vcpu_ioctl(cpu, KVM_SET_VCPU_EVENTS) is leveraged to inject SError
or data abort to guest. When TCG is enabled, the behavior is simulated
by injecting SError and data abort to guest.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 target/arm/cpu.c      | 69 +++++++++++++++++++++++++++++++++++--------
 target/arm/cpu.h      | 20 ++++++++-----
 target/arm/helper.c   | 12 ++++++++
 target/arm/m_helper.c |  8 +++++
 target/arm/machine.c  |  3 +-
 5 files changed, 91 insertions(+), 21 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index de733aceeb..e5750080bc 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -78,7 +78,7 @@ static bool arm_cpu_has_work(CPUState *cs)
         && cs->interrupt_request &
         (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
          | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
-         | CPU_INTERRUPT_EXITTB);
+         | CPU_INTERRUPT_SERROR | CPU_INTERRUPT_EXITTB);
 }
 
 void arm_register_pre_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
@@ -449,6 +449,9 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
             return false;
         }
         return !(env->daif & PSTATE_I);
+    case EXCP_SERROR:
+       pstate_unmasked = !(env->daif & PSTATE_A);
+       break;
     default:
         g_assert_not_reached();
     }
@@ -538,6 +541,15 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
 
     /* The prioritization of interrupts is IMPLEMENTATION DEFINED. */
 
+    if (interrupt_request & CPU_INTERRUPT_SERROR) {
+        excp_idx = EXCP_SERROR;
+        target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
+        if (arm_excp_unmasked(cs, excp_idx, target_el,
+                              cur_el, secure, hcr_el2)) {
+            goto found;
+        }
+    }
+
     if (interrupt_request & CPU_INTERRUPT_FIQ) {
         excp_idx = EXCP_FIQ;
         target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
@@ -570,6 +582,7 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
             goto found;
         }
     }
+
     return false;
 
  found:
@@ -585,7 +598,7 @@ static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
     CPUClass *cc = CPU_GET_CLASS(cs);
     ARMCPU *cpu = ARM_CPU(cs);
     CPUARMState *env = &cpu->env;
-    bool ret = false;
+    uint32_t excp_idx;
 
     /* ARMv7-M interrupt masking works differently than -A or -R.
      * There is no FIQ/IRQ distinction. Instead of I and F bits
@@ -594,13 +607,26 @@ static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
      * (which depends on state like BASEPRI, FAULTMASK and the
      * currently active exception).
      */
-    if (interrupt_request & CPU_INTERRUPT_HARD
-        && (armv7m_nvic_can_take_pending_exception(env->nvic))) {
-        cs->exception_index = EXCP_IRQ;
-        cc->do_interrupt(cs);
-        ret = true;
+    if (!armv7m_nvic_can_take_pending_exception(env->nvic)) {
+        return false;
+    }
+
+    if (interrupt_request & CPU_INTERRUPT_SERROR) {
+        excp_idx = EXCP_SERROR;
+        goto found;
+    }
+
+    if (interrupt_request & CPU_INTERRUPT_HARD) {
+        excp_idx = EXCP_IRQ;
+        goto found;
     }
-    return ret;
+
+    return false;
+
+found:
+    cs->exception_index = excp_idx;
+    cc->do_interrupt(cs);
+    return true;
 }
 #endif
 
@@ -656,7 +682,8 @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
         [ARM_CPU_IRQ] = CPU_INTERRUPT_HARD,
         [ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ,
         [ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ,
-        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
+        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ,
+        [ARM_CPU_SERROR] = CPU_INTERRUPT_SERROR,
     };
 
     if (level) {
@@ -676,6 +703,7 @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
         break;
     case ARM_CPU_IRQ:
     case ARM_CPU_FIQ:
+    case ARM_CPU_SERROR:
         if (level) {
             cpu_interrupt(cs, mask[irq]);
         } else {
@@ -693,8 +721,10 @@ static void arm_cpu_kvm_set_irq(void *opaque, int irq, int level)
     ARMCPU *cpu = opaque;
     CPUARMState *env = &cpu->env;
     CPUState *cs = CPU(cpu);
+    struct kvm_vcpu_events events;
     uint32_t linestate_bit;
     int irq_id;
+    bool inject_irq = true;
 
     switch (irq) {
     case ARM_CPU_IRQ:
@@ -705,6 +735,14 @@ static void arm_cpu_kvm_set_irq(void *opaque, int irq, int level)
         irq_id = KVM_ARM_IRQ_CPU_FIQ;
         linestate_bit = CPU_INTERRUPT_FIQ;
         break;
+    case ARM_CPU_SERROR:
+        if (!kvm_has_vcpu_events()) {
+            return;
+        }
+
+        inject_irq = false;
+        linestate_bit = CPU_INTERRUPT_SERROR;
+        break;
     default:
         g_assert_not_reached();
     }
@@ -714,7 +752,14 @@ static void arm_cpu_kvm_set_irq(void *opaque, int irq, int level)
     } else {
         env->irq_line_state &= ~linestate_bit;
     }
-    kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level);
+
+    if (inject_irq) {
+        kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level);
+    } else if (level) {
+        memset(&events, 0, sizeof(events));
+        events.exception.serror_pending = 1;
+        kvm_vcpu_ioctl(cs, KVM_SET_VCPU_EVENTS, &events);
+    }
 #endif
 }
 
@@ -1064,9 +1109,9 @@ static void arm_cpu_initfn(Object *obj)
         /* VIRQ and VFIQ are unused with KVM but we add them to maintain
          * the same interface as non-KVM CPUs.
          */
-        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 4);
+        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, ARM_CPU_NUM_IRQ);
     } else {
-        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 4);
+        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, ARM_CPU_NUM_IRQ);
     }
 
     qdev_init_gpio_out(DEVICE(cpu), cpu->gt_timer_outputs,
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index e943ffe8a9..23e9f7ee2d 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -49,6 +49,7 @@
 #define EXCP_LAZYFP         20   /* v7M fault during lazy FP stacking */
 #define EXCP_LSERR          21   /* v8M LSERR SecureFault */
 #define EXCP_UNALIGNED      22   /* v7M UNALIGNED UsageFault */
+#define EXCP_SERROR         23   /* SError Interrupt */
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */
 
 #define ARMV7M_EXCP_RESET   1
@@ -79,9 +80,10 @@ enum {
 };
 
 /* ARM-specific interrupt pending bits.  */
-#define CPU_INTERRUPT_FIQ   CPU_INTERRUPT_TGT_EXT_1
-#define CPU_INTERRUPT_VIRQ  CPU_INTERRUPT_TGT_EXT_2
-#define CPU_INTERRUPT_VFIQ  CPU_INTERRUPT_TGT_EXT_3
+#define CPU_INTERRUPT_FIQ     CPU_INTERRUPT_TGT_EXT_1
+#define CPU_INTERRUPT_VIRQ    CPU_INTERRUPT_TGT_EXT_2
+#define CPU_INTERRUPT_VFIQ    CPU_INTERRUPT_TGT_EXT_3
+#define CPU_INTERRUPT_SERROR  CPU_INTERRUPT_TGT_EXT_4
 
 /* The usual mapping for an AArch64 system register to its AArch32
  * counterpart is for the 32 bit world to have access to the lower
@@ -97,11 +99,13 @@ enum {
 #define offsetofhigh32(S, M) (offsetof(S, M) + sizeof(uint32_t))
 #endif
 
-/* Meanings of the ARMCPU object's four inbound GPIO lines */
-#define ARM_CPU_IRQ 0
-#define ARM_CPU_FIQ 1
-#define ARM_CPU_VIRQ 2
-#define ARM_CPU_VFIQ 3
+/* ARMCPU object's inbound GPIO lines */
+#define ARM_CPU_IRQ     0
+#define ARM_CPU_FIQ     1
+#define ARM_CPU_VIRQ    2
+#define ARM_CPU_VFIQ    3
+#define ARM_CPU_SERROR  4
+#define ARM_CPU_NUM_IRQ 5
 
 /* ARM-specific extra insn start words:
  * 1: Conditional execution bits
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 366dbcf460..3f00af4c41 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -1969,6 +1969,12 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
         }
     }
 
+    if (!allow_virt || !(hcr_el2 & HCR_AMO)) {
+        if (cs->interrupt_request & CPU_INTERRUPT_SERROR) {
+            ret |= CPSR_A;
+        }
+    }
+
     /* External aborts are not possible in QEMU so A bit is always clear */
     return ret;
 }
@@ -8598,6 +8604,7 @@ void arm_log_exception(int idx)
             [EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
             [EXCP_LSERR] = "v8M LSERR UsageFault",
             [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
+            [EXCP_SERROR] = "SError Interrupt",
         };
 
         if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
@@ -8923,6 +8930,7 @@ static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
         addr = 0x0c;
         break;
     case EXCP_DATA_ABORT:
+    case EXCP_SERROR:
         env->cp15.dfar_s = env->exception.vaddress;
         qemu_log_mask(CPU_LOG_INT, "...with HDFAR 0x%x\n",
                       (uint32_t)env->exception.vaddress);
@@ -9051,6 +9059,7 @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
         offset = 4;
         break;
     case EXCP_DATA_ABORT:
+    case EXCP_SERROR:
         A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr);
         A32_BANKED_CURRENT_REG_SET(env, dfar, env->exception.vaddress);
         qemu_log_mask(CPU_LOG_INT, "...with DFSR 0x%x DFAR 0x%x\n",
@@ -9213,6 +9222,9 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     case EXCP_VFIQ:
         addr += 0x100;
         break;
+    case EXCP_SERROR:
+        addr += 0x180;
+        break;
     default:
         cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
     }
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index 33d414a684..a7271cc386 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -2211,6 +2211,14 @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
          * v7m_preserve_fp_state() helper function.
          */
         break;
+    case EXCP_SERROR:
+        env->v7m.cfsr[M_REG_NS] |=
+            (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
+        env->v7m.bfar = env->exception.vaddress;
+        qemu_log_mask(CPU_LOG_INT,
+                      "...with CFSR.PRECISERR and BFAR 0x%x\n",
+                      env->v7m.bfar);
+        break;
     default:
         cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
         return; /* Never happens.  Keep compiler happy.  */
diff --git a/target/arm/machine.c b/target/arm/machine.c
index 241890ac8c..e2ad2f156e 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -714,7 +714,8 @@ static int cpu_post_load(void *opaque, int version_id)
 
         env->irq_line_state = cs->interrupt_request &
             (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ |
-             CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VFIQ);
+             CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VFIQ |
+             CPU_INTERRUPT_SERROR);
     }
 
     /* Update the values list from the incoming migration data.
-- 
2.23.0



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v4 2/3] target/arm: Support VSError injection
  2020-02-18  2:04 [PATCH v4 0/3] hw/arm/virt: Simulate NMI Injection Gavin Shan
  2020-02-18  2:04 ` [PATCH v4 1/3] target/arm: Support SError injection Gavin Shan
@ 2020-02-18  2:04 ` Gavin Shan
  2020-02-18  2:04 ` [PATCH v4 3/3] hw/arm/virt: Simulate NMI injection Gavin Shan
  2 siblings, 0 replies; 7+ messages in thread
From: Gavin Shan @ 2020-02-18  2:04 UTC (permalink / raw)
  To: qemu-devel, qemu-arm
  Cc: peter.maydell, drjones, jthierry, aik, maz, richard.henderson,
	eric.auger, shan.gavin, pbonzini

This supports virtual SError injection, which can be used to inject SError
into a guest running on an emulated hypervisor. The functionality is
enabled only when we're in non-secure mode and {HCR.TGE, HCR.AMO} are set
to {0, 1}. It can also be masked by the PSTATE.A bit. Apart from that, the
implementation is similar to VFIQ.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 target/arm/cpu.c       | 48 +++++++++++++++++++++++++++++++++++++++++-
 target/arm/cpu.h       | 13 +++++++-----
 target/arm/helper.c    | 20 +++++++++++++++++-
 target/arm/internals.h | 10 +++++++++
 target/arm/machine.c   |  2 +-
 5 files changed, 85 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index e5750080bc..5969674941 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -78,7 +78,8 @@ static bool arm_cpu_has_work(CPUState *cs)
         && cs->interrupt_request &
         (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
          | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
-         | CPU_INTERRUPT_SERROR | CPU_INTERRUPT_EXITTB);
+         | CPU_INTERRUPT_SERROR | CPU_INTERRUPT_VSERROR
+         | CPU_INTERRUPT_EXITTB);
 }
 
 void arm_register_pre_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
@@ -452,6 +453,12 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
     case EXCP_SERROR:
        pstate_unmasked = !(env->daif & PSTATE_A);
        break;
+    case EXCP_VSERROR:
+        if (secure || !(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
+            /* VSError is only taken when hypervized and non-secure.  */
+            return false;
+        }
+        return !(env->daif & PSTATE_A);
     default:
         g_assert_not_reached();
     }
@@ -550,6 +557,15 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
         }
     }
 
+    if (interrupt_request & CPU_INTERRUPT_VSERROR) {
+        excp_idx = EXCP_VSERROR;
+        target_el = 1;
+        if (arm_excp_unmasked(cs, excp_idx, target_el,
+                              cur_el, secure, hcr_el2)) {
+            goto found;
+        }
+    }
+
     if (interrupt_request & CPU_INTERRUPT_FIQ) {
         excp_idx = EXCP_FIQ;
         target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
@@ -558,6 +574,7 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
             goto found;
         }
     }
+
     if (interrupt_request & CPU_INTERRUPT_HARD) {
         excp_idx = EXCP_IRQ;
         target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
@@ -566,6 +583,7 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
             goto found;
         }
     }
+
     if (interrupt_request & CPU_INTERRUPT_VIRQ) {
         excp_idx = EXCP_VIRQ;
         target_el = 1;
@@ -574,6 +592,7 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
             goto found;
         }
     }
+
     if (interrupt_request & CPU_INTERRUPT_VFIQ) {
         excp_idx = EXCP_VFIQ;
         target_el = 1;
@@ -672,6 +691,28 @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
     }
 }
 
+void arm_cpu_update_vserror(ARMCPU *cpu)
+{
+    /*
+     * Update the interrupt level for virtual SError, which is the logical
+     * OR of the HCR_EL2.VSE bit and the input line level from the GIC.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+
+    bool new_state = (env->cp15.hcr_el2 & HCR_VSE) ||
+        (env->irq_line_state & CPU_INTERRUPT_VSERROR);
+
+    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VSERROR) != 0)) {
+        if (new_state) {
+            cpu_interrupt(cs, CPU_INTERRUPT_VSERROR);
+        } else {
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERROR);
+        }
+    }
+}
+
+
 #ifndef CONFIG_USER_ONLY
 static void arm_cpu_set_irq(void *opaque, int irq, int level)
 {
@@ -684,6 +725,7 @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
         [ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ,
         [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ,
         [ARM_CPU_SERROR] = CPU_INTERRUPT_SERROR,
+        [ARM_CPU_VSERROR] = CPU_INTERRUPT_VSERROR,
     };
 
     if (level) {
@@ -710,6 +752,10 @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
             cpu_reset_interrupt(cs, mask[irq]);
         }
         break;
+    case ARM_CPU_VSERROR:
+        assert(arm_feature(env, ARM_FEATURE_EL2));
+        arm_cpu_update_vserror(cpu);
+        break;
     default:
         g_assert_not_reached();
     }
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 23e9f7ee2d..30056c6dbc 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -50,6 +50,7 @@
 #define EXCP_LSERR          21   /* v8M LSERR SecureFault */
 #define EXCP_UNALIGNED      22   /* v7M UNALIGNED UsageFault */
 #define EXCP_SERROR         23   /* SError Interrupt */
+#define EXCP_VSERROR        24   /* Virtual SError Interrupt */
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */
 
 #define ARMV7M_EXCP_RESET   1
@@ -80,10 +81,11 @@ enum {
 };
 
 /* ARM-specific interrupt pending bits.  */
-#define CPU_INTERRUPT_FIQ     CPU_INTERRUPT_TGT_EXT_1
-#define CPU_INTERRUPT_VIRQ    CPU_INTERRUPT_TGT_EXT_2
-#define CPU_INTERRUPT_VFIQ    CPU_INTERRUPT_TGT_EXT_3
-#define CPU_INTERRUPT_SERROR  CPU_INTERRUPT_TGT_EXT_4
+#define CPU_INTERRUPT_FIQ     CPU_INTERRUPT_TGT_EXT_0
+#define CPU_INTERRUPT_VIRQ    CPU_INTERRUPT_TGT_EXT_1
+#define CPU_INTERRUPT_VFIQ    CPU_INTERRUPT_TGT_EXT_2
+#define CPU_INTERRUPT_SERROR  CPU_INTERRUPT_TGT_EXT_3
+#define CPU_INTERRUPT_VSERROR CPU_INTERRUPT_TGT_EXT_4
 
 /* The usual mapping for an AArch64 system register to its AArch32
  * counterpart is for the 32 bit world to have access to the lower
@@ -105,7 +107,8 @@ enum {
 #define ARM_CPU_VIRQ    2
 #define ARM_CPU_VFIQ    3
 #define ARM_CPU_SERROR  4
-#define ARM_CPU_NUM_IRQ 5
+#define ARM_CPU_VSERROR 5
+#define ARM_CPU_NUM_IRQ 6
 
 /* ARM-specific extra insn start words:
  * 1: Conditional execution bits
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 3f00af4c41..7fa6653f10 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -1969,7 +1969,11 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
         }
     }
 
-    if (!allow_virt || !(hcr_el2 & HCR_AMO)) {
+    if (allow_virt && (hcr_el2 & HCR_AMO)) {
+        if (cs->interrupt_request & CPU_INTERRUPT_VSERROR) {
+            ret |= CPSR_A;
+        }
+    } else {
         if (cs->interrupt_request & CPU_INTERRUPT_SERROR) {
             ret |= CPSR_A;
         }
@@ -5103,6 +5107,7 @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
     g_assert(qemu_mutex_iothread_locked());
     arm_cpu_update_virq(cpu);
     arm_cpu_update_vfiq(cpu);
+    arm_cpu_update_vserror(cpu);
 }
 
 static void hcr_writehigh(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -8605,6 +8610,7 @@ void arm_log_exception(int idx)
             [EXCP_LSERR] = "v8M LSERR UsageFault",
             [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
             [EXCP_SERROR] = "SError Interrupt",
+            [EXCP_VSERROR] = "Virtual SError Interrupt",
         };
 
         if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
@@ -9113,6 +9119,17 @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
         mask = CPSR_A | CPSR_I | CPSR_F;
         offset = 0;
         break;
+    case EXCP_VSERROR:
+        A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr);
+        A32_BANKED_CURRENT_REG_SET(env, dfar, env->exception.vaddress);
+        qemu_log_mask(CPU_LOG_INT, "...with DFSR 0x%x DFAR 0x%x\n",
+                      env->exception.fsr,
+                      (uint32_t)env->exception.vaddress);
+        new_mode = ARM_CPU_MODE_ABT;
+        addr = 0x10;
+        mask = CPSR_A | CPSR_I;
+        offset = 8;
+        break;
     default:
         cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
         return; /* Never happens.  Keep compiler happy.  */
@@ -9223,6 +9240,7 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
         addr += 0x100;
         break;
     case EXCP_SERROR:
+    case EXCP_VSERROR:
         addr += 0x180;
         break;
     default:
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 58c4d707c5..4625bf984e 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1023,6 +1023,16 @@ void arm_cpu_update_virq(ARMCPU *cpu);
  */
 void arm_cpu_update_vfiq(ARMCPU *cpu);
 
+/**
+ * arm_cpu_update_vserror: Update CPU_INTERRUPT_VSERROR interrupt
+ *
+ * Update the CPU_INTERRUPT_VSERROR bit in cs->interrupt_request, following
+ * a change to either the input virtual SError line from the GIC or the
+ * HCR_EL2.VSE bit. Must be called with the iothread lock held.
+ */
+void arm_cpu_update_vserror(ARMCPU *cpu);
+
+
 /**
  * arm_mmu_idx_el:
  * @env: The cpu environment
diff --git a/target/arm/machine.c b/target/arm/machine.c
index e2ad2f156e..1bc9319f9b 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -715,7 +715,7 @@ static int cpu_post_load(void *opaque, int version_id)
         env->irq_line_state = cs->interrupt_request &
             (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ |
              CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VFIQ |
-             CPU_INTERRUPT_SERROR);
+             CPU_INTERRUPT_SERROR | CPU_INTERRUPT_VSERROR);
     }
 
     /* Update the values list from the incoming migration data.
-- 
2.23.0



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v4 3/3] hw/arm/virt: Simulate NMI injection
  2020-02-18  2:04 [PATCH v4 0/3] hw/arm/virt: Simulate NMI Injection Gavin Shan
  2020-02-18  2:04 ` [PATCH v4 1/3] target/arm: Support SError injection Gavin Shan
  2020-02-18  2:04 ` [PATCH v4 2/3] target/arm: Support VSError injection Gavin Shan
@ 2020-02-18  2:04 ` Gavin Shan
  2 siblings, 0 replies; 7+ messages in thread
From: Gavin Shan @ 2020-02-18  2:04 UTC (permalink / raw)
  To: qemu-devel, qemu-arm
  Cc: peter.maydell, drjones, jthierry, aik, maz, richard.henderson,
	eric.auger, shan.gavin, pbonzini

This implements the backend to support the HMP/QMP "nmi" command, which is
used to inject an NMI to crash the guest for debugging purposes. As the ARM
architecture doesn't support NMI, we simulate the behaviour by injecting
SError or data abort into the guest for the "virt" board.

An additional IRQ line is introduced for SError on each CPU. The IRQ line
is connected to the SError exception handler. The IRQ line on CPU#0 is
raised when HMP/QMP "nmi" is issued, injecting SError or data abort to
crash the guest. Note the IRQ line can be shared with other devices that
want the capability of reporting errors in future.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 hw/arm/virt.c                      | 34 +++++++++++++++++++++++++++++-
 hw/intc/arm_gic_common.c           |  3 +++
 hw/intc/arm_gicv3_common.c         |  3 +++
 include/hw/intc/arm_gic_common.h   |  1 +
 include/hw/intc/arm_gicv3_common.h |  1 +
 5 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index f788fe27d6..78549faa75 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -71,6 +71,8 @@
 #include "hw/mem/pc-dimm.h"
 #include "hw/mem/nvdimm.h"
 #include "hw/acpi/generic_event_device.h"
+#include "sysemu/hw_accel.h"
+#include "hw/nmi.h"
 
 #define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
     static void virt_##major##_##minor##_class_init(ObjectClass *oc, \
@@ -690,7 +692,7 @@ static void create_gic(VirtMachineState *vms)
         } else if (vms->virt) {
             qemu_irq irq = qdev_get_gpio_in(vms->gic,
                                             ppibase + ARCH_GIC_MAINT_IRQ);
-            sysbus_connect_irq(gicbusdev, i + 4 * smp_cpus, irq);
+            sysbus_connect_irq(gicbusdev, i + 5 * smp_cpus, irq);
         }
 
         qdev_connect_gpio_out_named(cpudev, "pmu-interrupt", 0,
@@ -704,6 +706,8 @@ static void create_gic(VirtMachineState *vms)
                            qdev_get_gpio_in(cpudev, ARM_CPU_VIRQ));
         sysbus_connect_irq(gicbusdev, i + 3 * smp_cpus,
                            qdev_get_gpio_in(cpudev, ARM_CPU_VFIQ));
+        sysbus_connect_irq(gicbusdev, i + 4 * smp_cpus,
+                           qdev_get_gpio_in(cpudev, ARM_CPU_SERROR));
     }
 
     fdt_add_gic_node(vms);
@@ -2026,10 +2030,36 @@ static int virt_kvm_type(MachineState *ms, const char *type_str)
     return requested_pa_size > 40 ? requested_pa_size : 0;
 }
 
+
+static void do_inject_serror(CPUState *cpu, run_on_cpu_data data)
+{
+    VirtMachineState *vms = data.host_ptr;
+    GICv3State *gicv3;
+    GICState *gicv2;
+
+    cpu_synchronize_state(cpu);
+
+    if (vms->gic_version == 3) {
+        gicv3 = ARM_GICV3_COMMON(OBJECT(vms->gic));
+        qemu_irq_raise(gicv3->cpu[0].parent_serror);
+    } else {
+        gicv2 = ARM_GIC_COMMON(OBJECT(vms->gic));
+        qemu_irq_raise(gicv2->parent_serror[0]);
+    }
+}
+
+static void virt_inject_serror(NMIState *n, int cpu_index, Error **errp)
+{
+    VirtMachineState *vms = VIRT_MACHINE(n);
+
+    async_run_on_cpu(first_cpu, do_inject_serror, RUN_ON_CPU_HOST_PTR(vms));
+}
+
 static void virt_machine_class_init(ObjectClass *oc, void *data)
 {
     MachineClass *mc = MACHINE_CLASS(oc);
     HotplugHandlerClass *hc = HOTPLUG_HANDLER_CLASS(oc);
+    NMIClass *nc = NMI_CLASS(oc);
 
     mc->init = machvirt_init;
     /* Start with max_cpus set to 512, which is the maximum supported by KVM.
@@ -2058,6 +2088,7 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
     hc->unplug_request = virt_machine_device_unplug_request_cb;
     mc->numa_mem_supported = true;
     mc->auto_enable_numa_with_memhp = true;
+    nc->nmi_monitor_handler = virt_inject_serror;
 }
 
 static void virt_instance_init(Object *obj)
@@ -2141,6 +2172,7 @@ static const TypeInfo virt_machine_info = {
     .instance_init = virt_instance_init,
     .interfaces = (InterfaceInfo[]) {
          { TYPE_HOTPLUG_HANDLER },
+         { TYPE_NMI },
          { }
     },
 };
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
index e6c4fe7a5a..f39cefdeea 100644
--- a/hw/intc/arm_gic_common.c
+++ b/hw/intc/arm_gic_common.c
@@ -155,6 +155,9 @@ void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
     for (i = 0; i < s->num_cpu; i++) {
         sysbus_init_irq(sbd, &s->parent_vfiq[i]);
     }
+    for (i = 0; i < s->num_cpu; i++) {
+        sysbus_init_irq(sbd, &s->parent_serror[i]);
+    }
     if (s->virt_extn) {
         for (i = 0; i < s->num_cpu; i++) {
             sysbus_init_irq(sbd, &s->maintenance_irq[i]);
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
index 58ef65f589..19a04449a0 100644
--- a/hw/intc/arm_gicv3_common.c
+++ b/hw/intc/arm_gicv3_common.c
@@ -288,6 +288,9 @@ void gicv3_init_irqs_and_mmio(GICv3State *s, qemu_irq_handler handler,
     for (i = 0; i < s->num_cpu; i++) {
         sysbus_init_irq(sbd, &s->cpu[i].parent_vfiq);
     }
+    for (i = 0; i < s->num_cpu; i++) {
+        sysbus_init_irq(sbd, &s->cpu[i].parent_serror);
+    }
 
     memory_region_init_io(&s->iomem_dist, OBJECT(s), ops, s,
                           "gicv3_dist", 0x10000);
diff --git a/include/hw/intc/arm_gic_common.h b/include/hw/intc/arm_gic_common.h
index b5585fec45..4cdeed7725 100644
--- a/include/hw/intc/arm_gic_common.h
+++ b/include/hw/intc/arm_gic_common.h
@@ -70,6 +70,7 @@ typedef struct GICState {
     qemu_irq parent_fiq[GIC_NCPU];
     qemu_irq parent_virq[GIC_NCPU];
     qemu_irq parent_vfiq[GIC_NCPU];
+    qemu_irq parent_serror[GIC_NCPU];
     qemu_irq maintenance_irq[GIC_NCPU];
 
     /* GICD_CTLR; for a GIC with the security extensions the NS banked version
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
index 31ec9a1ae4..a025a04727 100644
--- a/include/hw/intc/arm_gicv3_common.h
+++ b/include/hw/intc/arm_gicv3_common.h
@@ -152,6 +152,7 @@ struct GICv3CPUState {
     qemu_irq parent_fiq;
     qemu_irq parent_virq;
     qemu_irq parent_vfiq;
+    qemu_irq parent_serror;
     qemu_irq maintenance_irq;
 
     /* Redistributor */
-- 
2.23.0



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v4 1/3] target/arm: Support SError injection
  2020-02-18  2:04 ` [PATCH v4 1/3] target/arm: Support SError injection Gavin Shan
@ 2020-02-18 16:28   ` Marc Zyngier
  2020-02-18 23:09     ` Gavin Shan
  0 siblings, 1 reply; 7+ messages in thread
From: Marc Zyngier @ 2020-02-18 16:28 UTC (permalink / raw)
  To: Gavin Shan
  Cc: peter.maydell, drjones, jthierry, aik, richard.henderson,
	qemu-devel, eric.auger, qemu-arm, shan.gavin, pbonzini

On 2020-02-18 02:04, Gavin Shan wrote:
> This supports SError injection, which will be used by "virt" board to
> simulating the behavior of NMI injection in next patch. As Peter 
> Maydell
> suggested, this adds a new interrupt (ARM_CPU_SERROR), which is 
> parallel
> to CPU_INTERRUPT_HARD. The backend depends on if kvm is enabled or not.
> kvm_vcpu_ioctl(cpu, KVM_SET_VCPU_EVENTS) is leveraged to inject SError
> or data abort to guest. When TCG is enabled, the behavior is simulated
> by injecting SError and data abort to guest.

s/and/or/ (you can't inject both at the same time).

> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  target/arm/cpu.c      | 69 +++++++++++++++++++++++++++++++++++--------
>  target/arm/cpu.h      | 20 ++++++++-----
>  target/arm/helper.c   | 12 ++++++++
>  target/arm/m_helper.c |  8 +++++
>  target/arm/machine.c  |  3 +-
>  5 files changed, 91 insertions(+), 21 deletions(-)
> 
> diff --git a/target/arm/cpu.c b/target/arm/cpu.c
> index de733aceeb..e5750080bc 100644
> --- a/target/arm/cpu.c
> +++ b/target/arm/cpu.c
> @@ -78,7 +78,7 @@ static bool arm_cpu_has_work(CPUState *cs)
>          && cs->interrupt_request &
>          (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
>           | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
> -         | CPU_INTERRUPT_EXITTB);
> +         | CPU_INTERRUPT_SERROR | CPU_INTERRUPT_EXITTB);
>  }
> 
>  void arm_register_pre_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn 
> *hook,
> @@ -449,6 +449,9 @@ static inline bool arm_excp_unmasked(CPUState *cs,
> unsigned int excp_idx,
>              return false;
>          }
>          return !(env->daif & PSTATE_I);
> +    case EXCP_SERROR:
> +       pstate_unmasked = !(env->daif & PSTATE_A);
> +       break;

nit: Consider keeping the physical interrupts together, as they are
closely related.

>      default:
>          g_assert_not_reached();
>      }
> @@ -538,6 +541,15 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int
> interrupt_request)
> 
>      /* The prioritization of interrupts is IMPLEMENTATION DEFINED. */
> 
> +    if (interrupt_request & CPU_INTERRUPT_SERROR) {
> +        excp_idx = EXCP_SERROR;
> +        target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, 
> secure);
> +        if (arm_excp_unmasked(cs, excp_idx, target_el,
> +                              cur_el, secure, hcr_el2)) {
> +            goto found;
> +        }
> +    }
> +
>      if (interrupt_request & CPU_INTERRUPT_FIQ) {
>          excp_idx = EXCP_FIQ;
>          target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, 
> secure);
> @@ -570,6 +582,7 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int
> interrupt_request)
>              goto found;
>          }
>      }
> +
>      return false;
> 
>   found:
> @@ -585,7 +598,7 @@ static bool arm_v7m_cpu_exec_interrupt(CPUState
> *cs, int interrupt_request)
>      CPUClass *cc = CPU_GET_CLASS(cs);
>      ARMCPU *cpu = ARM_CPU(cs);
>      CPUARMState *env = &cpu->env;
> -    bool ret = false;
> +    uint32_t excp_idx;
> 
>      /* ARMv7-M interrupt masking works differently than -A or -R.
>       * There is no FIQ/IRQ distinction. Instead of I and F bits
> @@ -594,13 +607,26 @@ static bool arm_v7m_cpu_exec_interrupt(CPUState
> *cs, int interrupt_request)
>       * (which depends on state like BASEPRI, FAULTMASK and the
>       * currently active exception).
>       */
> -    if (interrupt_request & CPU_INTERRUPT_HARD
> -        && (armv7m_nvic_can_take_pending_exception(env->nvic))) {
> -        cs->exception_index = EXCP_IRQ;
> -        cc->do_interrupt(cs);
> -        ret = true;
> +    if (!armv7m_nvic_can_take_pending_exception(env->nvic)) {
> +        return false;
> +    }
> +
> +    if (interrupt_request & CPU_INTERRUPT_SERROR) {
> +        excp_idx = EXCP_SERROR;
> +        goto found;
> +    }
> +
> +    if (interrupt_request & CPU_INTERRUPT_HARD) {
> +        excp_idx = EXCP_IRQ;
> +        goto found;
>      }
> -    return ret;
> +
> +    return false;
> +
> +found:
> +    cs->exception_index = excp_idx;
> +    cc->do_interrupt(cs);
> +    return true;
>  }
>  #endif
> 
> @@ -656,7 +682,8 @@ static void arm_cpu_set_irq(void *opaque, int irq,
> int level)
>          [ARM_CPU_IRQ] = CPU_INTERRUPT_HARD,
>          [ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ,
>          [ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ,
> -        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
> +        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ,
> +        [ARM_CPU_SERROR] = CPU_INTERRUPT_SERROR,
>      };
> 
>      if (level) {
> @@ -676,6 +703,7 @@ static void arm_cpu_set_irq(void *opaque, int irq,
> int level)
>          break;
>      case ARM_CPU_IRQ:
>      case ARM_CPU_FIQ:
> +    case ARM_CPU_SERROR:
>          if (level) {
>              cpu_interrupt(cs, mask[irq]);
>          } else {
> @@ -693,8 +721,10 @@ static void arm_cpu_kvm_set_irq(void *opaque, int
> irq, int level)
>      ARMCPU *cpu = opaque;
>      CPUARMState *env = &cpu->env;
>      CPUState *cs = CPU(cpu);
> +    struct kvm_vcpu_events events;
>      uint32_t linestate_bit;
>      int irq_id;
> +    bool inject_irq = true;
> 
>      switch (irq) {
>      case ARM_CPU_IRQ:
> @@ -705,6 +735,14 @@ static void arm_cpu_kvm_set_irq(void *opaque, int
> irq, int level)
>          irq_id = KVM_ARM_IRQ_CPU_FIQ;
>          linestate_bit = CPU_INTERRUPT_FIQ;
>          break;
> +    case ARM_CPU_SERROR:
> +        if (!kvm_has_vcpu_events()) {
> +            return;
> +        }
> +
> +        inject_irq = false;
> +        linestate_bit = CPU_INTERRUPT_SERROR;
> +        break;
>      default:
>          g_assert_not_reached();
>      }
> @@ -714,7 +752,14 @@ static void arm_cpu_kvm_set_irq(void *opaque, int
> irq, int level)
>      } else {
>          env->irq_line_state &= ~linestate_bit;
>      }
> -    kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, 
> !!level);
> +
> +    if (inject_irq) {

You could just have (linestate_bit != CPU_INTERRUPT_SERROR) here, and
not have inject_irq at all.
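
A rough sketch of that simplification, for illustration only (untested,
based on the hunk above):

    if (linestate_bit != CPU_INTERRUPT_SERROR) {
        kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level);
    } else if (level) {
        memset(&events, 0, sizeof(events));
        events.exception.serror_pending = 1;
        kvm_vcpu_ioctl(cs, KVM_SET_VCPU_EVENTS, &events);
    }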

> +        kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, 
> !!level);
> +    } else if (level) {

Is there any case where you'd call this function with a SError and
level == 0? And even if it happens, you could exit early in the above
switch statement.

> +        memset(&events, 0, sizeof(events));
> +        events.exception.serror_pending = 1;
> +        kvm_vcpu_ioctl(cs, KVM_SET_VCPU_EVENTS, &events);
> +    }
>  #endif
>  }
> 
> @@ -1064,9 +1109,9 @@ static void arm_cpu_initfn(Object *obj)
>          /* VIRQ and VFIQ are unused with KVM but we add them to 
> maintain
>           * the same interface as non-KVM CPUs.
>           */
> -        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 4);
> +        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 
> ARM_CPU_NUM_IRQ);
>      } else {
> -        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 4);
> +        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 
> ARM_CPU_NUM_IRQ);
>      }
> 
>      qdev_init_gpio_out(DEVICE(cpu), cpu->gt_timer_outputs,
> diff --git a/target/arm/cpu.h b/target/arm/cpu.h
> index e943ffe8a9..23e9f7ee2d 100644
> --- a/target/arm/cpu.h
> +++ b/target/arm/cpu.h
> @@ -49,6 +49,7 @@
>  #define EXCP_LAZYFP         20   /* v7M fault during lazy FP stacking 
> */
>  #define EXCP_LSERR          21   /* v8M LSERR SecureFault */
>  #define EXCP_UNALIGNED      22   /* v7M UNALIGNED UsageFault */
> +#define EXCP_SERROR         23   /* SError Interrupt */
>  /* NB: add new EXCP_ defines to the array in arm_log_exception() too 
> */
> 
>  #define ARMV7M_EXCP_RESET   1
> @@ -79,9 +80,10 @@ enum {
>  };
> 
>  /* ARM-specific interrupt pending bits.  */
> -#define CPU_INTERRUPT_FIQ   CPU_INTERRUPT_TGT_EXT_1
> -#define CPU_INTERRUPT_VIRQ  CPU_INTERRUPT_TGT_EXT_2
> -#define CPU_INTERRUPT_VFIQ  CPU_INTERRUPT_TGT_EXT_3
> +#define CPU_INTERRUPT_FIQ     CPU_INTERRUPT_TGT_EXT_1
> +#define CPU_INTERRUPT_VIRQ    CPU_INTERRUPT_TGT_EXT_2
> +#define CPU_INTERRUPT_VFIQ    CPU_INTERRUPT_TGT_EXT_3
> +#define CPU_INTERRUPT_SERROR  CPU_INTERRUPT_TGT_EXT_4
> 
>  /* The usual mapping for an AArch64 system register to its AArch32
>   * counterpart is for the 32 bit world to have access to the lower
> @@ -97,11 +99,13 @@ enum {
>  #define offsetofhigh32(S, M) (offsetof(S, M) + sizeof(uint32_t))
>  #endif
> 
> -/* Meanings of the ARMCPU object's four inbound GPIO lines */
> -#define ARM_CPU_IRQ 0
> -#define ARM_CPU_FIQ 1
> -#define ARM_CPU_VIRQ 2
> -#define ARM_CPU_VFIQ 3
> +/* ARMCPU object's inbound GPIO lines */
> +#define ARM_CPU_IRQ     0
> +#define ARM_CPU_FIQ     1
> +#define ARM_CPU_VIRQ    2
> +#define ARM_CPU_VFIQ    3
> +#define ARM_CPU_SERROR  4
> +#define ARM_CPU_NUM_IRQ 5

This probably should be turned into an enum, given that it is going to
grow again on the following patch.
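
For instance, something along these lines (just a sketch of the
suggestion; the values and names are unchanged):

    enum {
        ARM_CPU_IRQ = 0,
        ARM_CPU_FIQ,
        ARM_CPU_VIRQ,
        ARM_CPU_VFIQ,
        ARM_CPU_SERROR,
        ARM_CPU_NUM_IRQ
    };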

> 
>  /* ARM-specific extra insn start words:
>   * 1: Conditional execution bits
> diff --git a/target/arm/helper.c b/target/arm/helper.c
> index 366dbcf460..3f00af4c41 100644
> --- a/target/arm/helper.c
> +++ b/target/arm/helper.c
> @@ -1969,6 +1969,12 @@ static uint64_t isr_read(CPUARMState *env,
> const ARMCPRegInfo *ri)
>          }
>      }
> 
> +    if (!allow_virt || !(hcr_el2 & HCR_AMO)) {

nit: It would be nicer to write this as

        if (!(allow_virt && (hcr_el2 & HCR_AMO)))

which fits the current code better, and makes a slightly less ugly
rewrite in the following patch.
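
Combined with the body from the hunk below, the check would then read
roughly as follows (illustrative only):

    if (!(allow_virt && (hcr_el2 & HCR_AMO))) {
        if (cs->interrupt_request & CPU_INTERRUPT_SERROR) {
            ret |= CPSR_A;
        }
    }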

> +        if (cs->interrupt_request & CPU_INTERRUPT_SERROR) {
> +            ret |= CPSR_A;
> +        }
> +    }
> +
>      /* External aborts are not possible in QEMU so A bit is always 
> clear */

nit: this comment seems obsolete now.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v4 1/3] target/arm: Support SError injection
  2020-02-18 16:28   ` Marc Zyngier
@ 2020-02-18 23:09     ` Gavin Shan
  2020-02-19  8:03       ` Marc Zyngier
  0 siblings, 1 reply; 7+ messages in thread
From: Gavin Shan @ 2020-02-18 23:09 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: peter.maydell, drjones, jthierry, aik, richard.henderson,
	qemu-devel, eric.auger, qemu-arm, shan.gavin, pbonzini

Hi Marc,

On 2/19/20 3:28 AM, Marc Zyngier wrote:
> On 2020-02-18 02:04, Gavin Shan wrote:
>> This supports SError injection, which will be used by "virt" board to
>> simulating the behavior of NMI injection in next patch. As Peter Maydell
>> suggested, this adds a new interrupt (ARM_CPU_SERROR), which is parallel
>> to CPU_INTERRUPT_HARD. The backend depends on if kvm is enabled or not.
>> kvm_vcpu_ioctl(cpu, KVM_SET_VCPU_EVENTS) is leveraged to inject SError
>> or data abort to guest. When TCG is enabled, the behavior is simulated
>> by injecting SError and data abort to guest.
> 
> s/and/or/ (you can't inject both at the same time).
> 

Absolutely, it will be corrected in v5, which will be held for now. I hope
to receive comments from Peter and Richard before doing another respin :)

>>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>>  target/arm/cpu.c      | 69 +++++++++++++++++++++++++++++++++++--------
>>  target/arm/cpu.h      | 20 ++++++++-----
>>  target/arm/helper.c   | 12 ++++++++
>>  target/arm/m_helper.c |  8 +++++
>>  target/arm/machine.c  |  3 +-
>>  5 files changed, 91 insertions(+), 21 deletions(-)
>>
>> diff --git a/target/arm/cpu.c b/target/arm/cpu.c
>> index de733aceeb..e5750080bc 100644
>> --- a/target/arm/cpu.c
>> +++ b/target/arm/cpu.c
>> @@ -78,7 +78,7 @@ static bool arm_cpu_has_work(CPUState *cs)
>>          && cs->interrupt_request &
>>          (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
>>           | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
>> -         | CPU_INTERRUPT_EXITTB);
>> +         | CPU_INTERRUPT_SERROR | CPU_INTERRUPT_EXITTB);
>>  }
>>
>>  void arm_register_pre_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
>> @@ -449,6 +449,9 @@ static inline bool arm_excp_unmasked(CPUState *cs,
>> unsigned int excp_idx,
>>              return false;
>>          }
>>          return !(env->daif & PSTATE_I);
>> +    case EXCP_SERROR:
>> +       pstate_unmasked = !(env->daif & PSTATE_A);
>> +       break;
> 
> nit: Consider keeping the physical interrupts together, as they are closely
> related.
> 

Sorry, I didn't get the point. Maybe you're suggesting something like the
following? If so, I'm not sure it's necessary.

     pstate_unmasked = !(env->daif & (PSTATE_A | PSTATE_I));

I think PSTATE_A is enough to mask out SError, according to the ARMv8
Architecture Reference Manual (D1.7), as below:

    A, I, F Asynchronous exception mask bits:
    A
       SError interrupt mask bit.
    I
       IRQ interrupt mask bit.
    F
       FIQ interrupt mask bit.

>>      default:
>>          g_assert_not_reached();
>>      }
>> @@ -538,6 +541,15 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int
>> interrupt_request)
>>
>>      /* The prioritization of interrupts is IMPLEMENTATION DEFINED. */
>>
>> +    if (interrupt_request & CPU_INTERRUPT_SERROR) {
>> +        excp_idx = EXCP_SERROR;
>> +        target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
>> +        if (arm_excp_unmasked(cs, excp_idx, target_el,
>> +                              cur_el, secure, hcr_el2)) {
>> +            goto found;
>> +        }
>> +    }
>> +
>>      if (interrupt_request & CPU_INTERRUPT_FIQ) {
>>          excp_idx = EXCP_FIQ;
>>          target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
>> @@ -570,6 +582,7 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int
>> interrupt_request)
>>              goto found;
>>          }
>>      }
>> +
>>      return false;
>>
>>   found:
>> @@ -585,7 +598,7 @@ static bool arm_v7m_cpu_exec_interrupt(CPUState
>> *cs, int interrupt_request)
>>      CPUClass *cc = CPU_GET_CLASS(cs);
>>      ARMCPU *cpu = ARM_CPU(cs);
>>      CPUARMState *env = &cpu->env;
>> -    bool ret = false;
>> +    uint32_t excp_idx;
>>
>>      /* ARMv7-M interrupt masking works differently than -A or -R.
>>       * There is no FIQ/IRQ distinction. Instead of I and F bits
>> @@ -594,13 +607,26 @@ static bool arm_v7m_cpu_exec_interrupt(CPUState
>> *cs, int interrupt_request)
>>       * (which depends on state like BASEPRI, FAULTMASK and the
>>       * currently active exception).
>>       */
>> -    if (interrupt_request & CPU_INTERRUPT_HARD
>> -        && (armv7m_nvic_can_take_pending_exception(env->nvic))) {
>> -        cs->exception_index = EXCP_IRQ;
>> -        cc->do_interrupt(cs);
>> -        ret = true;
>> +    if (!armv7m_nvic_can_take_pending_exception(env->nvic)) {
>> +        return false;
>> +    }
>> +
>> +    if (interrupt_request & CPU_INTERRUPT_SERROR) {
>> +        excp_idx = EXCP_SERROR;
>> +        goto found;
>> +    }
>> +
>> +    if (interrupt_request & CPU_INTERRUPT_HARD) {
>> +        excp_idx = EXCP_IRQ;
>> +        goto found;
>>      }
>> -    return ret;
>> +
>> +    return false;
>> +
>> +found:
>> +    cs->exception_index = excp_idx;
>> +    cc->do_interrupt(cs);
>> +    return true;
>>  }
>>  #endif
>>
>> @@ -656,7 +682,8 @@ static void arm_cpu_set_irq(void *opaque, int irq,
>> int level)
>>          [ARM_CPU_IRQ] = CPU_INTERRUPT_HARD,
>>          [ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ,
>>          [ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ,
>> -        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
>> +        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ,
>> +        [ARM_CPU_SERROR] = CPU_INTERRUPT_SERROR,
>>      };
>>
>>      if (level) {
>> @@ -676,6 +703,7 @@ static void arm_cpu_set_irq(void *opaque, int irq,
>> int level)
>>          break;
>>      case ARM_CPU_IRQ:
>>      case ARM_CPU_FIQ:
>> +    case ARM_CPU_SERROR:
>>          if (level) {
>>              cpu_interrupt(cs, mask[irq]);
>>          } else {
>> @@ -693,8 +721,10 @@ static void arm_cpu_kvm_set_irq(void *opaque, int
>> irq, int level)
>>      ARMCPU *cpu = opaque;
>>      CPUARMState *env = &cpu->env;
>>      CPUState *cs = CPU(cpu);
>> +    struct kvm_vcpu_events events;
>>      uint32_t linestate_bit;
>>      int irq_id;
>> +    bool inject_irq = true;
>>
>>      switch (irq) {
>>      case ARM_CPU_IRQ:
>> @@ -705,6 +735,14 @@ static void arm_cpu_kvm_set_irq(void *opaque, int
>> irq, int level)
>>          irq_id = KVM_ARM_IRQ_CPU_FIQ;
>>          linestate_bit = CPU_INTERRUPT_FIQ;
>>          break;
>> +    case ARM_CPU_SERROR:
>> +        if (!kvm_has_vcpu_events()) {
>> +            return;
>> +        }
>> +
>> +        inject_irq = false;
>> +        linestate_bit = CPU_INTERRUPT_SERROR;
>> +        break;
>>      default:
>>          g_assert_not_reached();
>>      }
>> @@ -714,7 +752,14 @@ static void arm_cpu_kvm_set_irq(void *opaque, int
>> irq, int level)
>>      } else {
>>          env->irq_line_state &= ~linestate_bit;
>>      }
>> -    kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level);
>> +
>> +    if (inject_irq) {
> 
> You could just have (linestate_bit != CPU_INTERRUPT_SERROR) here, and not have
> inject_irq at all.
> 

Sure, will be improved in v5.

>> +        kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level);
>> +    } else if (level) {
> 
> Is there any case where you'd call this function with a SError and level == 0?
> And even if it happens, you could exit early in the above switch statement.
> 

The combination of SError and level == 0 doesn't exist for now, meaning the
SError always comes with level == 1. We can't exit early in the above switch
statement because env->irq_line_state needs to be updated accordingly.

>> +        memset(&events, 0, sizeof(events));
>> +        events.exception.serror_pending = 1;
>> +        kvm_vcpu_ioctl(cs, KVM_SET_VCPU_EVENTS, &events);
>> +    }
>>  #endif
>>  }
>>
>> @@ -1064,9 +1109,9 @@ static void arm_cpu_initfn(Object *obj)
>>          /* VIRQ and VFIQ are unused with KVM but we add them to maintain
>>           * the same interface as non-KVM CPUs.
>>           */
>> -        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 4);
>> +        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, ARM_CPU_NUM_IRQ);
>>      } else {
>> -        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 4);
>> +        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, ARM_CPU_NUM_IRQ);
>>      }
>>
>>      qdev_init_gpio_out(DEVICE(cpu), cpu->gt_timer_outputs,
>> diff --git a/target/arm/cpu.h b/target/arm/cpu.h
>> index e943ffe8a9..23e9f7ee2d 100644
>> --- a/target/arm/cpu.h
>> +++ b/target/arm/cpu.h
>> @@ -49,6 +49,7 @@
>>  #define EXCP_LAZYFP         20   /* v7M fault during lazy FP stacking */
>>  #define EXCP_LSERR          21   /* v8M LSERR SecureFault */
>>  #define EXCP_UNALIGNED      22   /* v7M UNALIGNED UsageFault */
>> +#define EXCP_SERROR         23   /* SError Interrupt */
>>  /* NB: add new EXCP_ defines to the array in arm_log_exception() too */
>>
>>  #define ARMV7M_EXCP_RESET   1
>> @@ -79,9 +80,10 @@ enum {
>>  };
>>
>>  /* ARM-specific interrupt pending bits.  */
>> -#define CPU_INTERRUPT_FIQ   CPU_INTERRUPT_TGT_EXT_1
>> -#define CPU_INTERRUPT_VIRQ  CPU_INTERRUPT_TGT_EXT_2
>> -#define CPU_INTERRUPT_VFIQ  CPU_INTERRUPT_TGT_EXT_3
>> +#define CPU_INTERRUPT_FIQ     CPU_INTERRUPT_TGT_EXT_1
>> +#define CPU_INTERRUPT_VIRQ    CPU_INTERRUPT_TGT_EXT_2
>> +#define CPU_INTERRUPT_VFIQ    CPU_INTERRUPT_TGT_EXT_3
>> +#define CPU_INTERRUPT_SERROR  CPU_INTERRUPT_TGT_EXT_4
>>
>>  /* The usual mapping for an AArch64 system register to its AArch32
>>   * counterpart is for the 32 bit world to have access to the lower
>> @@ -97,11 +99,13 @@ enum {
>>  #define offsetofhigh32(S, M) (offsetof(S, M) + sizeof(uint32_t))
>>  #endif
>>
>> -/* Meanings of the ARMCPU object's four inbound GPIO lines */
>> -#define ARM_CPU_IRQ 0
>> -#define ARM_CPU_FIQ 1
>> -#define ARM_CPU_VIRQ 2
>> -#define ARM_CPU_VFIQ 3
>> +/* ARMCPU object's inbound GPIO lines */
>> +#define ARM_CPU_IRQ     0
>> +#define ARM_CPU_FIQ     1
>> +#define ARM_CPU_VIRQ    2
>> +#define ARM_CPU_VFIQ    3
>> +#define ARM_CPU_SERROR  4
>> +#define ARM_CPU_NUM_IRQ 5
> 
> This probably should be turned into an enum, given that it is going to
> grow again on the following patch.
> 

Nice idea, will do in v5.

>>
>>  /* ARM-specific extra insn start words:
>>   * 1: Conditional execution bits
>> diff --git a/target/arm/helper.c b/target/arm/helper.c
>> index 366dbcf460..3f00af4c41 100644
>> --- a/target/arm/helper.c
>> +++ b/target/arm/helper.c
>> @@ -1969,6 +1969,12 @@ static uint64_t isr_read(CPUARMState *env,
>> const ARMCPRegInfo *ri)
>>          }
>>      }
>>
>> +    if (!allow_virt || !(hcr_el2 & HCR_AMO)) {
> 
> nit: It would be nicer to write this as
> 
>         if (!(allow_virt && (hcr_el2 & HCR_AMO)))
> 
> which fits the current code better, and makes a slightly less ugly
> rewrite in the following patch.
> 

Yeah, I originally had the code as suggested, but changed it to the current
form. Anyway, it will be changed accordingly in v5.

>> +        if (cs->interrupt_request & CPU_INTERRUPT_SERROR) {
>> +            ret |= CPSR_A;
>> +        }
>> +    }
>> +
>>      /* External aborts are not possible in QEMU so A bit is always clear */
> 
> nit: this comment seems obsolete now.
> 

Yep, will be corrected in v5.

Thanks,
Gavin



^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v4 1/3] target/arm: Support SError injection
  2020-02-18 23:09     ` Gavin Shan
@ 2020-02-19  8:03       ` Marc Zyngier
  0 siblings, 0 replies; 7+ messages in thread
From: Marc Zyngier @ 2020-02-19  8:03 UTC (permalink / raw)
  To: Gavin Shan
  Cc: peter.maydell, drjones, jthierry, aik, richard.henderson,
	qemu-devel, eric.auger, qemu-arm, shan.gavin, pbonzini

On Wed, 19 Feb 2020 10:09:39 +1100
Gavin Shan <gshan@redhat.com> wrote:

> Hi Marc,
> 
> On 2/19/20 3:28 AM, Marc Zyngier wrote:
> > On 2020-02-18 02:04, Gavin Shan wrote:  
> >> This supports SError injection, which will be used by "virt" board to
> >> simulating the behavior of NMI injection in next patch. As Peter Maydell
> >> suggested, this adds a new interrupt (ARM_CPU_SERROR), which is parallel
> >> to CPU_INTERRUPT_HARD. The backend depends on if kvm is enabled or not.
> >> kvm_vcpu_ioctl(cpu, KVM_SET_VCPU_EVENTS) is leveraged to inject SError
> >> or data abort to guest. When TCG is enabled, the behavior is simulated
> >> by injecting SError and data abort to guest.  
> > 
> > s/and/or/ (you can't inject both at the same time).
> >   
> 
> Absolutely, it will be corrected in v5, which will be held for now. I hope
> to receive comments from Peter and Richard before doing another respin :)

Sure, there is no hurry at all.

> 
> >>
> >> Signed-off-by: Gavin Shan <gshan@redhat.com>
> >> ---
> >>  target/arm/cpu.c      | 69 +++++++++++++++++++++++++++++++++++--------
> >>  target/arm/cpu.h      | 20 ++++++++-----
> >>  target/arm/helper.c   | 12 ++++++++
> >>  target/arm/m_helper.c |  8 +++++
> >>  target/arm/machine.c  |  3 +-
> >>  5 files changed, 91 insertions(+), 21 deletions(-)
> >>
> >> diff --git a/target/arm/cpu.c b/target/arm/cpu.c
> >> index de733aceeb..e5750080bc 100644
> >> --- a/target/arm/cpu.c
> >> +++ b/target/arm/cpu.c
> >> @@ -78,7 +78,7 @@ static bool arm_cpu_has_work(CPUState *cs)
> >>          && cs->interrupt_request &
> >>          (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
> >>           | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
> >> -         | CPU_INTERRUPT_EXITTB);
> >> +         | CPU_INTERRUPT_SERROR | CPU_INTERRUPT_EXITTB)  
> ;
> >>  }
> >>
> >>  void arm_register_pre_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *  
> hook,
> >> @@ -449,6 +449,9 @@ static inline bool arm_excp_unmasked(CPUState *cs,
> >> unsigned int excp_idx,
> >>              return false;
> >>          }
> >>          return !(env->daif & PSTATE_I);
> >> +    case EXCP_SERROR:
> >> +       pstate_unmasked = !(env->daif & PSTATE_A);
> >> +       break;  
> > 
> > nit: Consider keeping the physical interrupts together, as they are
> > closely related.
> >   
> 
> Sorry, I didn't get the point. Maybe you're suggesting something like the
> following? If so, I'm not sure it's necessary.
> 
>      pstate_unmasked = !(env->daif & (PSTATE_A | PSTATE_I));
> 
> I think PSTATE_A is enough to mask out SError, according to the ARMv8
> Architecture Reference Manual (D1.7), as below:
> 
>     A, I, F Asynchronous exception mask bits:
>     A
>        SError interrupt mask bit.
>     I
>        IRQ interrupt mask bit.
>     F
>        FIQ interrupt mask bit.

No, all I'm suggesting is that you keep the cases for IRQ, FIQ and
SError close together in the switch statement, instead of placing
SError after the virtual interrupts. Given that they use the same code
pattern, it makes sense to order them this way. But as I said, this is
a nit, and not something that affects the outcome of this code.

> >>      default:
> >>          g_assert_not_reached();
> >>      }
> >> @@ -538,6 +541,15 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int
> >> interrupt_request)
> >>
> >>      /* The prioritization of interrupts is IMPLEMENTATION DEFINED. */
> >>
> >> +    if (interrupt_request & CPU_INTERRUPT_SERROR) {
> >> +        excp_idx = EXCP_SERROR;
> >> +        target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
> >> +        if (arm_excp_unmasked(cs, excp_idx, target_el,
> >> +                              cur_el, secure, hcr_el2)) {
> >> +            goto found;
> >> +        }
> >> +    }
> >> +
> >>      if (interrupt_request & CPU_INTERRUPT_FIQ) {
> >>          excp_idx = EXCP_FIQ;
> >>          target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
> >> @@ -570,6 +582,7 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int
> >> interrupt_request)
> >>              goto found;
> >>          }
> >>      }
> >> +
> >>      return false;
> >>
> >>   found:
> >> @@ -585,7 +598,7 @@ static bool arm_v7m_cpu_exec_interrupt(CPUState
> >> *cs, int interrupt_request)
> >>      CPUClass *cc = CPU_GET_CLASS(cs);
> >>      ARMCPU *cpu = ARM_CPU(cs);
> >>      CPUARMState *env = &cpu->env;
> >> -    bool ret = false;
> >> +    uint32_t excp_idx;
> >>
> >>      /* ARMv7-M interrupt masking works differently than -A or -R.
> >>       * There is no FIQ/IRQ distinction. Instead of I and F bits
> >> @@ -594,13 +607,26 @@ static bool arm_v7m_cpu_exec_interrupt(CPUState
> >> *cs, int interrupt_request)
> >>       * (which depends on state like BASEPRI, FAULTMASK and the
> >>       * currently active exception).
> >>       */
> >> -    if (interrupt_request & CPU_INTERRUPT_HARD
> >> -        && (armv7m_nvic_can_take_pending_exception(env->nvic))) {
> >> -        cs->exception_index = EXCP_IRQ;
> >> -        cc->do_interrupt(cs);
> >> -        ret = true;
> >> +    if (!armv7m_nvic_can_take_pending_exception(env->nvic)) {
> >> +        return false;
> >> +    }
> >> +
> >> +    if (interrupt_request & CPU_INTERRUPT_SERROR) {
> >> +        excp_idx = EXCP_SERROR;
> >> +        goto found;
> >> +    }
> >> +
> >> +    if (interrupt_request & CPU_INTERRUPT_HARD) {
> >> +        excp_idx = EXCP_IRQ;
> >> +        goto found;
> >>      }
> >> -    return ret;
> >> +
> >> +    return false;
> >> +
> >> +found:
> >> +    cs->exception_index = excp_idx;
> >> +    cc->do_interrupt(cs);
> >> +    return true;
> >>  }
> >>  #endif
> >>
> >> @@ -656,7 +682,8 @@ static void arm_cpu_set_irq(void *opaque, int irq,
> >> int level)
> >>          [ARM_CPU_IRQ] = CPU_INTERRUPT_HARD,
> >>          [ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ,
> >>          [ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ,
> >> -        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
> >> +        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ,
> >> +        [ARM_CPU_SERROR] = CPU_INTERRUPT_SERROR,
> >>      };
> >>
> >>      if (level) {
> >> @@ -676,6 +703,7 @@ static void arm_cpu_set_irq(void *opaque, int irq,
> >> int level)
> >>          break;
> >>      case ARM_CPU_IRQ:
> >>      case ARM_CPU_FIQ:
> >> +    case ARM_CPU_SERROR:
> >>          if (level) {
> >>              cpu_interrupt(cs, mask[irq]);
> >>          } else {
> >> @@ -693,8 +721,10 @@ static void arm_cpu_kvm_set_irq(void *opaque, int
> >> irq, int level)
> >>      ARMCPU *cpu = opaque;
> >>      CPUARMState *env = &cpu->env;
> >>      CPUState *cs = CPU(cpu);
> >> +    struct kvm_vcpu_events events;
> >>      uint32_t linestate_bit;
> >>      int irq_id;
> >> +    bool inject_irq = true;
> >>
> >>      switch (irq) {
> >>      case ARM_CPU_IRQ:
> >> @@ -705,6 +735,14 @@ static void arm_cpu_kvm_set_irq(void *opaque, int
> >> irq, int level)
> >>          irq_id = KVM_ARM_IRQ_CPU_FIQ;
> >>          linestate_bit = CPU_INTERRUPT_FIQ;
> >>          break;
> >> +    case ARM_CPU_SERROR:
> >> +        if (!kvm_has_vcpu_events()) {
> >> +            return;
> >> +        }
> >> +
> >> +        inject_irq = false;
> >> +        linestate_bit = CPU_INTERRUPT_SERROR;
> >> +        break;
> >>      default:
> >>          g_assert_not_reached();
> >>      }
> >> @@ -714,7 +752,14 @@ static void arm_cpu_kvm_set_irq(void *opaque, int
> >> irq, int level)
> >>      } else {
> >>          env->irq_line_state &= ~linestate_bit;
> >>      }
> >> -    kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level);
> >> +
> >> +    if (inject_irq) {  
> > 
> > You could just have (linestate_bit != CPU_INTERRUPT_SERROR) here, and
> > not have inject_irq at all.
> >   
> 
> Sure, will be improved in v5.
> 
> >> +        kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level);
> >> +    } else if (level) {  
> > 
> > Is there any case where you'd call this function with a SError and level == 0?
> > And even if it happens, you could exit early in the above switch
> > statement.
> >   
> 
> The combination of SError and level == 0 doesn't exist for now,
> meaning the SError always comes with level == 1. We can't exit
> early in the above switch statement because env->irq_line_state
> needs to be updated accordingly.

I'm not sure level==0 makes much sense. A common implementation of
SError is as an edge interrupt (it is consumed by being handled, and
you can't "retire" it). I'm not familiar enough with QEMU's interrupt
delivery mechanism, but I'd expect SError to differ significantly from
IRQ/FIQ in that respect.

	M.
-- 
Jazz is not dead. It just smells funny...


^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2020-02-19  8:04 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-18  2:04 [PATCH v4 0/3] hw/arm/virt: Simulate NMI Injection Gavin Shan
2020-02-18  2:04 ` [PATCH v4 1/3] target/arm: Support SError injection Gavin Shan
2020-02-18 16:28   ` Marc Zyngier
2020-02-18 23:09     ` Gavin Shan
2020-02-19  8:03       ` Marc Zyngier
2020-02-18  2:04 ` [PATCH v4 2/3] target/arm: Support VSError injection Gavin Shan
2020-02-18  2:04 ` [PATCH v4 3/3] hw/arm/virt: Simulate NMI injection Gavin Shan
