* [Xen-devel] [PATCH 0/3] xen: async_exception_* cleanup
@ 2020-02-17 11:17 Andrew Cooper
  2020-02-17 11:17 ` [Xen-devel] [PATCH 1/3] x86/nmi: Corrections and improvements to do_nmi_stats() Andrew Cooper
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Andrew Cooper @ 2020-02-17 11:17 UTC (permalink / raw)
  To: Xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Jan Beulich, Volodymyr Babchuk, Roger Pau Monné

This infrastructure is only compiled for x86, is very x86-specific (so of no
interest to other architectures), and is very broken.

Amongst other things, MCEs have a higher priority than NMIs, and there is no
such thing as masking an MCE.  In order to address these bugs (which will
completely change the infrastructure), start by moving it all out of common
code.

Andrew Cooper (3):
  x86/nmi: Corrections and improvements to do_nmi_stats()
  xen: Move async_exception_* infrastructure into x86
  xen/x86: Rename and simplify async_event_* infrastructure

 xen/arch/x86/cpu/mcheck/vmce.c     |  2 +-
 xen/arch/x86/cpu/vpmu.c            |  2 +-
 xen/arch/x86/domain.c              | 11 +++++++++++
 xen/arch/x86/domctl.c              |  2 +-
 xen/arch/x86/hvm/irq.c             |  8 ++++----
 xen/arch/x86/hvm/vioapic.c         |  2 +-
 xen/arch/x86/hvm/vlapic.c          |  2 +-
 xen/arch/x86/nmi.c                 | 26 +++++++++++++-------------
 xen/arch/x86/oprofile/nmi_int.c    |  2 +-
 xen/arch/x86/pv/callback.c         |  2 +-
 xen/arch/x86/pv/iret.c             | 14 +++++++-------
 xen/arch/x86/pv/traps.c            |  2 +-
 xen/arch/x86/x86_64/asm-offsets.c  | 10 +++++-----
 xen/arch/x86/x86_64/compat/entry.S | 12 ++++++------
 xen/arch/x86/x86_64/entry.S        | 12 ++++++------
 xen/common/domain.c                | 15 ---------------
 xen/include/asm-x86/domain.h       | 27 +++++++++++++++++----------
 xen/include/xen/sched.h            | 11 -----------
 18 files changed, 77 insertions(+), 85 deletions(-)

-- 
2.11.0



* [Xen-devel] [PATCH 1/3] x86/nmi: Corrections and improvements to do_nmi_stats()
  2020-02-17 11:17 [Xen-devel] [PATCH 0/3] xen: async_exception_* cleanup Andrew Cooper
@ 2020-02-17 11:17 ` Andrew Cooper
  2020-02-18 16:07   ` Jan Beulich
  2020-02-17 11:17 ` [Xen-devel] [PATCH 2/3] xen: Move async_exception_* infrastructure into x86 Andrew Cooper
  2020-02-17 11:17 ` [Xen-devel] [PATCH 3/3] xen/x86: Rename and simplify async_event_* infrastructure Andrew Cooper
  2 siblings, 1 reply; 9+ messages in thread
From: Andrew Cooper @ 2020-02-17 11:17 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The hardware domain doesn't necessarily have the domid 0.  Render the vcpu
via %pv instead, adjusting the strings to avoid printing trailing whitespace.

Rename i to cpu, and use separate booleans for pending/masked.  Drop the
unnecessary domain local variable.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/nmi.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/nmi.c b/xen/arch/x86/nmi.c
index a5c6bdd0ce..638677a5fe 100644
--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -587,25 +587,25 @@ static void do_nmi_trigger(unsigned char key)
 
 static void do_nmi_stats(unsigned char key)
 {
-    int i;
-    struct domain *d;
-    struct vcpu *v;
+    const struct vcpu *v;
+    unsigned int cpu;
+    bool pend, mask;
 
     printk("CPU\tNMI\n");
-    for_each_online_cpu ( i )
-        printk("%3d\t%3d\n", i, nmi_count(i));
+    for_each_online_cpu ( cpu )
+        printk("%3d\t%3d\n", cpu, nmi_count(cpu));
 
-    if ( ((d = hardware_domain) == NULL) || (d->vcpu == NULL) ||
-         ((v = d->vcpu[0]) == NULL) )
+    if ( !hardware_domain || !hardware_domain->vcpu ||
+         !(v = hardware_domain->vcpu[0]) )
         return;
 
-    i = v->async_exception_mask & (1 << VCPU_TRAP_NMI);
-    if ( v->nmi_pending || i )
-        printk("dom0 vpu0: NMI %s%s\n",
-               v->nmi_pending ? "pending " : "",
-               i ? "masked " : "");
+    pend = v->nmi_pending;
+    mask = v->async_exception_mask & (1 << VCPU_TRAP_NMI);
+    if ( pend || mask )
+        printk("%pv: NMI%s%s\n",
+               v, pend ? " pending" : "", mask ? " masked" : "");
     else
-        printk("dom0 vcpu0: NMI neither pending nor masked\n");
+        printk("%pv: NMI neither pending nor masked\n", v);
 }
 
 static __init int register_nmi_trigger(void)
-- 
2.11.0



* [Xen-devel] [PATCH 2/3] xen: Move async_exception_* infrastructure into x86
  2020-02-17 11:17 [Xen-devel] [PATCH 0/3] xen: async_exception_* cleanup Andrew Cooper
  2020-02-17 11:17 ` [Xen-devel] [PATCH 1/3] x86/nmi: Corrections and improvements to do_nmi_stats() Andrew Cooper
@ 2020-02-17 11:17 ` Andrew Cooper
  2020-02-18 16:11   ` Jan Beulich
  2020-02-17 11:17 ` [Xen-devel] [PATCH 3/3] xen/x86: Rename and simplify async_event_* infrastructure Andrew Cooper
  2 siblings, 1 reply; 9+ messages in thread
From: Andrew Cooper @ 2020-02-17 11:17 UTC (permalink / raw)
  To: Xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Jan Beulich, Volodymyr Babchuk, Roger Pau Monné

The async_exception_{state,mask} infrastructure is implemented in common code,
but is limited to x86 because of the VCPU_TRAP_LAST ifdef-ary.

The internals are very x86 specific (and even then, in need of correction),
and won't be of interest to other architectures.  Move it all into x86
specific code.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
---
 xen/arch/x86/cpu/mcheck/vmce.c    |  2 +-
 xen/arch/x86/cpu/vpmu.c           |  2 +-
 xen/arch/x86/domain.c             | 12 ++++++++++++
 xen/arch/x86/domctl.c             |  2 +-
 xen/arch/x86/hvm/irq.c            |  8 ++++----
 xen/arch/x86/hvm/vioapic.c        |  2 +-
 xen/arch/x86/hvm/vlapic.c         |  2 +-
 xen/arch/x86/nmi.c                |  4 ++--
 xen/arch/x86/oprofile/nmi_int.c   |  2 +-
 xen/arch/x86/pv/callback.c        |  2 +-
 xen/arch/x86/pv/iret.c            | 13 +++++++------
 xen/arch/x86/pv/traps.c           |  2 +-
 xen/arch/x86/x86_64/asm-offsets.c | 10 +++++-----
 xen/common/domain.c               | 15 ---------------
 xen/include/asm-x86/domain.h      |  8 ++++++++
 xen/include/xen/sched.h           | 11 -----------
 16 files changed, 46 insertions(+), 51 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 4f5de07e01..816ef61ad4 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -412,7 +412,7 @@ int inject_vmce(struct domain *d, int vcpu)
 
         if ( (is_hvm_domain(d) ||
               pv_trap_callback_registered(v, TRAP_machine_check)) &&
-             !test_and_set_bool(v->mce_pending) )
+             !test_and_set_bool(v->arch.mce_pending) )
         {
             mce_printk(MCE_VERBOSE, "MCE: inject vMCE to %pv\n", v);
             vcpu_kick(v);
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 3c778450ac..e50d478d23 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -329,7 +329,7 @@ void vpmu_do_interrupt(struct cpu_user_regs *regs)
         vlapic_set_irq(vlapic, vlapic_lvtpc & APIC_VECTOR_MASK, 0);
         break;
     case APIC_MODE_NMI:
-        sampling->nmi_pending = 1;
+        sampling->arch.nmi_pending = true;
         break;
     }
 #endif
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 66150abf4c..fe63c23676 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1246,6 +1246,10 @@ int arch_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 int arch_vcpu_reset(struct vcpu *v)
 {
+    v->arch.async_exception_mask = 0;
+    memset(v->arch.async_exception_state, 0,
+           sizeof(v->arch.async_exception_state));
+
     if ( is_pv_vcpu(v) )
     {
         pv_destroy_gdt(v);
@@ -1264,6 +1268,14 @@ arch_do_vcpu_op(
 
     switch ( cmd )
     {
+    case VCPUOP_send_nmi:
+        if ( !guest_handle_is_null(arg) )
+            return -EINVAL;
+
+        if ( !test_and_set_bool(v->arch.nmi_pending) )
+            vcpu_kick(v);
+        break;
+
     case VCPUOP_register_vcpu_time_memory_area:
     {
         struct vcpu_register_time_memory_area area;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ce76d6d776..ed86762fa6 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -614,7 +614,7 @@ long arch_do_domctl(
         {
         case XEN_DOMCTL_SENDTRIGGER_NMI:
             ret = 0;
-            if ( !test_and_set_bool(v->nmi_pending) )
+            if ( !test_and_set_bool(v->arch.nmi_pending) )
                 vcpu_kick(v);
             break;
 
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c684422b24..dd202aab5a 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -526,10 +526,10 @@ struct hvm_intack hvm_vcpu_has_pending_irq(struct vcpu *v)
      */
     vlapic_sync_pir_to_irr(v);
 
-    if ( unlikely(v->nmi_pending) )
+    if ( unlikely(v->arch.nmi_pending) )
         return hvm_intack_nmi;
 
-    if ( unlikely(v->mce_pending) )
+    if ( unlikely(v->arch.mce_pending) )
         return hvm_intack_mce;
 
     if ( (plat->irq->callback_via_type == HVMIRQ_callback_vector)
@@ -554,11 +554,11 @@ struct hvm_intack hvm_vcpu_ack_pending_irq(
     switch ( intack.source )
     {
     case hvm_intsrc_nmi:
-        if ( !test_and_clear_bool(v->nmi_pending) )
+        if ( !test_and_clear_bool(v->arch.nmi_pending) )
             intack = hvm_intack_none;
         break;
     case hvm_intsrc_mce:
-        if ( !test_and_clear_bool(v->mce_pending) )
+        if ( !test_and_clear_bool(v->arch.mce_pending) )
             intack = hvm_intack_none;
         break;
     case hvm_intsrc_pic:
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 9aeef32a14..b87facb0e0 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -469,7 +469,7 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
         for_each_vcpu ( d, v )
             if ( vlapic_match_dest(vcpu_vlapic(v), NULL,
                                    0, dest, dest_mode) &&
-                 !test_and_set_bool(v->nmi_pending) )
+                 !test_and_set_bool(v->arch.nmi_pending) )
                 vcpu_kick(v);
         break;
     }
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index acb9ddf46f..26726a4312 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -355,7 +355,7 @@ static void vlapic_accept_irq(struct vcpu *v, uint32_t icr_low)
         break;
 
     case APIC_DM_NMI:
-        if ( !test_and_set_bool(v->nmi_pending) )
+        if ( !test_and_set_bool(v->arch.nmi_pending) )
         {
             bool_t wake = 0;
             domain_lock(v->domain);
diff --git a/xen/arch/x86/nmi.c b/xen/arch/x86/nmi.c
index 638677a5fe..0390d9b0b4 100644
--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -599,8 +599,8 @@ static void do_nmi_stats(unsigned char key)
          !(v = hardware_domain->vcpu[0]) )
         return;
 
-    pend = v->nmi_pending;
-    mask = v->async_exception_mask & (1 << VCPU_TRAP_NMI);
+    pend = v->arch.nmi_pending;
+    mask = v->arch.async_exception_mask & (1 << VCPU_TRAP_NMI);
     if ( pend || mask )
         printk("%pv: NMI%s%s\n",
                v, pend ? " pending" : "", mask ? " masked" : "");
diff --git a/xen/arch/x86/oprofile/nmi_int.c b/xen/arch/x86/oprofile/nmi_int.c
index 8f97f7522c..2969db47fc 100644
--- a/xen/arch/x86/oprofile/nmi_int.c
+++ b/xen/arch/x86/oprofile/nmi_int.c
@@ -93,7 +93,7 @@ static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
 		send_guest_vcpu_virq(current, VIRQ_XENOPROF);
 
 	if ( ovf == 2 )
-                current->nmi_pending = 1;
+		current->arch.nmi_pending = true;
 	return 1;
 }
 
diff --git a/xen/arch/x86/pv/callback.c b/xen/arch/x86/pv/callback.c
index 1178efddb6..106c16ed01 100644
--- a/xen/arch/x86/pv/callback.c
+++ b/xen/arch/x86/pv/callback.c
@@ -52,7 +52,7 @@ static int register_guest_nmi_callback(unsigned long address)
      * now.
      */
     if ( curr->vcpu_id == 0 && arch_get_nmi_reason(d) != 0 )
-        curr->nmi_pending = 1;
+        curr->arch.nmi_pending = true;
 
     return 0;
 }
diff --git a/xen/arch/x86/pv/iret.c b/xen/arch/x86/pv/iret.c
index 16b449ff64..9e34b616f9 100644
--- a/xen/arch/x86/pv/iret.c
+++ b/xen/arch/x86/pv/iret.c
@@ -27,15 +27,15 @@ static void async_exception_cleanup(struct vcpu *curr)
 {
     unsigned int trap;
 
-    if ( !curr->async_exception_mask )
+    if ( !curr->arch.async_exception_mask )
         return;
 
-    if ( !(curr->async_exception_mask & (curr->async_exception_mask - 1)) )
-        trap = __scanbit(curr->async_exception_mask, VCPU_TRAP_NONE);
+    if ( !(curr->arch.async_exception_mask & (curr->arch.async_exception_mask - 1)) )
+        trap = __scanbit(curr->arch.async_exception_mask, VCPU_TRAP_NONE);
     else
         for ( trap = VCPU_TRAP_NONE + 1; trap <= VCPU_TRAP_LAST; ++trap )
-            if ( (curr->async_exception_mask ^
-                  curr->async_exception_state(trap).old_mask) == (1u << trap) )
+            if ( (curr->arch.async_exception_mask ^
+                  curr->arch.async_exception_state(trap).old_mask) == (1u << trap) )
                 break;
     if ( unlikely(trap > VCPU_TRAP_LAST) )
     {
@@ -44,7 +44,8 @@ static void async_exception_cleanup(struct vcpu *curr)
     }
 
     /* Restore previous asynchronous exception mask. */
-    curr->async_exception_mask = curr->async_exception_state(trap).old_mask;
+    curr->arch.async_exception_mask =
+        curr->arch.async_exception_state(trap).old_mask;
 }
 
 unsigned long do_iret(void)
diff --git a/xen/arch/x86/pv/traps.c b/xen/arch/x86/pv/traps.c
index 950cf25b4a..d97ebf7890 100644
--- a/xen/arch/x86/pv/traps.c
+++ b/xen/arch/x86/pv/traps.c
@@ -176,7 +176,7 @@ int pv_raise_nmi(struct vcpu *v)
 
     if ( cmpxchgptr(v_ptr, NULL, v) )
         return -EBUSY;
-    if ( !test_and_set_bool(v->nmi_pending) )
+    if ( !test_and_set_bool(v->arch.nmi_pending) )
     {
         /* Not safe to wake up a vcpu here */
         raise_softirq(NMI_SOFTIRQ);
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index 07d2155bf5..b8e8510439 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -72,11 +72,11 @@ void __dummy__(void)
     OFFSET(VCPU_guest_context_flags, struct vcpu, arch.pv.vgc_flags);
     OFFSET(VCPU_cr3, struct vcpu, arch.cr3);
     OFFSET(VCPU_arch_msrs, struct vcpu, arch.msrs);
-    OFFSET(VCPU_nmi_pending, struct vcpu, nmi_pending);
-    OFFSET(VCPU_mce_pending, struct vcpu, mce_pending);
-    OFFSET(VCPU_nmi_old_mask, struct vcpu, nmi_state.old_mask);
-    OFFSET(VCPU_mce_old_mask, struct vcpu, mce_state.old_mask);
-    OFFSET(VCPU_async_exception_mask, struct vcpu, async_exception_mask);
+    OFFSET(VCPU_nmi_pending, struct vcpu, arch.nmi_pending);
+    OFFSET(VCPU_mce_pending, struct vcpu, arch.mce_pending);
+    OFFSET(VCPU_nmi_old_mask, struct vcpu, arch.nmi_state.old_mask);
+    OFFSET(VCPU_mce_old_mask, struct vcpu, arch.mce_state.old_mask);
+    OFFSET(VCPU_async_exception_mask, struct vcpu, arch.async_exception_mask);
     DEFINE(VCPU_TRAP_NMI, VCPU_TRAP_NMI);
     DEFINE(VCPU_TRAP_MCE, VCPU_TRAP_MCE);
     DEFINE(_VGCF_syscall_disables_events,  _VGCF_syscall_disables_events);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 0ae04d5bb9..6ad458fa6b 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1199,10 +1199,6 @@ int vcpu_reset(struct vcpu *v)
     v->fpu_initialised = 0;
     v->fpu_dirtied     = 0;
     v->is_initialised  = 0;
-#ifdef VCPU_TRAP_LAST
-    v->async_exception_mask = 0;
-    memset(v->async_exception_state, 0, sizeof(v->async_exception_state));
-#endif
     if ( v->affinity_broken & VCPU_AFFINITY_OVERRIDE )
         vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE);
     if ( v->affinity_broken & VCPU_AFFINITY_WAIT )
@@ -1511,17 +1507,6 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
-#ifdef VCPU_TRAP_NMI
-    case VCPUOP_send_nmi:
-        if ( !guest_handle_is_null(arg) )
-            return -EINVAL;
-
-        if ( !test_and_set_bool(v->nmi_pending) )
-            vcpu_kick(v);
-
-        break;
-#endif
-
     default:
         rc = arch_do_vcpu_op(cmd, v, arg);
         break;
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 1843c76d1a..105adf96eb 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -19,6 +19,7 @@
 #define is_hvm_pv_evtchn_vcpu(v) (is_hvm_pv_evtchn_domain(v->domain))
 #define is_domain_direct_mapped(d) ((void)(d), 0)
 
+#define VCPU_TRAP_NONE         0
 #define VCPU_TRAP_NMI          1
 #define VCPU_TRAP_MCE          2
 #define VCPU_TRAP_LAST         VCPU_TRAP_MCE
@@ -556,6 +557,13 @@ struct arch_vcpu
 
     struct vpmu_struct vpmu;
 
+    struct {
+        bool    pending;
+        uint8_t old_mask;
+    } async_exception_state[VCPU_TRAP_LAST];
+#define async_exception_state(t) async_exception_state[(t)-1]
+    uint8_t async_exception_mask;
+
     /* Virtual Machine Extensions */
     union {
         struct pv_vcpu pv;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 21b5f4cebd..3a4f43098c 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -191,17 +191,6 @@ struct vcpu
     bool             is_urgent;
     /* VCPU must context_switch without scheduling unit. */
     bool             force_context_switch;
-
-#ifdef VCPU_TRAP_LAST
-#define VCPU_TRAP_NONE    0
-    struct {
-        bool             pending;
-        uint8_t          old_mask;
-    }                async_exception_state[VCPU_TRAP_LAST];
-#define async_exception_state(t) async_exception_state[(t)-1]
-    uint8_t          async_exception_mask;
-#endif
-
     /* Require shutdown to be deferred for some asynchronous operation? */
     bool             defer_shutdown;
     /* VCPU is paused following shutdown request (d->is_shutting_down)? */
-- 
2.11.0



* [Xen-devel] [PATCH 3/3] xen/x86: Rename and simplify async_event_* infrastructure
  2020-02-17 11:17 [Xen-devel] [PATCH 0/3] xen: async_exception_* cleanup Andrew Cooper
  2020-02-17 11:17 ` [Xen-devel] [PATCH 1/3] x86/nmi: Corrections and improvements to do_nmi_stats() Andrew Cooper
  2020-02-17 11:17 ` [Xen-devel] [PATCH 2/3] xen: Move async_exception_* infrastructure into x86 Andrew Cooper
@ 2020-02-17 11:17 ` Andrew Cooper
  2020-02-18 16:31   ` Jan Beulich
  2 siblings, 1 reply; 9+ messages in thread
From: Andrew Cooper @ 2020-02-17 11:17 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The name async_exception isn't appropriate.  NMI isn't an exception at all,
and while MCE is classified as an exception (i.e. can occur at any point), the
mechanics of injecting it behave like other external interrupts.  Rename to
async_event_* which is a little shorter.

Drop VCPU_TRAP_NONE and renumber VCPU_TRAP_* to be 0-based, rather than
1-based, and remove async_exception_state() which hides the fixup internally.
This shifts the bits used in async_event_mask along by one, but doesn't alter
the overall logic.
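
For illustration only (not part of the change itself): with the old 1-based
numbering an NMI was tracked via bit 1 of the mask, with the new 0-based
numbering via bit 0, but every producer and consumer uses the same
VCPU_TRAP_* constant, so nothing beyond the renumbering needs adjusting:

    /* Old: VCPU_TRAP_NMI == 1  =>  mask & (1 << 1) */
    /* New: VCPU_TRAP_NMI == 0  =>  mask & (1 << 0) */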

Drop the {nmi,mce}_{state,pending} defines which obfuscate the data layout.
Instead, use an anonymous union to overlay names on the async_event[] array,
to retain the easy-to-follow v->arch.{nmi,mce}_pending logic.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/domain.c              |  5 ++---
 xen/arch/x86/nmi.c                 |  2 +-
 xen/arch/x86/pv/iret.c             | 15 +++++++--------
 xen/arch/x86/x86_64/asm-offsets.c  |  6 +++---
 xen/arch/x86/x86_64/compat/entry.S | 12 ++++++------
 xen/arch/x86/x86_64/entry.S        | 12 ++++++------
 xen/include/asm-x86/domain.h       | 33 ++++++++++++++++-----------------
 7 files changed, 41 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index fe63c23676..7ee6853522 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1246,9 +1246,8 @@ int arch_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 int arch_vcpu_reset(struct vcpu *v)
 {
-    v->arch.async_exception_mask = 0;
-    memset(v->arch.async_exception_state, 0,
-           sizeof(v->arch.async_exception_state));
+    v->arch.async_event_mask = 0;
+    memset(v->arch.async_event, 0, sizeof(v->arch.async_event));
 
     if ( is_pv_vcpu(v) )
     {
diff --git a/xen/arch/x86/nmi.c b/xen/arch/x86/nmi.c
index 0390d9b0b4..44507cd86b 100644
--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -600,7 +600,7 @@ static void do_nmi_stats(unsigned char key)
         return;
 
     pend = v->arch.nmi_pending;
-    mask = v->arch.async_exception_mask & (1 << VCPU_TRAP_NMI);
+    mask = v->arch.async_event_mask & (1 << VCPU_TRAP_NMI);
     if ( pend || mask )
         printk("%pv: NMI%s%s\n",
                v, pend ? " pending" : "", mask ? " masked" : "");
diff --git a/xen/arch/x86/pv/iret.c b/xen/arch/x86/pv/iret.c
index 9e34b616f9..27bb39f162 100644
--- a/xen/arch/x86/pv/iret.c
+++ b/xen/arch/x86/pv/iret.c
@@ -27,15 +27,15 @@ static void async_exception_cleanup(struct vcpu *curr)
 {
     unsigned int trap;
 
-    if ( !curr->arch.async_exception_mask )
+    if ( !curr->arch.async_event_mask )
         return;
 
-    if ( !(curr->arch.async_exception_mask & (curr->arch.async_exception_mask - 1)) )
-        trap = __scanbit(curr->arch.async_exception_mask, VCPU_TRAP_NONE);
+    if ( !(curr->arch.async_event_mask & (curr->arch.async_event_mask - 1)) )
+        trap = __scanbit(curr->arch.async_event_mask, 0);
     else
-        for ( trap = VCPU_TRAP_NONE + 1; trap <= VCPU_TRAP_LAST; ++trap )
-            if ( (curr->arch.async_exception_mask ^
-                  curr->arch.async_exception_state(trap).old_mask) == (1u << trap) )
+        for ( trap = 0; trap <= VCPU_TRAP_LAST; ++trap )
+            if ( (curr->arch.async_event_mask ^
+                  curr->arch.async_event[trap].old_mask) == (1u << trap) )
                 break;
     if ( unlikely(trap > VCPU_TRAP_LAST) )
     {
@@ -44,8 +44,7 @@ static void async_exception_cleanup(struct vcpu *curr)
     }
 
     /* Restore previous asynchronous exception mask. */
-    curr->arch.async_exception_mask =
-        curr->arch.async_exception_state(trap).old_mask;
+    curr->arch.async_event_mask = curr->arch.async_event[trap].old_mask;
 }
 
 unsigned long do_iret(void)
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index b8e8510439..59b62649e2 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -74,9 +74,9 @@ void __dummy__(void)
     OFFSET(VCPU_arch_msrs, struct vcpu, arch.msrs);
     OFFSET(VCPU_nmi_pending, struct vcpu, arch.nmi_pending);
     OFFSET(VCPU_mce_pending, struct vcpu, arch.mce_pending);
-    OFFSET(VCPU_nmi_old_mask, struct vcpu, arch.nmi_state.old_mask);
-    OFFSET(VCPU_mce_old_mask, struct vcpu, arch.mce_state.old_mask);
-    OFFSET(VCPU_async_exception_mask, struct vcpu, arch.async_exception_mask);
+    OFFSET(VCPU_nmi_old_mask, struct vcpu, arch.nmi_old_mask);
+    OFFSET(VCPU_mce_old_mask, struct vcpu, arch.mce_old_mask);
+    OFFSET(VCPU_async_event_mask, struct vcpu, arch.async_event_mask);
     DEFINE(VCPU_TRAP_NMI, VCPU_TRAP_NMI);
     DEFINE(VCPU_TRAP_MCE, VCPU_TRAP_MCE);
     DEFINE(_VGCF_syscall_disables_events,  _VGCF_syscall_disables_events);
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 3cd375bd48..17b1153f79 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -84,33 +84,33 @@ compat_process_softirqs:
 	ALIGN
 /* %rbx: struct vcpu */
 compat_process_mce:
-        testb $1 << VCPU_TRAP_MCE,VCPU_async_exception_mask(%rbx)
+        testb $1 << VCPU_TRAP_MCE, VCPU_async_event_mask(%rbx)
         jnz   .Lcompat_test_guest_nmi
         sti
         movb  $0, VCPU_mce_pending(%rbx)
         call  set_guest_machinecheck_trapbounce
         test  %al, %al
         jz    compat_test_all_events
-        movzbl VCPU_async_exception_mask(%rbx),%edx # save mask for the
+        movzbl VCPU_async_event_mask(%rbx), %edx    # save mask for the
         movb %dl,VCPU_mce_old_mask(%rbx)            # iret hypercall
         orl  $1 << VCPU_TRAP_MCE,%edx
-        movb %dl,VCPU_async_exception_mask(%rbx)
+        movb %dl, VCPU_async_event_mask(%rbx)
         jmp   compat_process_trap
 
 	ALIGN
 /* %rbx: struct vcpu */
 compat_process_nmi:
-        testb $1 << VCPU_TRAP_NMI,VCPU_async_exception_mask(%rbx)
+        testb $1 << VCPU_TRAP_NMI, VCPU_async_event_mask(%rbx)
         jnz   compat_test_guest_events
         sti
         movb  $0, VCPU_nmi_pending(%rbx)
         call  set_guest_nmi_trapbounce
         test  %al, %al
         jz    compat_test_all_events
-        movzbl VCPU_async_exception_mask(%rbx),%edx # save mask for the
+        movzbl VCPU_async_event_mask(%rbx), %edx    # save mask for the
         movb %dl,VCPU_nmi_old_mask(%rbx)            # iret hypercall
         orl  $1 << VCPU_TRAP_NMI,%edx
-        movb %dl,VCPU_async_exception_mask(%rbx)
+        movb %dl, VCPU_async_event_mask(%rbx)
         /* FALLTHROUGH */
 compat_process_trap:
         leaq  VCPU_trap_bounce(%rbx),%rdx
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 997c481ecb..7832ff6fda 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -96,33 +96,33 @@ process_softirqs:
         ALIGN
 /* %rbx: struct vcpu */
 process_mce:
-        testb $1 << VCPU_TRAP_MCE, VCPU_async_exception_mask(%rbx)
+        testb $1 << VCPU_TRAP_MCE, VCPU_async_event_mask(%rbx)
         jnz  .Ltest_guest_nmi
         sti
         movb $0, VCPU_mce_pending(%rbx)
         call set_guest_machinecheck_trapbounce
         test %al, %al
         jz   test_all_events
-        movzbl VCPU_async_exception_mask(%rbx), %edx # save mask for the
+        movzbl VCPU_async_event_mask(%rbx), %edx     # save mask for the
         movb %dl, VCPU_mce_old_mask(%rbx)            # iret hypercall
         orl  $1 << VCPU_TRAP_MCE, %edx
-        movb %dl, VCPU_async_exception_mask(%rbx)
+        movb %dl, VCPU_async_event_mask(%rbx)
         jmp  process_trap
 
         ALIGN
 /* %rbx: struct vcpu */
 process_nmi:
-        testb $1 << VCPU_TRAP_NMI, VCPU_async_exception_mask(%rbx)
+        testb $1 << VCPU_TRAP_NMI, VCPU_async_event_mask(%rbx)
         jnz  test_guest_events
         sti
         movb $0, VCPU_nmi_pending(%rbx)
         call set_guest_nmi_trapbounce
         test %al, %al
         jz   test_all_events
-        movzbl VCPU_async_exception_mask(%rbx), %edx # save mask for the
+        movzbl VCPU_async_event_mask(%rbx), %edx     # save mask for the
         movb %dl, VCPU_nmi_old_mask(%rbx)            # iret hypercall
         orl  $1 << VCPU_TRAP_NMI, %edx
-        movb %dl, VCPU_async_exception_mask(%rbx)
+        movb %dl, VCPU_async_event_mask(%rbx)
         /* FALLTHROUGH */
 process_trap:
         leaq VCPU_trap_bounce(%rbx), %rdx
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 105adf96eb..dd056360b4 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -19,17 +19,6 @@
 #define is_hvm_pv_evtchn_vcpu(v) (is_hvm_pv_evtchn_domain(v->domain))
 #define is_domain_direct_mapped(d) ((void)(d), 0)
 
-#define VCPU_TRAP_NONE         0
-#define VCPU_TRAP_NMI          1
-#define VCPU_TRAP_MCE          2
-#define VCPU_TRAP_LAST         VCPU_TRAP_MCE
-
-#define nmi_state              async_exception_state(VCPU_TRAP_NMI)
-#define mce_state              async_exception_state(VCPU_TRAP_MCE)
-
-#define nmi_pending            nmi_state.pending
-#define mce_pending            mce_state.pending
-
 struct trap_bounce {
     uint32_t      error_code;
     uint8_t       flags; /* TBF_ */
@@ -557,12 +546,22 @@ struct arch_vcpu
 
     struct vpmu_struct vpmu;
 
-    struct {
-        bool    pending;
-        uint8_t old_mask;
-    } async_exception_state[VCPU_TRAP_LAST];
-#define async_exception_state(t) async_exception_state[(t)-1]
-    uint8_t async_exception_mask;
+    union {
+#define VCPU_TRAP_NMI          0
+#define VCPU_TRAP_MCE          1
+#define VCPU_TRAP_LAST         VCPU_TRAP_MCE
+        struct {
+            bool    pending;
+            uint8_t old_mask;
+        } async_event[VCPU_TRAP_LAST + 1];
+        struct {
+            bool    nmi_pending;
+            uint8_t nmi_old_mask;
+            bool    mce_pending;
+            uint8_t mce_old_mask;
+        };
+    };
+    uint8_t async_event_mask;
 
     /* Virtual Machine Extensions */
     union {
-- 
2.11.0



* Re: [Xen-devel] [PATCH 1/3] x86/nmi: Corrections and improvements to do_nmi_stats()
  2020-02-17 11:17 ` [Xen-devel] [PATCH 1/3] x86/nmi: Corrections and improvements to do_nmi_stats() Andrew Cooper
@ 2020-02-18 16:07   ` Jan Beulich
  0 siblings, 0 replies; 9+ messages in thread
From: Jan Beulich @ 2020-02-18 16:07 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monné

On 17.02.2020 12:17, Andrew Cooper wrote:
> --- a/xen/arch/x86/nmi.c
> +++ b/xen/arch/x86/nmi.c
> @@ -587,25 +587,25 @@ static void do_nmi_trigger(unsigned char key)
>  
>  static void do_nmi_stats(unsigned char key)
>  {
> -    int i;
> -    struct domain *d;
> -    struct vcpu *v;
> +    const struct vcpu *v;
> +    unsigned int cpu;
> +    bool pend, mask;
>  
>      printk("CPU\tNMI\n");
> -    for_each_online_cpu ( i )
> -        printk("%3d\t%3d\n", i, nmi_count(i));
> +    for_each_online_cpu ( cpu )
> +        printk("%3d\t%3d\n", cpu, nmi_count(cpu));

%3u twice then please. With this
Reviewed-by: Jan Beulich <jbeulich@suse.com>
but I have one more remark:

> -    if ( ((d = hardware_domain) == NULL) || (d->vcpu == NULL) ||
> -         ((v = d->vcpu[0]) == NULL) )
> +    if ( !hardware_domain || !hardware_domain->vcpu ||
> +         !(v = hardware_domain->vcpu[0]) )

Perhaps, just for readability and consistency, use domain_vcpu()
here?
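
As a rough sketch of what that might look like (assuming domain_vcpu()
copes with a not-yet-allocated vcpu array by returning NULL), i.e. not
necessarily what should go in:

    /* domain_vcpu() does the vcpu array / range checking internally,
     * leaving only the hardware_domain existence check here. */
    if ( !hardware_domain || !(v = domain_vcpu(hardware_domain, 0)) )
        return;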

Jan


* Re: [Xen-devel] [PATCH 2/3] xen: Move async_exception_* infrastructure into x86
  2020-02-17 11:17 ` [Xen-devel] [PATCH 2/3] xen: Move async_exception_* infrastructure into x86 Andrew Cooper
@ 2020-02-18 16:11   ` Jan Beulich
  2020-02-18 16:23     ` Andrew Cooper
  0 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2020-02-18 16:11 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Xen-devel,
	Volodymyr Babchuk, Roger Pau Monné

On 17.02.2020 12:17, Andrew Cooper wrote:
> The async_exception_{state,mask} infrastructure is implemented in common code,
> but is limited to x86 because of the VCPU_TRAP_LAST ifdef-ary.
> 
> The internals are very x86 specific (and even then, in need of correction),
> and won't be of interest to other architectures.

Just to explain - at the time it got put there, the assumption was
that ia64 might later have wanted to also use this.

>  Move it all into x86
> specific code.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


* Re: [Xen-devel] [PATCH 2/3] xen: Move async_exception_* infrastructure into x86
  2020-02-18 16:11   ` Jan Beulich
@ 2020-02-18 16:23     ` Andrew Cooper
  0 siblings, 0 replies; 9+ messages in thread
From: Andrew Cooper @ 2020-02-18 16:23 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Xen-devel,
	Volodymyr Babchuk, Roger Pau Monné

On 18/02/2020 16:11, Jan Beulich wrote:
> On 17.02.2020 12:17, Andrew Cooper wrote:
>> The async_exception_{state,mask} infrastructure is implemented in common code,
>> but is limited to x86 because of the VCPU_TRAP_LAST ifdef-ary.
>>
>> The internals are very x86 specific (and even then, in need of correction),
>> and won't be of interest to other architectures.
> Just to explain - at the time it got put there the assumption was
> that ia64 might later have wanted to also use this.

:)

I honestly hope that such an assumption would have turned out to be
false (even if ia64 had stayed a supported architecture), because the
alternative is that ia64 inherited all of x86's bugs and corner cases
with NMI and MCE handling.

>
>>  Move it all into x86
>> specific code.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks,

~Andrew


* Re: [Xen-devel] [PATCH 3/3] xen/x86: Rename and simplify async_event_* infrastructure
  2020-02-17 11:17 ` [Xen-devel] [PATCH 3/3] xen/x86: Rename and simplify async_event_* infrastructure Andrew Cooper
@ 2020-02-18 16:31   ` Jan Beulich
  2020-02-20 18:24     ` Andrew Cooper
  0 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2020-02-18 16:31 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monné

On 17.02.2020 12:17, Andrew Cooper wrote:
> --- a/xen/arch/x86/pv/iret.c
> +++ b/xen/arch/x86/pv/iret.c
> @@ -27,15 +27,15 @@ static void async_exception_cleanup(struct vcpu *curr)
>  {
>      unsigned int trap;
>  
> -    if ( !curr->arch.async_exception_mask )
> +    if ( !curr->arch.async_event_mask )
>          return;
>  
> -    if ( !(curr->arch.async_exception_mask & (curr->arch.async_exception_mask - 1)) )
> -        trap = __scanbit(curr->arch.async_exception_mask, VCPU_TRAP_NONE);
> +    if ( !(curr->arch.async_event_mask & (curr->arch.async_event_mask - 1)) )
> +        trap = __scanbit(curr->arch.async_event_mask, 0);

The transformation just by itself is clearly not "no functional
change"; it is together with the prior if(), but it took me a
little to convince myself. I don't recall why VCPU_TRAP_NONE was
used here originally (possibly just because of it being zero),
but I think that, by now at the latest, it would be better to use
VCPU_TRAP_LAST + 1 instead, as 0 now has an actual meaning.
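
I.e., just as a sketch of what I mean:

    trap = __scanbit(curr->arch.async_event_mask, VCPU_TRAP_LAST + 1);

That way a (theoretically impossible) zero mask would hit the
ASSERT_UNREACHABLE() check further down, instead of reading as
VCPU_TRAP_NMI.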

> @@ -557,12 +546,22 @@ struct arch_vcpu
>  
>      struct vpmu_struct vpmu;
>  
> -    struct {
> -        bool    pending;
> -        uint8_t old_mask;
> -    } async_exception_state[VCPU_TRAP_LAST];
> -#define async_exception_state(t) async_exception_state[(t)-1]
> -    uint8_t async_exception_mask;
> +    union {
> +#define VCPU_TRAP_NMI          0
> +#define VCPU_TRAP_MCE          1
> +#define VCPU_TRAP_LAST         VCPU_TRAP_MCE
> +        struct {
> +            bool    pending;
> +            uint8_t old_mask;
> +        } async_event[VCPU_TRAP_LAST + 1];
> +        struct {
> +            bool    nmi_pending;
> +            uint8_t nmi_old_mask;
> +            bool    mce_pending;
> +            uint8_t mce_old_mask;
> +        };
> +    };

How about

    union {
#define VCPU_TRAP_NMI          0
#define VCPU_TRAP_MCE          1
#define VCPU_TRAP_LAST         VCPU_TRAP_MCE
        struct async_event_state {
            bool    pending;
            uint8_t old_mask;
        } async_event[VCPU_TRAP_LAST + 1];
        struct {
            struct async_event_state nmi;
            struct async_event_state mce;
        };
    };

(structure tag subject to improvement)?

Jan


* Re: [Xen-devel] [PATCH 3/3] xen/x86: Rename and simplify async_event_* infrastructure
  2020-02-18 16:31   ` Jan Beulich
@ 2020-02-20 18:24     ` Andrew Cooper
  0 siblings, 0 replies; 9+ messages in thread
From: Andrew Cooper @ 2020-02-20 18:24 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Xen-devel, Wei Liu, Roger Pau Monné

On 18/02/2020 16:31, Jan Beulich wrote:
> On 17.02.2020 12:17, Andrew Cooper wrote:
>> --- a/xen/arch/x86/pv/iret.c
>> +++ b/xen/arch/x86/pv/iret.c
>> @@ -27,15 +27,15 @@ static void async_exception_cleanup(struct vcpu *curr)
>>  {
>>      unsigned int trap;
>>  
>> -    if ( !curr->arch.async_exception_mask )
>> +    if ( !curr->arch.async_event_mask )
>>          return;
>>  
>> -    if ( !(curr->arch.async_exception_mask & (curr->arch.async_exception_mask - 1)) )
>> -        trap = __scanbit(curr->arch.async_exception_mask, VCPU_TRAP_NONE);
>> +    if ( !(curr->arch.async_event_mask & (curr->arch.async_event_mask - 1)) )
>> +        trap = __scanbit(curr->arch.async_event_mask, 0);
> The transformation just by itself is clearly not "no functional
> change"; it is together with the prior if(), but it took me a
> little to convince myself.

Well... It is no functional change, even if the fact isn't terribly obvious.
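
To spell it out (a condensed sketch, relying on __scanbit(val, fallback)
returning the fallback only for a zero input):

    uint8_t mask = curr->arch.async_event_mask;

    if ( !mask )                    /* the early return above */
        return;

    if ( !(mask & (mask - 1)) )     /* exactly one bit set */
        trap = __scanbit(mask, 0);  /* fallback unreachable here */

so whether that fallback is spelled VCPU_TRAP_NONE (== 0) or a literal 0
cannot make a difference.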

> I don't recall why VCPU_TRAP_NONE was
> used here originally (possibly just because of it being zero),
> but I think that, by now at the latest, it would be better to use
> VCPU_TRAP_LAST + 1 instead, as 0 now has an actual meaning.

Half poking out of context is:

    if ( unlikely(trap > VCPU_TRAP_LAST) )
    {
        ASSERT_UNREACHABLE();
        return;
    }

which would trigger on such an error path.  That said, the following:

    /* Restore previous asynchronous exception mask. */
    curr->arch.async_event_mask = curr->arch.async_event[trap].old_mask;

will just pick the NMI old_mask in the case of something going wrong.


I deliberately made no adjustment here.  I intend to replace it with
something which is correct, rather than spending time trying to figure
out how some clearly broken logic was intended to "work".

>
>> @@ -557,12 +546,22 @@ struct arch_vcpu
>>  
>>      struct vpmu_struct vpmu;
>>  
>> -    struct {
>> -        bool    pending;
>> -        uint8_t old_mask;
>> -    } async_exception_state[VCPU_TRAP_LAST];
>> -#define async_exception_state(t) async_exception_state[(t)-1]
>> -    uint8_t async_exception_mask;
>> +    union {
>> +#define VCPU_TRAP_NMI          0
>> +#define VCPU_TRAP_MCE          1
>> +#define VCPU_TRAP_LAST         VCPU_TRAP_MCE
>> +        struct {
>> +            bool    pending;
>> +            uint8_t old_mask;
>> +        } async_event[VCPU_TRAP_LAST + 1];
>> +        struct {
>> +            bool    nmi_pending;
>> +            uint8_t nmi_old_mask;
>> +            bool    mce_pending;
>> +            uint8_t mce_old_mask;
>> +        };
>> +    };
> How about
>
>     union {
> #define VCPU_TRAP_NMI          0
> #define VCPU_TRAP_MCE          1
> #define VCPU_TRAP_LAST         VCPU_TRAP_MCE
>         struct async_event_state {
>             bool    pending;
>             uint8_t old_mask;
>         } async_event[VCPU_TRAP_LAST + 1];
>         struct {
>             struct async_event_state nmi;
>             struct async_event_state mce;
>         };
>     };
>
> (structure tag subject to improvement)?

Can do.  I don't expect any of this to survive, but I also don't yet
have a clear idea what form the eventual solution is going to take.

~Andrew

