* [PATCH v8 00/10] gic and vgic fixes and improvements
@ 2014-07-10 18:12 Stefano Stabellini
  2014-07-10 18:13 ` [PATCH v8 01/10] xen/arm: observe itargets setting in vgic_enable_irqs and vgic_disable_irqs Stefano Stabellini
                   ` (9 more replies)
  0 siblings, 10 replies; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-10 18:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Ian Campbell, Stefano Stabellini

Hi all,
this patch series is a collection of three previously sent patch series
to fix bugs in the gic/vgic and implement irq migration on arm.
They have been grouped together at Ian's suggestion.

The last versions of the series, sent separately, were:
<alpine.DEB.2.02.1407031546180.11722@kaball.uk.xensource.com>
<alpine.DEB.2.02.1406131216260.13771@kaball.uk.xensource.com>
<alpine.DEB.2.02.1406241906040.19982@kaball.uk.xensource.com>




Stefano Stabellini (10):
      xen/arm: observe itargets setting in vgic_enable_irqs and vgic_disable_irqs
      xen/arm: move setting GIC_IRQ_GUEST_QUEUED earlier
      xen/arm: inflight irqs during migration
      xen/arm: support irq delivery to vcpu > 0
      xen/arm: physical irq follow virtual irq
      xen: introduce sched_move_irqs
      xen/arm: remove workaround to inject evtchn_irq on irq enable
      xen/arm: take the rank lock before accessing ipriority
      xen: introduce bit access macros for the IRQ line status flags
      xen/arm: make accesses to desc->status flags atomic

 xen/arch/arm/domain.c              |   11 ++-
 xen/arch/arm/domain_build.c        |    3 +
 xen/arch/arm/gic-v2.c              |   19 +++-
 xen/arch/arm/gic.c                 |   38 ++++++--
 xen/arch/arm/irq.c                 |   48 +++++-----
 xen/arch/arm/vgic-v2.c             |   51 ++++++++++-
 xen/arch/arm/vgic.c                |  172 ++++++++++++++++++++++++++++++------
 xen/common/schedule.c              |   12 ++-
 xen/drivers/passthrough/arm/smmu.c |    4 +-
 xen/include/asm-arm/gic.h          |    3 +
 xen/include/asm-arm/irq.h          |    2 +
 xen/include/asm-arm/vgic.h         |   13 ++-
 xen/include/asm-x86/irq.h          |    2 +
 xen/include/xen/irq.h              |   27 ++++--
 14 files changed, 328 insertions(+), 77 deletions(-)


* [PATCH v8 01/10] xen/arm: observe itargets setting in vgic_enable_irqs and vgic_disable_irqs
  2014-07-10 18:12 [PATCH v8 00/10] gic and vgic fixes and improvements Stefano Stabellini
@ 2014-07-10 18:13 ` Stefano Stabellini
  2014-07-11 13:01   ` Julien Grall
  2014-07-10 18:13 ` [PATCH v8 02/10] xen/arm: move setting GIC_IRQ_GUEST_QUEUED earlier Stefano Stabellini
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-10 18:13 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

vgic_enable_irqs should enable irq delivery to the vcpu specified by
GICD_ITARGETSR, rather than the vcpu that wrote to GICD_ISENABLER.
Similarly, vgic_disable_irqs should use the target vcpu specified by
itargets to disable irqs.

itargets can be set to a mask, but vgic_get_target_vcpu always returns
the lowest vcpu in the mask.

Correctly initialize itargets for SPIs.

Ignore bits in GICD_ITARGETSR corresponding to invalid vcpus.
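
The delivery rule above can be illustrated with a small standalone sketch
(the helper name is made up for illustration; the real code uses
vgic_byte_read and find_first_bit on the rank's itargets array): each IRQ
has an 8-bit target mask, and delivery goes to the lowest vcpu whose bit
is set, even for a 1-N mask.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper, not the Xen code: pick the target vcpu from an
 * 8-bit GICD_ITARGETSR byte. Zero writes are write-ignored by the
 * vgic, so the mask is assumed non-zero here. */
static unsigned int target_vcpu_from_mask(uint8_t itargets_byte)
{
    unsigned int vcpu = 0;

    assert(itargets_byte != 0);
    while ( !(itargets_byte & (1u << vcpu)) )
        vcpu++;                 /* first set bit == lowest vcpu */
    return vcpu;
}
```

So a mask of 0x06 (vcpus 1 and 2) delivers to vcpu 1 only, matching the
"return the lowest vcpu in the mask" simplification described above.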

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

---

Changes in v8:
- rebase on ab78724fc5628318b172b4344f7280621a151e1b.

Changes in v7:
- add ASSERT to _vgic_get_target_vcpu;
- add comment to vgic_distr_mmio_write.

Changes in v6:
- add assert and bug_on;
- add in-code comments;
- move additional check on itargets writing from the following patch to
this patch;
- sizeof(itargets) instead of 8*sizeof(itargets[0]);
- remove the unneeded cast of &target for find_first_bit.

Changes in v5:
- improve in-code comments;
- use vgic_rank_irq;
- use bit masks to write-ignore GICD_ITARGETSR;
- introduce a version of vgic_get_target_vcpu that doesn't take the
rank lock;
- keep the rank lock while enabling/disabling irqs;
- use find_first_bit instead of find_next_bit;
- check for zero writes to GICD_ITARGETSR.

Changes in v4:
- remove assert that could allow a guest to crash Xen;
- add itargets validation to vgic_distr_mmio_write;
- export vgic_get_target_vcpu.

Changes in v3:
- add assert in get_target_vcpu;
- rename get_target_vcpu to vgic_get_target_vcpu.

Changes in v2:
- refactor the common code in get_target_vcpu;
- unify PPI and SPI paths;
- correctly initialize itargets for SPI;
- use byte_read.
---
 xen/arch/arm/vgic-v2.c    |   24 +++++++++++++++----
 xen/arch/arm/vgic.c       |   56 ++++++++++++++++++++++++++++++++++++++-------
 xen/include/asm-arm/gic.h |    2 ++
 3 files changed, 70 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 2102e43..9629cbe 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -298,12 +298,12 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
         vgic_lock_rank(v, rank);
         tr = rank->ienable;
         rank->ienable |= *r;
-        vgic_unlock_rank(v, rank);
         /* The virtual irq is derived from register offset.
          * The register difference is word difference. So divide by 2(DABT_WORD)
          * to get Virtual irq number */
         vgic_enable_irqs(v, (*r) & (~tr),
                          (gicd_reg - GICD_ISENABLER) >> DABT_WORD);
+        vgic_unlock_rank(v, rank);
         return 1;
 
     case GICD_ICENABLER ... GICD_ICENABLERN:
@@ -313,12 +313,12 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
         vgic_lock_rank(v, rank);
         tr = rank->ienable;
         rank->ienable &= ~*r;
-        vgic_unlock_rank(v, rank);
         /* The virtual irq is derived from register offset.
          * The register difference is word difference. So divide by 2(DABT_WORD)
          * to get  Virtual irq number */
         vgic_disable_irqs(v, (*r) & tr,
                          (gicd_reg - GICD_ICENABLER) >> DABT_WORD);
+        vgic_unlock_rank(v, rank);
         return 1;
 
     case GICD_ISPENDR ... GICD_ISPENDRN:
@@ -359,13 +359,29 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
         if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
         rank = vgic_rank_offset(v, 8, gicd_reg - GICD_ITARGETSR, DABT_WORD);
         if ( rank == NULL) goto write_ignore;
+        /* 8-bit vcpu mask for this domain */
+        BUG_ON(v->domain->max_vcpus > 8);
+        tr = (1 << v->domain->max_vcpus) - 1;
+        if ( dabt.size == 2 )
+            tr = tr | (tr << 8) | (tr << 16) | (tr << 24);
+        else
+            tr = (tr << (8 * (gicd_reg & 0x3)));
+        tr &= *r;
+        /* ignore zero writes */
+        if ( !tr )
+            goto write_ignore;
+        /* For word reads ignore writes where any single byte is zero */
+        if ( dabt.size == 2 &&
+            !((tr & 0xff) && (tr & (0xff << 8)) &&
+             (tr & (0xff << 16)) && (tr & (0xff << 24))))
+            goto write_ignore;
         vgic_lock_rank(v, rank);
         if ( dabt.size == DABT_WORD )
             rank->itargets[REG_RANK_INDEX(8, gicd_reg - GICD_ITARGETSR,
-                                          DABT_WORD)] = *r;
+                                          DABT_WORD)] = tr;
         else
             vgic_byte_write(&rank->itargets[REG_RANK_INDEX(8,
-                       gicd_reg - GICD_ITARGETSR, DABT_WORD)], *r, gicd_reg);
+                       gicd_reg - GICD_ITARGETSR, DABT_WORD)], tr, gicd_reg);
         vgic_unlock_rank(v, rank);
         return 1;
 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 1948316..5b0b2da 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -96,7 +96,13 @@ int domain_vgic_init(struct domain *d)
         INIT_LIST_HEAD(&d->arch.vgic.pending_irqs[i].lr_queue);
     }
     for (i=0; i<DOMAIN_NR_RANKS(d); i++)
+    {
         spin_lock_init(&d->arch.vgic.shared_irqs[i].lock);
+        /* By default deliver to CPU0 */
+        memset(d->arch.vgic.shared_irqs[i].itargets,
+               0x1,
+               sizeof(d->arch.vgic.shared_irqs[i].itargets));
+    }
 
     d->arch.vgic.handler->domain_init(d);
 
@@ -146,6 +152,36 @@ int vcpu_vgic_free(struct vcpu *v)
     return 0;
 }
 
+/* the rank lock is already taken */
+static struct vcpu *_vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
+{
+    unsigned long target;
+    struct vcpu *v_target;
+    struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
+    ASSERT(spin_is_locked(&rank->lock));
+
+    target = vgic_byte_read(rank->itargets[(irq%32)/4], 0, irq % 4);
+    /* 1-N SPI should be delivered as pending to all the vcpus in the
+     * mask, but here we just return the first vcpu for simplicity and
+     * because it would be too slow to do otherwise. */
+    target = find_first_bit(&target, 8);
+    ASSERT(target >= 0 && target < v->domain->max_vcpus);
+    v_target = v->domain->vcpu[target];
+    return v_target;
+}
+
+/* takes the rank lock */
+struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
+{
+    struct vcpu *v_target;
+    struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
+
+    vgic_lock_rank(v, rank);
+    v_target = _vgic_get_target_vcpu(v, irq);
+    vgic_unlock_rank(v, rank);
+    return v_target;
+}
+
 void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 {
     const unsigned long mask = r;
@@ -153,12 +189,14 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
     unsigned int irq;
     unsigned long flags;
     int i = 0;
+    struct vcpu *v_target;
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
-        p = irq_to_pending(v, irq);
+        v_target = _vgic_get_target_vcpu(v, irq);
+        p = irq_to_pending(v_target, irq);
         clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        gic_remove_from_queues(v, irq);
+        gic_remove_from_queues(v_target, irq);
         if ( p->desc != NULL )
         {
             spin_lock_irqsave(&p->desc->lock, flags);
@@ -176,24 +214,26 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
     unsigned int irq;
     unsigned long flags;
     int i = 0;
+    struct vcpu *v_target;
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
-        p = irq_to_pending(v, irq);
+        v_target = _vgic_get_target_vcpu(v, irq);
+        p = irq_to_pending(v_target, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         /* We need to force the first injection of evtchn_irq because
          * evtchn_upcall_pending is already set by common code on vcpu
          * creation. */
-        if ( irq == v->domain->arch.evtchn_irq &&
+        if ( irq == v_target->domain->arch.evtchn_irq &&
              vcpu_info(current, evtchn_upcall_pending) &&
              list_empty(&p->inflight) )
-            vgic_vcpu_inject_irq(v, irq);
+            vgic_vcpu_inject_irq(v_target, irq);
         else {
             unsigned long flags;
-            spin_lock_irqsave(&v->arch.vgic.lock, flags);
+            spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
             if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-                gic_raise_guest_irq(v, irq, p->priority);
-            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+                gic_raise_guest_irq(v_target, irq, p->priority);
+            spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
         }
         if ( p->desc != NULL )
         {
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index a0c07bf..6410280 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -320,6 +320,8 @@ struct gic_hw_operations {
 
 void register_gic_ops(const struct gic_hw_operations *ops);
 
+struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq);
+
 #endif /* __ASSEMBLY__ */
 #endif
 
-- 
1.7.10.4


* [PATCH v8 02/10] xen/arm: move setting GIC_IRQ_GUEST_QUEUED earlier
  2014-07-10 18:12 [PATCH v8 00/10] gic and vgic fixes and improvements Stefano Stabellini
  2014-07-10 18:13 ` [PATCH v8 01/10] xen/arm: observe itargets setting in vgic_enable_irqs and vgic_disable_irqs Stefano Stabellini
@ 2014-07-10 18:13 ` Stefano Stabellini
  2014-07-10 18:13 ` [PATCH v8 03/10] xen/arm: inflight irqs during migration Stefano Stabellini
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-10 18:13 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

It makes the code cleaner, especially with the following patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/vgic.c |   16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 5b0b2da..538e7a1 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -336,13 +336,6 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
-    if ( !list_empty(&n->inflight) )
-    {
-        set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
-        gic_raise_inflight_irq(v, irq);
-        goto out;
-    }
-
     /* vcpu offline */
     if ( test_bit(_VPF_down, &v->pause_flags) )
     {
@@ -350,10 +343,17 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
         return;
     }
 
+    set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
+
+    if ( !list_empty(&n->inflight) )
+    {
+        gic_raise_inflight_irq(v, irq);
+        goto out;
+    }
+ 
     priority = vgic_byte_read(rank->ipriority[REG_RANK_INDEX(8, irq, DABT_WORD)], 0, irq & 0x3);
 
     n->irq = irq;
-    set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
     n->priority = priority;
 
     /* the irq is enabled */
-- 
1.7.10.4


* [PATCH v8 03/10] xen/arm: inflight irqs during migration
  2014-07-10 18:12 [PATCH v8 00/10] gic and vgic fixes and improvements Stefano Stabellini
  2014-07-10 18:13 ` [PATCH v8 01/10] xen/arm: observe itargets setting in vgic_enable_irqs and vgic_disable_irqs Stefano Stabellini
  2014-07-10 18:13 ` [PATCH v8 02/10] xen/arm: move setting GIC_IRQ_GUEST_QUEUED earlier Stefano Stabellini
@ 2014-07-10 18:13 ` Stefano Stabellini
  2014-07-17 12:44   ` Ian Campbell
  2014-07-10 18:13 ` [PATCH v8 04/10] xen/arm: support irq delivery to vcpu > 0 Stefano Stabellini
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-10 18:13 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

We need to take special care when migrating irqs that are already
inflight from one vcpu to another. See "The effect of changes to a
GICD_ITARGETSR", part of chapter 4.3.12 of the ARM Generic Interrupt
Controller Architecture Specification.

The main issue from the Xen point of view is that the lr_pending and
inflight lists are per-vcpu. The lock we take to protect them is also
per-vcpu.

In order to avoid issues, if the irq is still lr_pending, we can
immediately move it to the new vcpu for injection.

Otherwise if it is in a GICH_LR register, set a new flag
GIC_IRQ_GUEST_MIGRATING, so that we can recognize when we receive an irq
while the previous one is still inflight (given that we are only dealing
with hardware interrupts here, it just means that its LR hasn't been
cleared yet on the old vcpu).  If GIC_IRQ_GUEST_MIGRATING is set, we
only set GIC_IRQ_GUEST_QUEUED and interrupt the old vcpu. To know which
one is the old vcpu, we introduce a new field to pending_irq, called
vcpu_migrate_from.
When clearing the LR on the old vcpu, we take special care of injecting
the interrupt into the new vcpu. To do that we need to release the old
vcpu lock before taking the new vcpu lock.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v8:
- rebase on ab78724fc5628318b172b4344f7280621a151e1b;
- rename target to new_target to avoid conflicts.

Changes in v7:
- move the _VPF_down check before setting GIC_IRQ_GUEST_QUEUED;
- fix comments;
- rename trl to target;
- introduce vcpu_migrate_from;
- do not kick new vcpu on MIGRATING, kick the old vcpu instead;
- separate moving GIC_IRQ_GUEST_QUEUED earlier into a different patch.

Changes in v6:
- remove unnecessary casts to (const unsigned long *) to the arguments
of find_first_bit and find_next_bit;
- introduce a corresponding smp_rmb call;
- deal with migrating an irq that is inflight and still lr_pending;
- replace the dsb with smp_wmb and smp_rmb; use them to ensure the
ordering of accesses to GIC_IRQ_GUEST_QUEUED and GIC_IRQ_GUEST_MIGRATING.

Changes in v5:
- pass unsigned long to find_next_bit for alignment on aarch64;
- call vgic_get_target_vcpu instead of gic_add_to_lr_pending to add the
irq in the right inflight queue;
- add barrier and bit tests to make sure that vgic_migrate_irq and
gic_update_one_lr can run simultaneously on different cpus without
issues;
- rework the loop to identify the new vcpu when ITARGETSR is written;
- use find_first_bit instead of find_next_bit.
---
 xen/arch/arm/gic.c         |   28 ++++++++++++++++++++--
 xen/arch/arm/vgic-v2.c     |   43 +++++++++++++++++++++++++++-------
 xen/arch/arm/vgic.c        |   56 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/vgic.h |    8 +++++++
 4 files changed, 124 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 256d9cf..a0252f9 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -33,6 +33,7 @@
 #include <asm/device.h>
 #include <asm/io.h>
 #include <asm/gic.h>
+#include <asm/vgic.h>
 
 static void gic_restore_pending_irqs(struct vcpu *v);
 
@@ -375,10 +376,33 @@ static void gic_update_one_lr(struct vcpu *v, int i)
         clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
         p->lr = GIC_INVALID_LR;
         if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
-             test_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
+             test_bit(GIC_IRQ_GUEST_QUEUED, &p->status) &&
+             !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
             gic_raise_guest_irq(v, irq, p->priority);
-        else
+        else {
+            int m, q;
             list_del_init(&p->inflight);
+
+            m = test_and_clear_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
+            p->vcpu_migrate_from = NULL;
+            /* check MIGRATING before QUEUED */
+            smp_rmb();
+            q = test_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
+            if ( m && q )
+            {
+                struct vcpu *v_target;
+
+                /* It is safe to temporarily drop the lock because we
+                 * are finished dealing with this LR. We'll take the
+                 * lock before reading the next. */
+                spin_unlock(&v->arch.vgic.lock);
+                /* vgic_get_target_vcpu takes the rank lock, ensuring
+                 * consistency with other itarget changes. */
+                v_target = vgic_get_target_vcpu(v, irq);
+                vgic_vcpu_inject_irq(v_target, irq);
+                spin_lock(&v->arch.vgic.lock);
+            }
+        }
     }
 }
 
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 9629cbe..5116a99 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -356,34 +356,59 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
         goto write_ignore;
 
     case GICD_ITARGETSR + 8 ... GICD_ITARGETSRN:
+    {
+        /* unsigned long needed for find_next_bit */
+        unsigned long target;
+        int i;
         if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
         rank = vgic_rank_offset(v, 8, gicd_reg - GICD_ITARGETSR, DABT_WORD);
         if ( rank == NULL) goto write_ignore;
         /* 8-bit vcpu mask for this domain */
         BUG_ON(v->domain->max_vcpus > 8);
-        tr = (1 << v->domain->max_vcpus) - 1;
+        target = (1 << v->domain->max_vcpus) - 1;
         if ( dabt.size == 2 )
-            tr = tr | (tr << 8) | (tr << 16) | (tr << 24);
+            target = target | (target << 8) | (target << 16) | (target << 24);
         else
-            tr = (tr << (8 * (gicd_reg & 0x3)));
-        tr &= *r;
+            target = (target << (8 * (gicd_reg & 0x3)));
+        target &= *r;
         /* ignore zero writes */
-        if ( !tr )
+        if ( !target )
             goto write_ignore;
         /* For word reads ignore writes where any single byte is zero */
         if ( dabt.size == 2 &&
-            !((tr & 0xff) && (tr & (0xff << 8)) &&
-             (tr & (0xff << 16)) && (tr & (0xff << 24))))
+            !((target & 0xff) && (target & (0xff << 8)) &&
+             (target & (0xff << 16)) && (target & (0xff << 24))))
             goto write_ignore;
         vgic_lock_rank(v, rank);
+        i = 0;
+        while ( (i = find_next_bit(&target, 32, i)) < 32 )
+        {
+            unsigned int irq, new_target, old_target;
+            unsigned long old_target_mask;
+            struct vcpu *v_target, *v_old;
+
+            new_target = i % 8;
+            old_target_mask = vgic_byte_read(rank->itargets[REG_RANK_INDEX(8, gicd_reg - GICD_ITARGETSR, DABT_WORD)], 0, i/8);
+            old_target = find_first_bit(&old_target_mask, 8);
+
+            if ( new_target != old_target )
+            {
+                irq = gicd_reg + (i / 8);
+                v_target = v->domain->vcpu[new_target];
+                v_old = v->domain->vcpu[old_target];
+                vgic_migrate_irq(v_old, v_target, irq);
+            }
+            i += 8 - new_target;
+        }
         if ( dabt.size == DABT_WORD )
             rank->itargets[REG_RANK_INDEX(8, gicd_reg - GICD_ITARGETSR,
-                                          DABT_WORD)] = tr;
+                                          DABT_WORD)] = target;
         else
             vgic_byte_write(&rank->itargets[REG_RANK_INDEX(8,
-                       gicd_reg - GICD_ITARGETSR, DABT_WORD)], tr, gicd_reg);
+                       gicd_reg - GICD_ITARGETSR, DABT_WORD)], target, gicd_reg);
         vgic_unlock_rank(v, rank);
         return 1;
+    }
 
     case GICD_IPRIORITYR ... GICD_IPRIORITYRN:
         if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 538e7a1..fbe3293 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -182,6 +182,46 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
     return v_target;
 }
 
+void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
+{
+    unsigned long flags;
+    struct pending_irq *p = irq_to_pending(old, irq);
+
+    /* nothing to do for virtual interrupts */
+    if ( p->desc == NULL )
+        return;
+
+    /* migration already in progress, no need to do anything */
+    if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
+        return;
+
+    spin_lock_irqsave(&old->arch.vgic.lock, flags);
+
+    if ( list_empty(&p->inflight) )
+    {
+        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
+        return;
+    }
+    /* If the IRQ is still lr_pending, re-inject it to the new vcpu */
+    if ( !list_empty(&p->lr_queue) )
+    {
+        list_del_init(&p->lr_queue);
+        list_del_init(&p->inflight);
+        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
+        vgic_vcpu_inject_irq(new, irq);
+        return;
+    }
+    /* if the IRQ is in a GICH_LR register, set GIC_IRQ_GUEST_MIGRATING
+     * and wait for the EOI */
+    if ( !list_empty(&p->inflight) )
+    {
+        p->vcpu_migrate_from = old;
+        set_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
+    }
+
+    spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
+}
+
 void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 {
     const unsigned long mask = r;
@@ -333,6 +373,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
     struct pending_irq *iter, *n = irq_to_pending(v, irq);
     unsigned long flags;
     bool_t running;
+    struct vcpu *vcpu_migrate_from;
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
@@ -344,6 +385,21 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
     }
 
     set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
+    vcpu_migrate_from = n->vcpu_migrate_from;
+    /* update QUEUED before MIGRATING */
+    smp_wmb();
+    if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &n->status) )
+    {
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+
+        /* The old vcpu must have EOIed the SGI but not cleared the LR.
+         * Give it a kick. */
+        running = n->vcpu_migrate_from->is_running;
+        vcpu_unblock(n->vcpu_migrate_from);
+        if ( running )
+            smp_send_event_check_mask(cpumask_of(n->vcpu_migrate_from->processor));
+        return;
+    }
 
     if ( !list_empty(&n->inflight) )
     {
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 19eed7e..c66578b 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -55,17 +55,24 @@ struct pending_irq
      * GIC_IRQ_GUEST_ENABLED: the guest IRQ is enabled at the VGICD
      * level (GICD_ICENABLER/GICD_ISENABLER).
      *
+     * GIC_IRQ_GUEST_MIGRATING: the irq is being migrated to a different
+     * vcpu while it is still inflight and on an GICH_LR register on the
+     * old vcpu.
+     *
      */
 #define GIC_IRQ_GUEST_QUEUED   0
 #define GIC_IRQ_GUEST_ACTIVE   1
 #define GIC_IRQ_GUEST_VISIBLE  2
 #define GIC_IRQ_GUEST_ENABLED  3
+#define GIC_IRQ_GUEST_MIGRATING   4
     unsigned long status;
     struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
     int irq;
 #define GIC_INVALID_LR         ~(uint8_t)0
     uint8_t lr;
     uint8_t priority;
+    /* keeps track of the vcpu this irq is currently migrating from */
+    struct vcpu *vcpu_migrate_from;
     /* inflight is used to append instances of pending_irq to
      * vgic.inflight_irqs */
     struct list_head inflight;
@@ -164,6 +171,7 @@ extern int vcpu_vgic_free(struct vcpu *v);
 extern int vgic_to_sgi(struct vcpu *v, register_t sgir,
                        enum gic_sgi_mode irqmode, int virq,
                        unsigned long vcpu_mask);
+extern void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq);
 #endif /* __ASM_ARM_VGIC_H__ */
 
 /*
-- 
1.7.10.4


* [PATCH v8 04/10] xen/arm: support irq delivery to vcpu > 0
  2014-07-10 18:12 [PATCH v8 00/10] gic and vgic fixes and improvements Stefano Stabellini
                   ` (2 preceding siblings ...)
  2014-07-10 18:13 ` [PATCH v8 03/10] xen/arm: inflight irqs during migration Stefano Stabellini
@ 2014-07-10 18:13 ` Stefano Stabellini
  2014-07-10 18:13 ` [PATCH v8 05/10] xen/arm: physical irq follow virtual irq Stefano Stabellini
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-10 18:13 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

Use vgic_get_target_vcpu to retrieve the target vcpu from do_IRQ.
Remove in-code comments about missing implementation of SGI delivery to
vcpus other than 0.
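
For reference, the SPI restriction mentioned here comes from the GIC irq
numbering: SGIs occupy 0-15, PPIs 16-31, and SPIs start at 32. A minimal
sketch of the classification (an illustrative helper, not the Xen code,
which uses an ASSERT against gic_number_lines()):

```c
#include <assert.h>

/* SGIs: 0-15, PPIs: 16-31, SPIs: 32 up to the number of implemented
 * lines. Only SPIs are routed to guests, so the target vcpu can be
 * looked up via the shared rank state. */
static int is_spi(unsigned int irq, unsigned int nr_lines)
{
    return irq >= 32 && irq < nr_lines;
}
```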

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

---

Changes in v7:
- improve in-code comment;
- use gic_number_lines in assert.

Changes in v6:
- add in-code comments;
- assert that the guest irq is an SPI.

Changes in v4:
- the mask in gic_route_irq_to_guest is a physical cpu mask, treat it as
such;
- export vgic_get_target_vcpu in a previous patch.
---
 xen/arch/arm/gic.c         |    3 ++-
 xen/arch/arm/irq.c         |    5 +++--
 xen/arch/arm/vgic.c        |   11 +++++++++++
 xen/include/asm-arm/vgic.h |    1 +
 4 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index a0252f9..884661c 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -137,7 +137,8 @@ void gic_route_irq_to_guest(struct domain *d, struct irq_desc *desc,
 
     gic_set_irq_properties(desc, cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
 
-    /* TODO: do not assume delivery to vcpu0 */
+    /* Use vcpu0 to retrieve the pending_irq struct. Given that we only
+     * route SPIs to guests, it doesn't make any difference. */
     p = irq_to_pending(d->vcpu[0], desc->irq);
     p->desc = desc;
 }
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 3a8acbf..49ca467 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -198,8 +198,9 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
         desc->status |= IRQ_INPROGRESS;
         desc->arch.eoi_cpu = smp_processor_id();
 
-        /* XXX: inject irq into all guest vcpus */
-        vgic_vcpu_inject_irq(d->vcpu[0], irq);
+        /* the irq cannot be a PPI, we only support delivery of SPIs to
+         * guests */
+        vgic_vcpu_inject_spi(d, irq);
         goto out_no_end;
     }
 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index fbe3293..5445a7b 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -434,6 +434,17 @@ out:
         smp_send_event_check_mask(cpumask_of(v->processor));
 }
 
+void vgic_vcpu_inject_spi(struct domain *d, unsigned int irq)
+{
+    struct vcpu *v;
+
+    /* the IRQ needs to be an SPI */
+    ASSERT(irq >= 32 && irq <= gic_number_lines());
+
+    v = vgic_get_target_vcpu(d->vcpu[0], irq);
+    vgic_vcpu_inject_irq(v, irq);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index c66578b..e961780 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -159,6 +159,7 @@ extern int domain_vgic_init(struct domain *d);
 extern void domain_vgic_free(struct domain *d);
 extern int vcpu_vgic_init(struct vcpu *v);
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq);
+extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int irq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
-- 
1.7.10.4


* [PATCH v8 05/10] xen/arm: physical irq follow virtual irq
  2014-07-10 18:12 [PATCH v8 00/10] gic and vgic fixes and improvements Stefano Stabellini
                   ` (3 preceding siblings ...)
  2014-07-10 18:13 ` [PATCH v8 04/10] xen/arm: support irq delivery to vcpu > 0 Stefano Stabellini
@ 2014-07-10 18:13 ` Stefano Stabellini
  2014-07-11 13:07   ` Julien Grall
  2014-07-10 18:13 ` [PATCH v8 06/10] xen: introduce sched_move_irqs Stefano Stabellini
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-10 18:13 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

Migrate physical irqs to the same physical cpu that is running the vcpu
expected to receive the irqs. That is done when enabling irqs, when the
guest writes to GICD_ITARGETSR and when Xen migrates a vcpu to a
different pcpu.

In case of virq migration, if the virq is inflight and in a GICH_LR
register already, delay migrating the corresponding physical irq until
the virq is EOIed by the guest and the MIGRATING flag has been cleared.
This way we make sure that the pcpu running the old vcpu gets
interrupted with a new irq of the same kind, clearing the GICH_LR sooner.

Introduce a new arch specific function, arch_move_irqs, that is empty on
x86 and implements the vgic irq migration code on ARM.
arch_move_irqs is going to be called from sched.c.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>

---

Changes in v7:
- remove checks at the top of gic_irq_set_affinity, add assert instead;
- move irq_set_affinity to irq.c;
- delay setting the affinity of the physical irq when the virq is
MIGRATING until the virq is EOIed by the guest;
- do not set the affinity of MIGRATING irqs from arch_move_irqs.

Changes in v6:
- use vgic_get_target_vcpu instead of _vgic_get_target_vcpu in
arch_move_irqs.

Changes in v5:
- prettify vgic_move_irqs;
- rename vgic_move_irqs to arch_move_irqs;
- introduce helper function irq_set_affinity.
---
 xen/arch/arm/gic-v2.c     |   15 +++++++++++++--
 xen/arch/arm/gic.c        |    1 +
 xen/arch/arm/irq.c        |    6 ++++++
 xen/arch/arm/vgic.c       |   21 +++++++++++++++++++++
 xen/include/asm-arm/gic.h |    1 +
 xen/include/asm-arm/irq.h |    2 ++
 xen/include/asm-x86/irq.h |    2 ++
 7 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 843f5a1..ae95178 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -572,9 +572,20 @@ static void gicv2_guest_irq_end(struct irq_desc *desc)
     /* Deactivation happens in maintenance interrupt / via GICV */
 }
 
-static void gicv2_irq_set_affinity(struct irq_desc *desc, const cpumask_t *mask)
+static void gicv2_irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask)
 {
-    BUG();
+    unsigned int mask;
+
+    ASSERT(!cpumask_empty(cpu_mask));
+
+    spin_lock(&gicv2.lock);
+
+    mask = gicv2_cpu_mask(cpu_mask);
+
+    /* Set target CPU mask (RAZ/WI on uniprocessor) */
+    writeb_gicd(mask, GICD_ITARGETSR + desc->irq);
+
+    spin_unlock(&gicv2.lock);
 }
 
 /* XXX different for level vs edge */
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 884661c..d7e1882 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -400,6 +400,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
                 /* vgic_get_target_vcpu takes the rank lock, ensuring
                  * consistency with other itarget changes. */
                 v_target = vgic_get_target_vcpu(v, irq);
+                irq_set_affinity(p->desc, cpumask_of(v_target->processor));
                 vgic_vcpu_inject_irq(v_target, irq);
                 spin_lock(&v->arch.vgic.lock);
             }
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 49ca467..7150c7a 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -134,6 +134,12 @@ static inline struct domain *irq_get_domain(struct irq_desc *desc)
     return desc->action->dev_id;
 }
 
+void irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask)
+{
+    if ( desc != NULL )
+        desc->handler->set_affinity(desc, cpu_mask);
+}
+
 int request_irq(unsigned int irq, unsigned int irqflags,
                 void (*handler)(int, void *, struct cpu_user_regs *),
                 const char *devname, void *dev_id)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 5445a7b..704eaaf 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -199,6 +199,7 @@ void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
 
     if ( list_empty(&p->inflight) )
     {
+        irq_set_affinity(p->desc, cpumask_of(new->processor));
         spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
         return;
     }
@@ -207,6 +208,7 @@ void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
     {
         list_del_init(&p->lr_queue);
         list_del_init(&p->inflight);
+        irq_set_affinity(p->desc, cpumask_of(new->processor));
         spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
         vgic_vcpu_inject_irq(new, irq);
         return;
@@ -222,6 +224,24 @@ void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
     spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
 }
 
+void arch_move_irqs(struct vcpu *v)
+{
+    const cpumask_t *cpu_mask = cpumask_of(v->processor);
+    struct domain *d = v->domain;
+    struct pending_irq *p;
+    struct vcpu *v_target;
+    int i;
+
+    for ( i = 32; i < d->arch.vgic.nr_lines; i++ )
+    {
+        v_target = vgic_get_target_vcpu(v, i);
+        p = irq_to_pending(v_target, i);
+
+        if ( v_target == v && !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
+            irq_set_affinity(p->desc, cpu_mask);
+    }
+}
+
 void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 {
     const unsigned long mask = r;
@@ -277,6 +297,7 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         }
         if ( p->desc != NULL )
         {
+            irq_set_affinity(p->desc, cpumask_of(v_target->processor));
             spin_lock_irqsave(&p->desc->lock, flags);
             p->desc->handler->enable(p->desc);
             spin_unlock_irqrestore(&p->desc->lock, flags);
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 6410280..8d93ccd 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -321,6 +321,7 @@ struct gic_hw_operations {
 void register_gic_ops(const struct gic_hw_operations *ops);
 
 struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq);
+void arch_move_irqs(struct vcpu *v);
 
 #endif /* __ASSEMBLY__ */
 #endif
diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index e567f71..dc282f0 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -48,6 +48,8 @@ int irq_set_spi_type(unsigned int spi, unsigned int type);
 
 int platform_get_irq(const struct dt_device_node *device, int index);
 
+void irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask);
+
 #endif /* _ASM_HW_IRQ_H */
 /*
  * Local variables:
diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
index 9066d38..d3c55f3 100644
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -197,4 +197,6 @@ void cleanup_domain_irq_mapping(struct domain *);
 
 bool_t cpu_has_pending_apic_eoi(void);
 
+static inline void arch_move_irqs(struct vcpu *v) { }
+
 #endif /* _ASM_HW_IRQ_H */
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v8 06/10] xen: introduce sched_move_irqs
  2014-07-10 18:12 [PATCH v8 00/10] gic and vgic fixes and improvements Stefano Stabellini
                   ` (4 preceding siblings ...)
  2014-07-10 18:13 ` [PATCH v8 05/10] xen/arm: physical irq follow virtual irq Stefano Stabellini
@ 2014-07-10 18:13 ` Stefano Stabellini
  2014-07-10 18:13 ` [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable Stefano Stabellini
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-10 18:13 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Campbell, Stefano Stabellini, tim, julien.grall, keir.xen, jbeulich

Introduce sched_move_irqs: it calls arch_move_irqs and
evtchn_move_pirqs.
Replace calls to evtchn_move_pirqs with calls to sched_move_irqs.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
CC: jbeulich@suse.com
CC: tim@xen.org
CC: keir.xen@gmail.com
---
 xen/common/schedule.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index e9eb0bc..8c1561f 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -226,6 +226,12 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     return 0;
 }
 
+static void sched_move_irqs(struct vcpu *v)
+{
+    arch_move_irqs(v);
+    evtchn_move_pirqs(v);
+}
+
 int sched_move_domain(struct domain *d, struct cpupool *c)
 {
     struct vcpu *v;
@@ -301,7 +307,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
         v->sched_priv = vcpu_priv[v->vcpu_id];
         if ( !d->is_dying )
-            evtchn_move_pirqs(v);
+            sched_move_irqs(v);
 
         new_p = cpumask_cycle(new_p, c->cpu_valid);
 
@@ -528,7 +534,7 @@ static void vcpu_migrate(struct vcpu *v)
     spin_unlock_irqrestore(old_lock, flags);
 
     if ( old_cpu != new_cpu )
-        evtchn_move_pirqs(v);
+        sched_move_irqs(v);
 
     /* Wake on new CPU. */
     vcpu_wake(v);
@@ -1251,7 +1257,7 @@ static void schedule(void)
     stop_timer(&prev->periodic_timer);
 
     if ( next_slice.migrated )
-        evtchn_move_pirqs(next);
+        sched_move_irqs(next);
 
     vcpu_periodic_timer_work(next);
 
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable
  2014-07-10 18:12 [PATCH v8 00/10] gic and vgic fixes and improvements Stefano Stabellini
                   ` (5 preceding siblings ...)
  2014-07-10 18:13 ` [PATCH v8 06/10] xen: introduce sched_move_irqs Stefano Stabellini
@ 2014-07-10 18:13 ` Stefano Stabellini
  2014-07-11 13:10   ` Julien Grall
  2014-07-17 12:50   ` Ian Campbell
  2014-07-10 18:13 ` [PATCH v8 08/10] xen/arm: take the rank lock before accessing ipriority Stefano Stabellini
                   ` (2 subsequent siblings)
  9 siblings, 2 replies; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-10 18:13 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

evtchn_upcall_pending is already set by common code at vcpu creation,
therefore on ARM we also need to call vgic_vcpu_inject_irq for it.
Currently we do that from vgic_enable_irqs as a workaround.

Do this properly by calling vgic_vcpu_inject_irq in the appropriate
places at vcpu creation time, making sure to call it after the vcpu is
up (_VPF_down has been cleared).

Return an error if arch_set_info_guest is called without VGCF_online
set: at the moment no callers do that but if they did we would fail to
inject the first evtchn_irq interrupt.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- coding style fix;
- add comment;
- return an error if arch_set_info_guest is called without VGCF_online.
---
 xen/arch/arm/domain.c       |   11 +++++++++--
 xen/arch/arm/domain_build.c |    3 +++
 xen/arch/arm/vgic.c         |   18 ++++--------------
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 87902ef..4417a90 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -659,6 +659,9 @@ int arch_set_info_guest(
             return -EINVAL;
     }
 #endif
+    /* we do not support calls to this function without VGCF_online set */
+    if ( !(ctxt->flags & VGCF_online) )
+        return -EINVAL;
 
     vcpu_regs_user_to_hyp(v, regs);
 
@@ -670,9 +673,13 @@ int arch_set_info_guest(
     v->is_initialised = 1;
 
     if ( ctxt->flags & VGCF_online )
+    {
         clear_bit(_VPF_down, &v->pause_flags);
-    else
-        set_bit(_VPF_down, &v->pause_flags);
+        /* evtchn_upcall_pending is set by common code at vcpu creation,
+         * therefore on ARM we also need to call vgic_vcpu_inject_irq
+         * for it */
+        vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
+    }
 
     return 0;
 }
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 69188a4..ed43a4c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -18,6 +18,7 @@
 #include <asm/psci.h>
 
 #include <asm/gic.h>
+#include <asm/vgic.h>
 #include <xen/irq.h>
 #include "kernel.h"
 
@@ -1378,6 +1379,8 @@ int construct_dom0(struct domain *d)
     }
 #endif
 
+    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
+
     for ( i = 1, cpu = 0; i < d->max_vcpus; i++ )
     {
         cpu = cpumask_cycle(cpu, &cpu_online_map);
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 704eaaf..569a859 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -281,20 +281,10 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         v_target = _vgic_get_target_vcpu(v, irq);
         p = irq_to_pending(v_target, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        /* We need to force the first injection of evtchn_irq because
-         * evtchn_upcall_pending is already set by common code on vcpu
-         * creation. */
-        if ( irq == v_target->domain->arch.evtchn_irq &&
-             vcpu_info(current, evtchn_upcall_pending) &&
-             list_empty(&p->inflight) )
-            vgic_vcpu_inject_irq(v_target, irq);
-        else {
-            unsigned long flags;
-            spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
-            if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-                gic_raise_guest_irq(v_target, irq, p->priority);
-            spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
-        }
+        spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
+        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+            gic_raise_guest_irq(v_target, irq, p->priority);
+        spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
         if ( p->desc != NULL )
         {
             irq_set_affinity(p->desc, cpumask_of(v_target->processor));
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v8 08/10] xen/arm: take the rank lock before accessing ipriority
  2014-07-10 18:12 [PATCH v8 00/10] gic and vgic fixes and improvements Stefano Stabellini
                   ` (6 preceding siblings ...)
  2014-07-10 18:13 ` [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable Stefano Stabellini
@ 2014-07-10 18:13 ` Stefano Stabellini
  2014-07-17 12:51   ` Ian Campbell
  2014-07-10 18:13 ` [PATCH v8 09/10] xen: introduce bit access macros for the IRQ line status flags Stefano Stabellini
  2014-07-10 18:13 ` [PATCH v8 10/10] xen/arm: make accesses to desc->status flags atomic Stefano Stabellini
  9 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-10 18:13 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

Currently we read ipriority from vgic_vcpu_inject_irq without taking the
rank lock. Fix that by taking the rank lock and reading ipriority at the
beginning of the function.

As vgic_vcpu_inject_irq is called from irq.c upon receiving an
interrupt, we need to change the implementation of vgic_lock/unlock_rank
to spin_lock_irqsave to make it safe in irq context.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- rebased on ab78724fc5628318b172b4344f7280621a151e1b;
- remove warning on changing priority of active irqs.
---
 xen/arch/arm/vgic-v2.c     |    2 ++
 xen/arch/arm/vgic.c        |    6 +++++-
 xen/include/asm-arm/vgic.h |    4 ++--
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 5116a99..ae31dbf 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -39,6 +39,7 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
     register_t *r = select_user_reg(regs, dabt.reg);
     struct vgic_irq_rank *rank;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
+    unsigned long flags;
 
     switch ( gicd_reg )
     {
@@ -269,6 +270,7 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
     struct vgic_irq_rank *rank;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
     uint32_t tr;
+    unsigned long flags;
 
     switch ( gicd_reg )
     {
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 569a859..fc51e87 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -175,6 +175,7 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
 {
     struct vcpu *v_target;
     struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
+    unsigned long flags;
 
     vgic_lock_rank(v, rank);
     v_target = _vgic_get_target_vcpu(v, irq);
@@ -386,6 +387,10 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
     bool_t running;
     struct vcpu *vcpu_migrate_from;
 
+    vgic_lock_rank(v, rank);
+    priority = vgic_byte_read(rank->ipriority[REG_RANK_INDEX(8, irq, DABT_WORD)], 0, irq & 0x3);
+    vgic_unlock_rank(v, rank);
+
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
     /* vcpu offline */
@@ -418,7 +423,6 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
         goto out;
     }
  
-    priority = vgic_byte_read(rank->ipriority[REG_RANK_INDEX(8, irq, DABT_WORD)], 0, irq & 0x3);
 
     n->irq = irq;
     n->priority = priority;
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index e961780..5d6a8ad 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -106,8 +106,8 @@ struct vgic_ops {
 #define vgic_lock(v)   spin_lock_irq(&(v)->domain->arch.vgic.lock)
 #define vgic_unlock(v) spin_unlock_irq(&(v)->domain->arch.vgic.lock)
 
-#define vgic_lock_rank(v, r) spin_lock(&(r)->lock)
-#define vgic_unlock_rank(v, r) spin_unlock(&(r)->lock)
+#define vgic_lock_rank(v, r) spin_lock_irqsave(&(r)->lock, flags)
+#define vgic_unlock_rank(v, r) spin_unlock_irqrestore(&(r)->lock, flags)
 
 /*
  * Rank containing GICD_<FOO><n> for GICD_<FOO> with
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v8 09/10] xen: introduce bit access macros for the IRQ line status flags
  2014-07-10 18:12 [PATCH v8 00/10] gic and vgic fixes and improvements Stefano Stabellini
                   ` (7 preceding siblings ...)
  2014-07-10 18:13 ` [PATCH v8 08/10] xen/arm: take the rank lock before accessing ipriority Stefano Stabellini
@ 2014-07-10 18:13 ` Stefano Stabellini
  2014-07-11 13:15   ` Julien Grall
  2014-07-10 18:13 ` [PATCH v8 10/10] xen/arm: make accesses to desc->status flags atomic Stefano Stabellini
  9 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-10 18:13 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Jan Beulich, Stefano Stabellini

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/arm/irq.c                 |    8 ++++----
 xen/drivers/passthrough/arm/smmu.c |    4 ++--
 xen/include/xen/irq.h              |   27 ++++++++++++++++++---------
 3 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 7150c7a..695c69f 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -297,21 +297,21 @@ void release_irq(unsigned int irq, const void *dev_id)
 static int __setup_irq(struct irq_desc *desc, unsigned int irqflags,
                        struct irqaction *new)
 {
-    bool_t shared = !!(irqflags & IRQF_SHARED);
+    bool_t shared = !!(irqflags & IRQ_SHARED);
 
     ASSERT(new != NULL);
 
     /* Sanity checks:
      *  - if the IRQ is marked as shared
-     *  - dev_id is not NULL when IRQF_SHARED is set
+     *  - dev_id is not NULL when IRQ_SHARED is set
      */
-    if ( desc->action != NULL && (!(desc->status & IRQF_SHARED) || !shared) )
+    if ( desc->action != NULL && (!(desc->status & IRQ_SHARED) || !shared) )
         return -EINVAL;
     if ( shared && new->dev_id == NULL )
         return -EINVAL;
 
     if ( shared )
-        desc->status |= IRQF_SHARED;
+        desc->status |= IRQ_SHARED;
 
     new->next = desc->action;
     dsb(ish);
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index f4eb2a2..f9b87d0 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -1060,7 +1060,7 @@ arm_smmu_alloc_domain_context(struct domain *d,
         cfg->irptndx = cfg->cbndx;
 
     irq = smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
-    ret = request_irq(irq, IRQF_SHARED, arm_smmu_context_fault,
+    ret = request_irq(irq, IRQ_SHARED, arm_smmu_context_fault,
                       "arm-smmu-context-fault", cfg);
     if ( ret )
     {
@@ -1713,7 +1713,7 @@ static int __init smmu_init(struct dt_device_node *dev,
     for ( i = 0; i < smmu->num_global_irqs; ++i )
     {
         smmu_dbg(smmu, "\t- global IRQ %u\n", smmu->irqs[i]);
-        res = request_irq(smmu->irqs[i], IRQF_SHARED, arm_smmu_global_fault,
+        res = request_irq(smmu->irqs[i], IRQ_SHARED, arm_smmu_global_fault,
                           "arm-smmu global fault", smmu);
         if ( res )
         {
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 40c0f3f..af5b247 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -22,15 +22,24 @@ struct irqaction {
 /*
  * IRQ line status.
  */
-#define IRQ_INPROGRESS    (1u<<0) /* IRQ handler active - do not enter! */
-#define IRQ_DISABLED      (1u<<1) /* IRQ disabled - do not enter! */
-#define IRQ_PENDING       (1u<<2) /* IRQ pending - replay on enable */
-#define IRQ_REPLAY        (1u<<3) /* IRQ has been replayed but not acked yet */
-#define IRQ_GUEST         (1u<<4) /* IRQ is handled by guest OS(es) */
-#define IRQ_MOVE_PENDING  (1u<<5) /* IRQ is migrating to another CPUs */
-#define IRQ_PER_CPU       (1u<<6) /* IRQ is per CPU */
-#define IRQ_GUEST_EOI_PENDING (1u<<7) /* IRQ was disabled, pending a guest EOI */
-#define IRQF_SHARED       (1<<8)  /* IRQ is shared */
+#define _IRQ_INPROGRESS         0 /* IRQ handler active - do not enter! */
+#define _IRQ_DISABLED           1 /* IRQ disabled - do not enter! */
+#define _IRQ_PENDING            2 /* IRQ pending - replay on enable */
+#define _IRQ_REPLAY             3 /* IRQ has been replayed but not acked yet */
+#define _IRQ_GUEST              4 /* IRQ is handled by guest OS(es) */
+#define _IRQ_MOVE_PENDING       5 /* IRQ is migrating to another CPU */
+#define _IRQ_PER_CPU            6 /* IRQ is per CPU */
+#define _IRQ_GUEST_EOI_PENDING  7 /* IRQ was disabled, pending a guest EOI */
+#define _IRQ_SHARED             8 /* IRQ is shared */
+#define IRQ_INPROGRESS          (1u<<_IRQ_INPROGRESS)
+#define IRQ_DISABLED            (1u<<_IRQ_DISABLED)
+#define IRQ_PENDING             (1u<<_IRQ_PENDING)
+#define IRQ_REPLAY              (1u<<_IRQ_REPLAY)
+#define IRQ_GUEST               (1u<<_IRQ_GUEST)
+#define IRQ_MOVE_PENDING        (1u<<_IRQ_MOVE_PENDING)
+#define IRQ_PER_CPU             (1u<<_IRQ_PER_CPU)
+#define IRQ_GUEST_EOI_PENDING   (1u<<_IRQ_GUEST_EOI_PENDING)
+#define IRQ_SHARED              (1u<<_IRQ_SHARED)
 
 /* Special IRQ numbers. */
 #define AUTO_ASSIGN_IRQ         (-1)
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v8 10/10] xen/arm: make accesses to desc->status flags atomic
  2014-07-10 18:12 [PATCH v8 00/10] gic and vgic fixes and improvements Stefano Stabellini
                   ` (8 preceding siblings ...)
  2014-07-10 18:13 ` [PATCH v8 09/10] xen: introduce bit access macros for the IRQ line status flags Stefano Stabellini
@ 2014-07-10 18:13 ` Stefano Stabellini
  2014-07-17 12:52   ` Ian Campbell
  9 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-10 18:13 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

Using *_bit manipulation functions on desc->status is safe on arm64:
status is an unsigned int but is the first field of a struct that
contains pointers, therefore the alignment of the struct is at least 8
bytes.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v2:
- rebase on ab78724fc5628318b172b4344f7280621a151e1b.
---
 xen/arch/arm/gic-v2.c |    4 ++--
 xen/arch/arm/gic.c    |    6 +++---
 xen/arch/arm/irq.c    |   33 +++++++++++++++++----------------
 3 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index ae95178..30fd32c 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -519,7 +519,7 @@ static void gicv2_irq_enable(struct irq_desc *desc)
     ASSERT(spin_is_locked(&desc->lock));
 
     spin_lock_irqsave(&gicv2.lock, flags);
-    desc->status &= ~IRQ_DISABLED;
+    clear_bit(_IRQ_DISABLED, &desc->status);
     dsb(sy);
     /* Enable routing */
     writel_gicd((1u << (irq % 32)), GICD_ISENABLER + (irq / 32) * 4);
@@ -536,7 +536,7 @@ static void gicv2_irq_disable(struct irq_desc *desc)
     spin_lock_irqsave(&gicv2.lock, flags);
     /* Disable routing */
     writel_gicd(1u << (irq % 32), GICD_ICENABLER + (irq / 32) * 4);
-    desc->status |= IRQ_DISABLED;
+    set_bit(_IRQ_DISABLED, &desc->status);
     spin_unlock_irqrestore(&gicv2.lock, flags);
 }
 
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index d7e1882..d7a120f 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -115,7 +115,7 @@ void gic_route_irq_to_xen(struct irq_desc *desc, const cpumask_t *cpu_mask,
 {
     ASSERT(priority <= 0xff);     /* Only 8 bits of priority */
     ASSERT(desc->irq < gic_number_lines());/* Can't route interrupts that don't exist */
-    ASSERT(desc->status & IRQ_DISABLED);
+    ASSERT(test_bit(_IRQ_DISABLED, &desc->status));
     ASSERT(spin_is_locked(&desc->lock));
 
     desc->handler = gic_hw_ops->gic_host_irq_type;
@@ -133,7 +133,7 @@ void gic_route_irq_to_guest(struct domain *d, struct irq_desc *desc,
     ASSERT(spin_is_locked(&desc->lock));
 
     desc->handler = gic_hw_ops->gic_guest_irq_type;
-    desc->status |= IRQ_GUEST;
+    set_bit(_IRQ_GUEST, &desc->status);
 
     gic_set_irq_properties(desc, cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
 
@@ -369,7 +369,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
 
         if ( p->desc != NULL )
         {
-            p->desc->status &= ~IRQ_INPROGRESS;
+            clear_bit(_IRQ_INPROGRESS, &p->desc->status);
             if ( platform_has_quirk(PLATFORM_QUIRK_GUEST_PIRQ_NEED_EOI) )
                 gic_hw_ops->deactivate_irq(p->desc);
         }
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 695c69f..0c6f653 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -126,7 +126,7 @@ static inline struct domain *irq_get_domain(struct irq_desc *desc)
 {
     ASSERT(spin_is_locked(&desc->lock));
 
-    if ( !(desc->status & IRQ_GUEST) )
+    if ( !test_bit(_IRQ_GUEST, &desc->status) )
         return dom_xen;
 
     ASSERT(desc->action != NULL);
@@ -195,13 +195,13 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
         goto out;
     }
 
-    if ( desc->status & IRQ_GUEST )
+    if ( test_bit(_IRQ_GUEST, &desc->status) )
     {
         struct domain *d = irq_get_domain(desc);
 
         desc->handler->end(desc);
 
-        desc->status |= IRQ_INPROGRESS;
+        set_bit(_IRQ_INPROGRESS, &desc->status);
         desc->arch.eoi_cpu = smp_processor_id();
 
         /* the irq cannot be a PPI, we only support delivery of SPIs to
@@ -210,22 +210,23 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
         goto out_no_end;
     }
 
-    desc->status |= IRQ_PENDING;
+    set_bit(_IRQ_PENDING, &desc->status);
 
     /*
      * Since we set PENDING, if another processor is handling a different
      * instance of this same irq, the other processor will take care of it.
      */
-    if ( desc->status & (IRQ_DISABLED | IRQ_INPROGRESS) )
+    if ( test_bit(_IRQ_DISABLED, &desc->status) ||
+         test_bit(_IRQ_INPROGRESS, &desc->status) )
         goto out;
 
-    desc->status |= IRQ_INPROGRESS;
+    set_bit(_IRQ_INPROGRESS, &desc->status);
 
-    while ( desc->status & IRQ_PENDING )
+    while ( test_bit(_IRQ_PENDING, &desc->status) )
     {
         struct irqaction *action;
 
-        desc->status &= ~IRQ_PENDING;
+        clear_bit(_IRQ_PENDING, &desc->status);
         action = desc->action;
 
         spin_unlock_irq(&desc->lock);
@@ -239,7 +240,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
         spin_lock_irq(&desc->lock);
     }
 
-    desc->status &= ~IRQ_INPROGRESS;
+    clear_bit(_IRQ_INPROGRESS, &desc->status);
 
 out:
     desc->handler->end(desc);
@@ -282,13 +283,13 @@ void release_irq(unsigned int irq, const void *dev_id)
     if ( !desc->action )
     {
         desc->handler->shutdown(desc);
-        desc->status &= ~IRQ_GUEST;
+        clear_bit(_IRQ_GUEST, &desc->status);
     }
 
     spin_unlock_irqrestore(&desc->lock,flags);
 
     /* Wait to make sure it's not being used on another CPU */
-    do { smp_mb(); } while ( desc->status & IRQ_INPROGRESS );
+    do { smp_mb(); } while ( test_bit(_IRQ_INPROGRESS, &desc->status) );
 
     if ( action->free_on_release )
         xfree(action);
@@ -305,13 +306,13 @@ static int __setup_irq(struct irq_desc *desc, unsigned int irqflags,
      *  - if the IRQ is marked as shared
      *  - dev_id is not NULL when IRQ_SHARED is set
      */
-    if ( desc->action != NULL && (!(desc->status & IRQ_SHARED) || !shared) )
+    if ( desc->action != NULL && (!test_bit(_IRQ_SHARED, &desc->status) || !shared) )
         return -EINVAL;
     if ( shared && new->dev_id == NULL )
         return -EINVAL;
 
     if ( shared )
-        desc->status |= IRQ_SHARED;
+        set_bit(_IRQ_SHARED, &desc->status);
 
     new->next = desc->action;
     dsb(ish);
@@ -332,7 +333,7 @@ int setup_irq(unsigned int irq, unsigned int irqflags, struct irqaction *new)
 
     spin_lock_irqsave(&desc->lock, flags);
 
-    if ( desc->status & IRQ_GUEST )
+    if ( test_bit(_IRQ_GUEST, &desc->status) )
     {
         struct domain *d = irq_get_domain(desc);
 
@@ -396,10 +397,10 @@ int route_irq_to_guest(struct domain *d, unsigned int irq,
     {
         struct domain *ad = irq_get_domain(desc);
 
-        if ( (desc->status & IRQ_GUEST) && d == ad )
+        if ( test_bit(_IRQ_GUEST, &desc->status) && d == ad )
             goto out;
 
-        if ( desc->status & IRQ_GUEST )
+        if ( test_bit(_IRQ_GUEST, &desc->status) )
             printk(XENLOG_ERR "ERROR: IRQ %u is already used by domain %u\n",
                    irq, ad->domain_id);
         else
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH v8 01/10] xen/arm: observe itargets setting in vgic_enable_irqs and vgic_disable_irqs
  2014-07-10 18:13 ` [PATCH v8 01/10] xen/arm: observe itargets setting in vgic_enable_irqs and vgic_disable_irqs Stefano Stabellini
@ 2014-07-11 13:01   ` Julien Grall
  2014-07-23 15:31     ` Stefano Stabellini
  0 siblings, 1 reply; 37+ messages in thread
From: Julien Grall @ 2014-07-11 13:01 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: julien.grall, Ian.Campbell

Hi Stefano,

On 07/10/2014 07:13 PM, Stefano Stabellini wrote:
> +/* the rank lock is already taken */
> +static struct vcpu *_vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
> +{
> +    unsigned long target;
> +    struct vcpu *v_target;
> +    struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
> +    ASSERT(spin_is_locked(&rank->lock));
> +
> +    target = vgic_byte_read(rank->itargets[(irq%32)/4], 0, irq % 4);
> +    /* 1-N SPI should be delivered as pending to all the vcpus in the
> +     * mask, but here we just return the first vcpu for simplicity and
> +     * because it would be too slow to do otherwise. */
> +    target = find_first_bit(&target, 8);
> +    ASSERT(target >= 0 && target < v->domain->max_vcpus);
> +    v_target = v->domain->vcpu[target];
> +    return v_target;
> +}
> +
> +/* takes the rank lock */
> +struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
> +{
> +    struct vcpu *v_target;
> +    struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
> +
> +    vgic_lock_rank(v, rank);
> +    v_target = _vgic_get_target_vcpu(v, irq);
> +    vgic_unlock_rank(v, rank);
> +    return v_target;
> +}
> +

itargets is GICv2-specific; GICv3 uses irouter instead. I don't
understand why Vijay changes the rank structure in a later patch...

Those 2 functions should be moved to vgic-v2.c. You may also need to add
a callback in the vgic structure.

> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> index a0c07bf..6410280 100644
> --- a/xen/include/asm-arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -320,6 +320,8 @@ struct gic_hw_operations {
>  
>  void register_gic_ops(const struct gic_hw_operations *ops);
>  
> +struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq);
> +

This should be moved to vgic.h.

Regards,

-- 
Julien Grall

* Re: [PATCH v8 05/10] xen/arm: physical irq follow virtual irq
  2014-07-10 18:13 ` [PATCH v8 05/10] xen/arm: physical irq follow virtual irq Stefano Stabellini
@ 2014-07-11 13:07   ` Julien Grall
  2014-07-23 15:00     ` Stefano Stabellini
  0 siblings, 1 reply; 37+ messages in thread
From: Julien Grall @ 2014-07-11 13:07 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: julien.grall, Ian.Campbell

On 07/10/2014 07:13 PM, Stefano Stabellini wrote:
>  void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>  {
>      const unsigned long mask = r;
> @@ -277,6 +297,7 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>          }
>          if ( p->desc != NULL )
>          {
> +            irq_set_affinity(p->desc, cpumask_of(v_target->processor));
>              spin_lock_irqsave(&p->desc->lock, flags);
>              p->desc->handler->enable(p->desc);
>              spin_unlock_irqrestore(&p->desc->lock, flags);
> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> index 6410280..8d93ccd 100644
> --- a/xen/include/asm-arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -321,6 +321,7 @@ struct gic_hw_operations {
>  void register_gic_ops(const struct gic_hw_operations *ops);
>  
>  struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq);
> +void arch_move_irqs(struct vcpu *v);

I would stay consistent with x86 by moving this declaration to irq.h.

-- 
Julien Grall

* Re: [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable
  2014-07-10 18:13 ` [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable Stefano Stabellini
@ 2014-07-11 13:10   ` Julien Grall
  2014-07-17 12:50   ` Ian Campbell
  1 sibling, 0 replies; 37+ messages in thread
From: Julien Grall @ 2014-07-11 13:10 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: julien.grall, Ian.Campbell

Hi Stefano,

On 07/10/2014 07:13 PM, Stefano Stabellini wrote:
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 69188a4..ed43a4c 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -18,6 +18,7 @@
>  #include <asm/psci.h>
>  
>  #include <asm/gic.h>
> +#include <asm/vgic.h>
>  #include <xen/irq.h>
>  #include "kernel.h"
>  
> @@ -1378,6 +1379,8 @@ int construct_dom0(struct domain *d)
>      }
>  #endif
>  
> +    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
> 

IMHO, a comment is missing here to explain why we inject an IRQ for the
event channel.

Regards,

-- 
Julien Grall

* Re: [PATCH v8 09/10] xen: introduce bit access macros for the IRQ line status flags
  2014-07-10 18:13 ` [PATCH v8 09/10] xen: introduce bit access macros for the IRQ line status flags Stefano Stabellini
@ 2014-07-11 13:15   ` Julien Grall
  2014-07-23 14:52     ` Stefano Stabellini
  0 siblings, 1 reply; 37+ messages in thread
From: Julien Grall @ 2014-07-11 13:15 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: julien.grall, Ian.Campbell, Jan Beulich

On 07/10/2014 07:13 PM, Stefano Stabellini wrote:
> diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
> index 40c0f3f..af5b247 100644
> --- a/xen/include/xen/irq.h
> +++ b/xen/include/xen/irq.h
> @@ -22,15 +22,24 @@ struct irqaction {
>  /*
>   * IRQ line status.
>   */
> -#define IRQ_INPROGRESS    (1u<<0) /* IRQ handler active - do not enter! */
> -#define IRQ_DISABLED      (1u<<1) /* IRQ disabled - do not enter! */
> -#define IRQ_PENDING       (1u<<2) /* IRQ pending - replay on enable */
> -#define IRQ_REPLAY        (1u<<3) /* IRQ has been replayed but not acked yet */
> -#define IRQ_GUEST         (1u<<4) /* IRQ is handled by guest OS(es) */
> -#define IRQ_MOVE_PENDING  (1u<<5) /* IRQ is migrating to another CPUs */
> -#define IRQ_PER_CPU       (1u<<6) /* IRQ is per CPU */
> -#define IRQ_GUEST_EOI_PENDING (1u<<7) /* IRQ was disabled, pending a guest EOI */
> -#define IRQF_SHARED       (1<<8)  /* IRQ is shared */
> +#define _IRQ_INPROGRESS         0 /* IRQ handler active - do not enter! */
> +#define _IRQ_DISABLED           1 /* IRQ disabled - do not enter! */
> +#define _IRQ_PENDING            2 /* IRQ pending - replay on enable */
> +#define _IRQ_REPLAY             3 /* IRQ has been replayed but not acked yet */
> +#define _IRQ_GUEST              4 /* IRQ is handled by guest OS(es) */
> +#define _IRQ_MOVE_PENDING       5 /* IRQ is migrating to another CPUs */
> +#define _IRQ_PER_CPU            6 /* IRQ is per CPU */
> +#define _IRQ_GUEST_EOI_PENDING  7 /* IRQ was disabled, pending a guest EOI */
> +#define _IRQ_SHARED             8 /* IRQ is shared */
> +#define IRQ_INPROGRESS          (1u<<_IRQ_INPROGRESS)
> +#define IRQ_DISABLED            (1u<<_IRQ_DISABLED)
> +#define IRQ_PENDING             (1u<<_IRQ_PENDING)
> +#define IRQ_REPLAY              (1u<<_IRQ_REPLAY)
> +#define IRQ_GUEST               (1u<<_IRQ_GUEST)
> +#define IRQ_MOVE_PENDING        (1u<<_IRQ_MOVE_PENDING)
> +#define IRQ_PER_CPU             (1u<<_IRQ_PER_CPU)
> +#define IRQ_GUEST_EOI_PENDING   (1u<<_IRQ_GUEST_EOI_PENDING)
> +#define IRQ_SHARED              (1u<<_IRQ_SHARED)

Why did you rename IRQF_SHARED into IRQ_SHARED?

The F was requested by Jan Beulich to differentiate the input flags for
{setup,request}_irq from the IRQ status flags.

Regards,

-- 
Julien Grall

* Re: [PATCH v8 03/10] xen/arm: inflight irqs during migration
  2014-07-10 18:13 ` [PATCH v8 03/10] xen/arm: inflight irqs during migration Stefano Stabellini
@ 2014-07-17 12:44   ` Ian Campbell
  2014-07-23 14:45     ` Stefano Stabellini
  0 siblings, 1 reply; 37+ messages in thread
From: Ian Campbell @ 2014-07-17 12:44 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> We need to take special care when migrating irqs that are already
> inflight from one vcpu to another. See "The effect of changes to a
> GICD_ITARGETSR", part of chapter 4.3.12 of the ARM Generic Interrupt
> Controller Architecture Specification.
> 
> The main issue from the Xen point of view is that the lr_pending and
> inflight lists are per-vcpu. The lock we take to protect them is also
> per-vcpu.
> 
> In order to avoid issues, if the irq is still lr_pending, we can
> immediately move it to the new vcpu for injection.
> 
> Otherwise if it is in a GICH_LR register, set a new flag
> GIC_IRQ_GUEST_MIGRATING, so that we can recognize when we receive an irq
> while the previous one is still inflight (given that we are only dealing
> with hardware interrupts here, it just means that its LR hasn't been
> cleared yet on the old vcpu).  If GIC_IRQ_GUEST_MIGRATING is set, we
> only set GIC_IRQ_GUEST_QUEUED and interrupt the old vcpu. To know which
> one is the old vcpu, we introduce a new field to pending_irq, called
> vcpu_migrate_from.
> When clearing the LR on the old vcpu, we take special care of injecting
> the interrupt into the new vcpu. To do that we need to release the old
> vcpu lock before taking the new vcpu lock.

I still think this is an awful lot of complexity and scaffolding for
something which is rare on the scale of things and which could be almost
trivially handled by requesting a maintenance interrupt for one EOI and
completing the move at that point.

In order to avoid a simple maint interrupt you are adding code to the
normal interrupt path and a potential SGI back to another processor (and
I hope I'm misreading this but it looks like an SGI back again to finish
off?). That's got to be way more costly to the first interrupt on the
new VCPU than the cost of a maintenance IRQ on the old one.

I think avoiding maintenance interrupts in general is a worthy goal, but
there are times when they are the most appropriate mechanism.

> @@ -344,6 +385,21 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
>      }
>  
>      set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
> +    vcpu_migrate_from = n->vcpu_migrate_from;
> +    /* update QUEUED before MIGRATING */
> +    smp_wmb();
> +    if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &n->status) )
> +    {
> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +
> +        /* The old vcpu must have EOIed the SGI but not cleared the LR.
> +         * Give it a kick. */

You mean SPI I think.

Ian.

* Re: [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable
  2014-07-10 18:13 ` [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable Stefano Stabellini
  2014-07-11 13:10   ` Julien Grall
@ 2014-07-17 12:50   ` Ian Campbell
  2014-07-23 15:04     ` Stefano Stabellini
  1 sibling, 1 reply; 37+ messages in thread
From: Ian Campbell @ 2014-07-17 12:50 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> evtchn_upcall_pending is already set by common code at vcpu creation,
> therefore on ARM we also need to call vgic_vcpu_inject_irq for it.
> Currently we do that from vgic_enable_irqs as a workaround.

Perhaps we should gate these on evtchn_upcall_pending then? That would
make it pretty obvious in most places what it was for.

Other than that suggestion and Julien's request for a comment this looks
good to me.

* Re: [PATCH v8 08/10] xen/arm: take the rank lock before accessing ipriority
  2014-07-10 18:13 ` [PATCH v8 08/10] xen/arm: take the rank lock before accessing ipriority Stefano Stabellini
@ 2014-07-17 12:51   ` Ian Campbell
  2014-07-23 14:57     ` Stefano Stabellini
  0 siblings, 1 reply; 37+ messages in thread
From: Ian Campbell @ 2014-07-17 12:51 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:

> -#define vgic_lock_rank(v, r) spin_lock(&(r)->lock)
> -#define vgic_unlock_rank(v, r) spin_unlock(&(r)->lock)
> +#define vgic_lock_rank(v, r) spin_lock_irqsave(&(r)->lock, flags)
> +#define vgic_unlock_rank(v, r) spin_unlock_irqrestore(&(r)->lock, flags)

Please make flags an explicit macro parameter.

Ian.

* Re: [PATCH v8 10/10] xen/arm: make accesses to desc->status flags atomic
  2014-07-10 18:13 ` [PATCH v8 10/10] xen/arm: make accesses to desc->status flags atomic Stefano Stabellini
@ 2014-07-17 12:52   ` Ian Campbell
  0 siblings, 0 replies; 37+ messages in thread
From: Ian Campbell @ 2014-07-17 12:52 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> Using *_bit manipulation functions on desc->status is safe on arm64:
> status is an unsigned int but is the first field of a struct that
> contains pointers, therefore the alignment of the struct is at least 8
> bytes.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

* Re: [PATCH v8 03/10] xen/arm: inflight irqs during migration
  2014-07-17 12:44   ` Ian Campbell
@ 2014-07-23 14:45     ` Stefano Stabellini
  2014-07-23 15:38       ` Ian Campbell
  0 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-23 14:45 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, Stefano Stabellini

On Thu, 17 Jul 2014, Ian Campbell wrote:
> On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> > We need to take special care when migrating irqs that are already
> > inflight from one vcpu to another. See "The effect of changes to a
> > GICD_ITARGETSR", part of chapter 4.3.12 of the ARM Generic Interrupt
> > Controller Architecture Specification.
> > 
> > The main issue from the Xen point of view is that the lr_pending and
> > inflight lists are per-vcpu. The lock we take to protect them is also
> > per-vcpu.
> > 
> > In order to avoid issues, if the irq is still lr_pending, we can
> > immediately move it to the new vcpu for injection.
> > 
> > Otherwise if it is in a GICH_LR register, set a new flag
> > GIC_IRQ_GUEST_MIGRATING, so that we can recognize when we receive an irq
> > while the previous one is still inflight (given that we are only dealing
> > with hardware interrupts here, it just means that its LR hasn't been
> > cleared yet on the old vcpu).  If GIC_IRQ_GUEST_MIGRATING is set, we
> > only set GIC_IRQ_GUEST_QUEUED and interrupt the old vcpu. To know which
> > one is the old vcpu, we introduce a new field to pending_irq, called
> > vcpu_migrate_from.
> > When clearing the LR on the old vcpu, we take special care of injecting
> > the interrupt into the new vcpu. To do that we need to release the old
> > vcpu lock before taking the new vcpu lock.
> 
> I still think this is an awful lot of complexity and scaffolding for
> something which is rare on the scale of things and which could be almost
> trivially handled by requesting a maintenance interrupt for one EOI and
> completing the move at that point.

Requesting a maintenance interrupt is not as simple as it looks:
- ATM we don't know how to edit a live GICH_LR register; we would have
to add a function for that;
- if we request a maintenance interrupt then we also need to EOI the
physical IRQ, that is something that we don't do anymore (unless
PLATFORM_QUIRK_GUEST_PIRQ_NEED_EOI but that is another matter). We would
need to understand that some physical irqs need to be EOI'ed by Xen and
some don't.

Also requesting a maintenance interrupt would only guarantee that the
vcpu is interrupted as soon as possible, but it won't save us from
having to introduce GIC_IRQ_GUEST_MIGRATING. It would only let us skip
adding vcpu_migrate_from and the 5 lines of code in
vgic_vcpu_inject_irq.

Overall I thought that this approach would be easier.


> In order to avoid a simple maint interrupt you are adding code to the
> normal interrupt path and a potential SGI back to another processor (and
> I hope I'm misreading this but it looks like an SGI back again to finish
> off?). That's got to be way more costly to the first interrupt on the
> new VCPU than the cost of a maintenance IRQ on the old one.
> 
> I think avoiding maintenance interrupts in general is a worthy goal, but
> there are times when they are the most appropriate mechanism.

To be clear the case we are talking about is when the guest kernel wants
to migrate an interrupt that is currently inflight in a GICH_LR register.

Requesting a maintenance interrupt for it would only make sure that the
old vcpu is interrupted soon after the EOI. Without it, we need to
identify which one is the old vcpu (in case of 2 consequent migrations),
I introduced vcpu_migrate_from for that, and kick it when receiving the
second interrupt if the first is still inflight. Exactly and only the
few lines of code you quoted below.

It is one SGI more in the uncommon case when we receive a second
physical interrupt without the old vcpu being interrupted yet.  In the
vast majority of cases the old vcpu has already been interrupted by
something else or by the second irq itself (we haven't changed affinity
yet) and there is no need for the additional SGI.


> > @@ -344,6 +385,21 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
> >      }
> >  
> >      set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
> > +    vcpu_migrate_from = n->vcpu_migrate_from;
> > +    /* update QUEUED before MIGRATING */
> > +    smp_wmb();
> > +    if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &n->status) )
> > +    {
> > +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > +
> > +        /* The old vcpu must have EOIed the SGI but not cleared the LR.
> > +         * Give it a kick. */
> 
> You mean SPI I think.

Yes, you are right

* Re: [PATCH v8 09/10] xen: introduce bit access macros for the IRQ line status flags
  2014-07-11 13:15   ` Julien Grall
@ 2014-07-23 14:52     ` Stefano Stabellini
  0 siblings, 0 replies; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-23 14:52 UTC (permalink / raw)
  To: Julien Grall
  Cc: julien.grall, xen-devel, Ian.Campbell, Jan Beulich, Stefano Stabellini

On Fri, 11 Jul 2014, Julien Grall wrote:
> On 07/10/2014 07:13 PM, Stefano Stabellini wrote:
> > diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
> > index 40c0f3f..af5b247 100644
> > --- a/xen/include/xen/irq.h
> > +++ b/xen/include/xen/irq.h
> > @@ -22,15 +22,24 @@ struct irqaction {
> >  /*
> >   * IRQ line status.
> >   */
> > -#define IRQ_INPROGRESS    (1u<<0) /* IRQ handler active - do not enter! */
> > -#define IRQ_DISABLED      (1u<<1) /* IRQ disabled - do not enter! */
> > -#define IRQ_PENDING       (1u<<2) /* IRQ pending - replay on enable */
> > -#define IRQ_REPLAY        (1u<<3) /* IRQ has been replayed but not acked yet */
> > -#define IRQ_GUEST         (1u<<4) /* IRQ is handled by guest OS(es) */
> > -#define IRQ_MOVE_PENDING  (1u<<5) /* IRQ is migrating to another CPUs */
> > -#define IRQ_PER_CPU       (1u<<6) /* IRQ is per CPU */
> > -#define IRQ_GUEST_EOI_PENDING (1u<<7) /* IRQ was disabled, pending a guest EOI */
> > -#define IRQF_SHARED       (1<<8)  /* IRQ is shared */
> > +#define _IRQ_INPROGRESS         0 /* IRQ handler active - do not enter! */
> > +#define _IRQ_DISABLED           1 /* IRQ disabled - do not enter! */
> > +#define _IRQ_PENDING            2 /* IRQ pending - replay on enable */
> > +#define _IRQ_REPLAY             3 /* IRQ has been replayed but not acked yet */
> > +#define _IRQ_GUEST              4 /* IRQ is handled by guest OS(es) */
> > +#define _IRQ_MOVE_PENDING       5 /* IRQ is migrating to another CPUs */
> > +#define _IRQ_PER_CPU            6 /* IRQ is per CPU */
> > +#define _IRQ_GUEST_EOI_PENDING  7 /* IRQ was disabled, pending a guest EOI */
> > +#define _IRQ_SHARED             8 /* IRQ is shared */
> > +#define IRQ_INPROGRESS          (1u<<_IRQ_INPROGRESS)
> > +#define IRQ_DISABLED            (1u<<_IRQ_DISABLED)
> > +#define IRQ_PENDING             (1u<<_IRQ_PENDING)
> > +#define IRQ_REPLAY              (1u<<_IRQ_REPLAY)
> > +#define IRQ_GUEST               (1u<<_IRQ_GUEST)
> > +#define IRQ_MOVE_PENDING        (1u<<_IRQ_MOVE_PENDING)
> > +#define IRQ_PER_CPU             (1u<<_IRQ_PER_CPU)
> > +#define IRQ_GUEST_EOI_PENDING   (1u<<_IRQ_GUEST_EOI_PENDING)
> > +#define IRQ_SHARED              (1u<<_IRQ_SHARED)
> 
> Why did you rename IRQF_SHARED into IRQ_SHARED?
> 
> The F was requested by Jan Beulich to differentiate the input flags for
> {setup,request}_irq from the IRQ status flags.

That was a mistake, I'll keep IRQF_SHARED.

* Re: [PATCH v8 08/10] xen/arm: take the rank lock before accessing ipriority
  2014-07-17 12:51   ` Ian Campbell
@ 2014-07-23 14:57     ` Stefano Stabellini
  0 siblings, 0 replies; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-23 14:57 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, Stefano Stabellini

On Thu, 17 Jul 2014, Ian Campbell wrote:
> On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> 
> > -#define vgic_lock_rank(v, r) spin_lock(&(r)->lock)
> > -#define vgic_unlock_rank(v, r) spin_unlock(&(r)->lock)
> > +#define vgic_lock_rank(v, r) spin_lock_irqsave(&(r)->lock, flags)
> > +#define vgic_unlock_rank(v, r) spin_unlock_irqrestore(&(r)->lock, flags)
> 
> Please make flags an explicit macro parameter.

OK

* Re: [PATCH v8 05/10] xen/arm: physical irq follow virtual irq
  2014-07-11 13:07   ` Julien Grall
@ 2014-07-23 15:00     ` Stefano Stabellini
  0 siblings, 0 replies; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-23 15:00 UTC (permalink / raw)
  To: Julien Grall; +Cc: julien.grall, xen-devel, Ian.Campbell, Stefano Stabellini

On Fri, 11 Jul 2014, Julien Grall wrote:
> On 07/10/2014 07:13 PM, Stefano Stabellini wrote:
> >  void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
> >  {
> >      const unsigned long mask = r;
> > @@ -277,6 +297,7 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
> >          }
> >          if ( p->desc != NULL )
> >          {
> > +            irq_set_affinity(p->desc, cpumask_of(v_target->processor));
> >              spin_lock_irqsave(&p->desc->lock, flags);
> >              p->desc->handler->enable(p->desc);
> >              spin_unlock_irqrestore(&p->desc->lock, flags);
> > diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> > index 6410280..8d93ccd 100644
> > --- a/xen/include/asm-arm/gic.h
> > +++ b/xen/include/asm-arm/gic.h
> > @@ -321,6 +321,7 @@ struct gic_hw_operations {
> >  void register_gic_ops(const struct gic_hw_operations *ops);
> >  
> >  struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq);
> > +void arch_move_irqs(struct vcpu *v);
> 
> I would stay consistent with x86 by moving this declaration in irq.h.

OK

* Re: [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable
  2014-07-17 12:50   ` Ian Campbell
@ 2014-07-23 15:04     ` Stefano Stabellini
  2014-07-23 16:09       ` Stefano Stabellini
  0 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-23 15:04 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, Stefano Stabellini

On Thu, 17 Jul 2014, Ian Campbell wrote:
> On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> > evtchn_upcall_pending is already set by common code at vcpu creation,
> > therefore on ARM we also need to call vgic_vcpu_inject_irq for it.
> > Currently we do that from vgic_enable_irqs as a workaround.
> 
> Perhaps we should gate these on evtchn_upcall_pending then? That would
> make it pretty obvious in most places what it was for.
> 
> Other than that suggestion and Julien's request for a comment this looks
> good to me.

Makes sense, I'll make the changes

* Re: [PATCH v8 01/10] xen/arm: observe itargets setting in vgic_enable_irqs and vgic_disable_irqs
  2014-07-11 13:01   ` Julien Grall
@ 2014-07-23 15:31     ` Stefano Stabellini
  0 siblings, 0 replies; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-23 15:31 UTC (permalink / raw)
  To: Julien Grall; +Cc: julien.grall, xen-devel, Ian.Campbell, Stefano Stabellini

On Fri, 11 Jul 2014, Julien Grall wrote:
> Hi Stefano,
> 
> On 07/10/2014 07:13 PM, Stefano Stabellini wrote:
> > +/* the rank lock is already taken */
> > +static struct vcpu *_vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
> > +{
> > +    unsigned long target;
> > +    struct vcpu *v_target;
> > +    struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
> > +    ASSERT(spin_is_locked(&rank->lock));
> > +
> > +    target = vgic_byte_read(rank->itargets[(irq%32)/4], 0, irq % 4);
> > +    /* 1-N SPI should be delivered as pending to all the vcpus in the
> > +     * mask, but here we just return the first vcpu for simplicity and
> > +     * because it would be too slow to do otherwise. */
> > +    target = find_first_bit(&target, 8);
> > +    ASSERT(target >= 0 && target < v->domain->max_vcpus);
> > +    v_target = v->domain->vcpu[target];
> > +    return v_target;
> > +}
> > +
> > +/* takes the rank lock */
> > +struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
> > +{
> > +    struct vcpu *v_target;
> > +    struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
> > +
> > +    vgic_lock_rank(v, rank);
> > +    v_target = _vgic_get_target_vcpu(v, irq);
> > +    vgic_unlock_rank(v, rank);
> > +    return v_target;
> > +}
> > +
> 
> itarget is gicv2 specific. GICv3 is using irouter. I don't understand
> why vijay change the rank structure in a later patch...
> 
> Those 2 functions should be moved in vgic-v2.c. You may also need to add
> a callback in the vgic structure.

Yes, you are right. I think I'll keep vgic_get_target_vcpu here and just
move _vgic_get_target_vcpu.

> > diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> > index a0c07bf..6410280 100644
> > --- a/xen/include/asm-arm/gic.h
> > +++ b/xen/include/asm-arm/gic.h
> > @@ -320,6 +320,8 @@ struct gic_hw_operations {
> >  
> >  void register_gic_ops(const struct gic_hw_operations *ops);
> >  
> > +struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq);
> > +
> 
> This should be moved in vgic.h.

OK

* Re: [PATCH v8 03/10] xen/arm: inflight irqs during migration
  2014-07-23 14:45     ` Stefano Stabellini
@ 2014-07-23 15:38       ` Ian Campbell
  2014-07-24 14:48         ` Stefano Stabellini
  0 siblings, 1 reply; 37+ messages in thread
From: Ian Campbell @ 2014-07-23 15:38 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Wed, 2014-07-23 at 15:45 +0100, Stefano Stabellini wrote:
> On Thu, 17 Jul 2014, Ian Campbell wrote:
> > On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> > > We need to take special care when migrating irqs that are already
> > > inflight from one vcpu to another. See "The effect of changes to a
> > > GICD_ITARGETSR", part of chapter 4.3.12 of the ARM Generic Interrupt
> > > Controller Architecture Specification.
> > > 
> > > The main issue from the Xen point of view is that the lr_pending and
> > > inflight lists are per-vcpu. The lock we take to protect them is also
> > > per-vcpu.
> > > 
> > > In order to avoid issues, if the irq is still lr_pending, we can
> > > immediately move it to the new vcpu for injection.
> > > 
> > > Otherwise if it is in a GICH_LR register, set a new flag
> > > GIC_IRQ_GUEST_MIGRATING, so that we can recognize when we receive an irq
> > > while the previous one is still inflight (given that we are only dealing
> > > with hardware interrupts here, it just means that its LR hasn't been
> > > cleared yet on the old vcpu).  If GIC_IRQ_GUEST_MIGRATING is set, we
> > > only set GIC_IRQ_GUEST_QUEUED and interrupt the old vcpu. To know which
> > > one is the old vcpu, we introduce a new field to pending_irq, called
> > > vcpu_migrate_from.
> > > When clearing the LR on the old vcpu, we take special care of injecting
> > > the interrupt into the new vcpu. To do that we need to release the old
> > > vcpu lock before taking the new vcpu lock.
> > 
> > I still think this is an awful lot of complexity and scaffolding for
> > something which is rare on the scale of things and which could be almost
> > trivially handled by requesting a maintenance interrupt for one EOI and
> > completing the move at that point.
> 
> Requesting a maintenance interrupt is not as simple as it looks:
> - ATM we don't know how to edit a live GICH_LR register; we would have
> to add a function for that;

That doesn't sound like a great hardship. Perhaps you can reuse the
setter function anyhow.

> - if we request a maintenance interrupt then we also need to EOI the
> physical IRQ, that is something that we don't do anymore (unless
> PLATFORM_QUIRK_GUEST_PIRQ_NEED_EOI but that is another matter). We would
> need to understand that some physical irqs need to be EOI'ed by Xen and
> some don't.

I was thinking the maintenance interrupt handler would take care of
this.

> Also requesting a maintenance interrupt would only guarantee that the
> vcpu is interrupted as soon as possible, but it won't save us from
> having to introduce GIC_IRQ_GUEST_MIGRATING.

I didn't expect GIC_IRQ_GUEST_MIGRATING to go away. If nothing else you
would need it to flag to the maintenance IRQ that it needs to EOI
+complete the migration.

>  It would only let us skip
> adding vcpu_migrate_from and the 5 lines of code in
> vgic_vcpu_inject_irq.

And the code in gic_update_one_lr I think, and most of
vgic_vcpu_inject_irq. And beyond the raw line count, the
*complexity* would be much lower.

I think it could work like this:

On write to ITARGETSR if the interrupt is active then you set MIGRATING
and update the LR to request a maintenance IRQ.

If another interrupt occurs while this one is active then you mark it
pending just like normal (you don't care if it is migrating or not).

In the maintenance irq handler you check the migrating bit: if it is
clear then there is nothing to do. If it is set then you know the old
vcpu (it's the current one): clear the LR, EOI the interrupt and write
the physical ITARGETSR (in some order, perhaps not that one). Then SGI
the new processor if the interrupt is pending.

If there have been multiple migrations then you don't care, you only
care about the current target right now as you finish it off.

> Overall I thought that this approach would be easier.
> 
> 
> > In order to avoid a simple maint interrupt you are adding code to the
> > normal interrupt path and a potential SGI back to another processor (and
> > I hope I'm misreading this but it looks like an SGI back again to finish
> > off?). That's got to be way more costly to the first interrupt on the
> > new VCPU than the cost of a maintenance IRQ on the old one.
> > 
> > I think avoiding maintenance interrupts in general is a worthy goal, but
> > there are times when they are the most appropriate mechanism.
> 
> To be clear the case we are talking about is when the guest kernel wants
> to migrate an interrupt that is currently inflight in a GICH_LR register.
> 
> Requesting a maintenance interrupt for it would only make sure that the
> old vcpu is interrupted soon after the EOI.

Yes, that's the point though. When you get this notification then you
can finish off the migration pretty much trivially with no worrying
about other inflight interrupt, pending stuff due to lazy handling of LR
cleanup etc, you just update the h/w state and you are done.

>  Without it, we need to
> identify which one is the old vcpu (in case of 2 consequent migrations),
> I introduced vcpu_migrate_from for that, and kick it when receiving the
> second interrupt if the first is still inflight. Exactly and only the
> few lines of code you quoted below.
> 
> It is one SGI more in the uncommon case when we receive a second

Two more I think?

> physical interrupt without the old vcpu being interrupted yet.  In the
> vast majority of cases the old vcpu has already been interrupted by
> something else or by the second irq itself

> (we haven't changed affinity yet)

In patch #5 you will start doing so though, meaning that the extra
SGI(s) will become more frequent, if not the common case for high
frequency IRQs.

But it's not really so much about the SGIs but all the juggling of state
(in the code as well as in our heads) about all the corner cases which
arise due to the lazy clearing of LRs done in the normal case (which I'm
fine with BTW) and ordering of the various bit tests etc. I just think
it could be done in the simplest way possible with no real overhead
because these migration events are in reality rare.

>  and there is no need for the additional SGI.

Ian.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable
  2014-07-23 15:04     ` Stefano Stabellini
@ 2014-07-23 16:09       ` Stefano Stabellini
  2014-07-23 16:11         ` Ian Campbell
  0 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-23 16:09 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel, Ian Campbell

On Wed, 23 Jul 2014, Stefano Stabellini wrote:
> On Thu, 17 Jul 2014, Ian Campbell wrote:
> > On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> > > evtchn_upcall_pending is already set by common code at vcpu creation,
> > > therefore on ARM we also need to call vgic_vcpu_inject_irq for it.
> > > Currently we do that from vgic_enable_irqs as a workaround.
> > 
> > Perhaps we should gate these on evtchn_upcall_pending then? That would
> > make it pretty obvious in most places what it was for.
> > 
> > Other than that suggestion and Julien's request for a comment this looks
> > good to me.
> 
> Makes sense, I'll make the changes

I take it back: checking evtchn_upcall_pending wouldn't work because it
hasn't been set yet. I think it's best not to introduce a dependency on
the order of the calls.

* Re: [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable
  2014-07-23 16:09       ` Stefano Stabellini
@ 2014-07-23 16:11         ` Ian Campbell
  2014-07-23 16:12           ` Stefano Stabellini
  0 siblings, 1 reply; 37+ messages in thread
From: Ian Campbell @ 2014-07-23 16:11 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Wed, 2014-07-23 at 17:09 +0100, Stefano Stabellini wrote:
> On Wed, 23 Jul 2014, Stefano Stabellini wrote:
> > On Thu, 17 Jul 2014, Ian Campbell wrote:
> > > On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> > > > evtchn_upcall_pending is already set by common code at vcpu creation,
> > > > therefore on ARM we also need to call vgic_vcpu_inject_irq for it.
> > > > Currently we do that from vgic_enable_irqs as a workaround.
> > > 
> > > Perhaps we should gate these on evtchn_upcall_pending then? That would
> > > make it pretty obvious in most places what it was for.
> > > 
> > > Other than that suggestion and Julien's request for a comment this looks
> > > good to me.
> > 
> > Makes sense, I'll make the changes
> 
> I take it back: checking evtchn_upcall_pending wouldn't work because it
> hasn't been set yet. I think it's best not to introduce a dependency on
> the order of the calls.

Then shouldn't whatever is setting evtchn_upcall_pending be doing the
inject?

Ian.

* Re: [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable
  2014-07-23 16:11         ` Ian Campbell
@ 2014-07-23 16:12           ` Stefano Stabellini
  2014-07-23 16:16             ` Ian Campbell
  0 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-23 16:12 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, Stefano Stabellini

On Wed, 23 Jul 2014, Ian Campbell wrote:
> On Wed, 2014-07-23 at 17:09 +0100, Stefano Stabellini wrote:
> > On Wed, 23 Jul 2014, Stefano Stabellini wrote:
> > > On Thu, 17 Jul 2014, Ian Campbell wrote:
> > > > On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> > > > > evtchn_upcall_pending is already set by common code at vcpu creation,
> > > > > therefore on ARM we also need to call vgic_vcpu_inject_irq for it.
> > > > > Currently we do that from vgic_enable_irqs as a workaround.
> > > > 
> > > > Perhaps we should gate these on evtchn_upcall_pending then? That would
> > > > make it pretty obvious in most places what it was for.
> > > > 
> > > > Other than that suggestion and Julien's request for a comment this looks
> > > > good to me.
> > > 
> > > Makes sense, I'll make the changes
> > 
> > I take it back: checking evtchn_upcall_pending wouldn't work because it
> > hasn't been set yet. I think it's best not to introduce a dependency on
> > the order of the calls.
> 
> Then shouldn't whatever is setting evtchn_upcall_pending be doing the
> inject?

That is common code, hence the reason for this patch...

* Re: [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable
  2014-07-23 16:12           ` Stefano Stabellini
@ 2014-07-23 16:16             ` Ian Campbell
  2014-07-24 14:37               ` Stefano Stabellini
  0 siblings, 1 reply; 37+ messages in thread
From: Ian Campbell @ 2014-07-23 16:16 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Wed, 2014-07-23 at 17:12 +0100, Stefano Stabellini wrote:
> On Wed, 23 Jul 2014, Ian Campbell wrote:
> > On Wed, 2014-07-23 at 17:09 +0100, Stefano Stabellini wrote:
> > > On Wed, 23 Jul 2014, Stefano Stabellini wrote:
> > > > On Thu, 17 Jul 2014, Ian Campbell wrote:
> > > > > On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> > > > > > evtchn_upcall_pending is already set by common code at vcpu creation,
> > > > > > therefore on ARM we also need to call vgic_vcpu_inject_irq for it.
> > > > > > Currently we do that from vgic_enable_irqs as a workaround.
> > > > > 
> > > > > Perhaps we should gate these on evtchn_upcall_pending then? That would
> > > > > make it pretty obvious in most places what it was for.
> > > > > 
> > > > > Other than that suggestion and Julien's request for a comment this looks
> > > > > good to me.
> > > > 
> > > > Makes sense, I'll make the changes
> > > 
> > > I take it back: checking evtchn_upcall_pending wouldn't work because it
> > > hasn't been set yet. I think it's best not to introduce a dependency on
> > > the order of the calls.
> > 
> > Then shouldn't whatever is setting evtchn_upcall_pending be doing the
> > inject?
> 
> That is common code, hence the reason for this patch...

Some sort of arch callback at that point doesn't seem unreasonable.

* Re: [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable
  2014-07-23 16:16             ` Ian Campbell
@ 2014-07-24 14:37               ` Stefano Stabellini
  0 siblings, 0 replies; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-24 14:37 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, Stefano Stabellini

On Wed, 23 Jul 2014, Ian Campbell wrote:
> On Wed, 2014-07-23 at 17:12 +0100, Stefano Stabellini wrote:
> > On Wed, 23 Jul 2014, Ian Campbell wrote:
> > > On Wed, 2014-07-23 at 17:09 +0100, Stefano Stabellini wrote:
> > > > On Wed, 23 Jul 2014, Stefano Stabellini wrote:
> > > > > On Thu, 17 Jul 2014, Ian Campbell wrote:
> > > > > > On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> > > > > > > evtchn_upcall_pending is already set by common code at vcpu creation,
> > > > > > > therefore on ARM we also need to call vgic_vcpu_inject_irq for it.
> > > > > > > Currently we do that from vgic_enable_irqs as a workaround.
> > > > > > 
> > > > > > Perhaps we should gate these on evtchn_upcall_pending then? That would
> > > > > > make it pretty obvious in most places what it was for.
> > > > > > 
> > > > > > Other than that suggestion and Julien's request for a comment this looks
> > > > > > good to me.
> > > > > 
> > > > > Makes sense, I'll make the changes
> > > > 
> > > > I take it back: checking evtchn_upcall_pending wouldn't work because it
> > > > hasn't been set yet. I think it's best not to introduce a dependency on
> > > > the order of the calls.
> > > 
> > > Then shouldn't whatever is setting evtchn_upcall_pending be doing the
> > > inject?
> > 
> > That is common code, hence the reason for this patch...
> 
> Some sort of arch callback at that point doesn't seem unreasonable.

All right, that might be better. I'll do that.

* Re: [PATCH v8 03/10] xen/arm: inflight irqs during migration
  2014-07-23 15:38       ` Ian Campbell
@ 2014-07-24 14:48         ` Stefano Stabellini
  2014-07-24 16:41           ` Ian Campbell
  0 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-24 14:48 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, Stefano Stabellini

On Wed, 23 Jul 2014, Ian Campbell wrote:
> On Wed, 2014-07-23 at 15:45 +0100, Stefano Stabellini wrote:
> > On Thu, 17 Jul 2014, Ian Campbell wrote:
> > > On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> > > > We need to take special care when migrating irqs that are already
> > > > inflight from one vcpu to another. See "The effect of changes to an
> > > > GICD_ITARGETSR", part of chapter 4.3.12 of the ARM Generic Interrupt
> > > > Controller Architecture Specification.
> > > > 
> > > > The main issue from the Xen point of view is that the lr_pending and
> > > > inflight lists are per-vcpu. The lock we take to protect them is also
> > > > per-vcpu.
> > > > 
> > > > In order to avoid issues, if the irq is still lr_pending, we can
> > > > immediately move it to the new vcpu for injection.
> > > > 
> > > > Otherwise if it is in a GICH_LR register, set a new flag
> > > > GIC_IRQ_GUEST_MIGRATING, so that we can recognize when we receive an irq
> > > > while the previous one is still inflight (given that we are only dealing
> > > > with hardware interrupts here, it just means that its LR hasn't been
> > > > cleared yet on the old vcpu).  If GIC_IRQ_GUEST_MIGRATING is set, we
> > > > only set GIC_IRQ_GUEST_QUEUED and interrupt the old vcpu. To know which
> > > > one is the old vcpu, we introduce a new field to pending_irq, called
> > > > vcpu_migrate_from.
> > > > When clearing the LR on the old vcpu, we take special care of injecting
> > > > the interrupt into the new vcpu. To do that we need to release the old
> > > > vcpu lock before taking the new vcpu lock.
> > > 
> > > I still think this is an awful lot of complexity and scaffolding for
> > > something which is rare on the scale of things and which could be almost
> > > trivially handled by requesting a maintenance interrupt for one EOI and
> > > completing the move at that point.
> > 
> > Requesting a maintenance interrupt is not as simple as it looks:
> > - ATM we don't know how to edit a living GICH_LR register, we would have
> > to add a function for that;
> 
> That doesn't sound like a great hardship. Perhaps you can reuse the
> setter function anyhow.
> 
> > - if we request a maintenance interrupt then we also need to EOI the
> > physical IRQ, that is something that we don't do anymore (unless
> > PLATFORM_QUIRK_GUEST_PIRQ_NEED_EOI but that is another matter). We would
> > need to understand that some physical irqs need to be EOI'ed by Xen and
> > some don't.
> 
> I was thinking the maintenance interrupt handler would take care of
> this.

In that case we would have to resurrect the code to loop over the
GICH_EISR* registers from maintenance_interrupt.
Anything can be done, I am just pointing out that this alternative
approach is not as cheap as it might sound.


> > Also requesting a maintenance interrupt would only guarantee that the
> > vcpu is interrupted as soon as possible, but it won't save us from
> > having to introduce GIC_IRQ_GUEST_MIGRATING.
> 
> I didn't expect GIC_IRQ_GUEST_MIGRATING to go away. If nothing else you
> would need it to flag to the maintenance IRQ that it needs to EOI
> +complete the migration.
> 
> >  It would only let us skip
> > adding vcpu_migrate_from and the 5 lines of code in
> > vgic_vcpu_inject_irq.
> 
> And the code in gic_update_one_lr I think, and most of
> vgic_vcpu_inject-cpu.
> And more than the raw lines of code the
> *complexity* would be much lower.

I don't know about the complexity. One thing is to completely get rid of
maintenance interrupts. Another is to get rid of them in most cases but
not all. Having to deal both with not having them and with having them
increases complexity, at least in my view. It's simpler to think that you
have them all the time or never.


In any case replying to this email made me realize that there is indeed
a lot of unneeded code in this patch, especially given that writing to
the physical ITARGETSR is guaranteed to affect pending (non active)
irqs.  From the ARM ARM:

"Software can write to an GICD_ITARGETSR at any time. Any change to a CPU
targets field value:

[...]

Has an effect on any pending interrupts. This means:
 — adding a CPU interface to the target list of a pending interrupt makes
   that interrupt pending on that CPU interface
 — removing a CPU interface from the target list of a pending interrupt
   removes the pending state of that interrupt on that CPU interface."


I think we can rely on this behaviour. Thanks to patch #5 we know that
we'll be receiving the second physical irq on the old cpu and from then
on the next ones always on the new cpu. So we won't need
vcpu_migrate_from, the complex ordering of MIGRATING and QUEUED, or the
maintenance_interrupt.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

* Re: [PATCH v8 03/10] xen/arm: inflight irqs during migration
  2014-07-24 14:48         ` Stefano Stabellini
@ 2014-07-24 16:41           ` Ian Campbell
  2014-07-24 16:45             ` Stefano Stabellini
  0 siblings, 1 reply; 37+ messages in thread
From: Ian Campbell @ 2014-07-24 16:41 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Thu, 2014-07-24 at 15:48 +0100, Stefano Stabellini wrote:
> On Wed, 23 Jul 2014, Ian Campbell wrote:
> > On Wed, 2014-07-23 at 15:45 +0100, Stefano Stabellini wrote:
> > > On Thu, 17 Jul 2014, Ian Campbell wrote:
> > > > On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> > > > > We need to take special care when migrating irqs that are already
> > > > > inflight from one vcpu to another. See "The effect of changes to an
> > > > > GICD_ITARGETSR", part of chapter 4.3.12 of the ARM Generic Interrupt
> > > > > Controller Architecture Specification.
> > > > > 
> > > > > The main issue from the Xen point of view is that the lr_pending and
> > > > > inflight lists are per-vcpu. The lock we take to protect them is also
> > > > > per-vcpu.
> > > > > 
> > > > > In order to avoid issues, if the irq is still lr_pending, we can
> > > > > immediately move it to the new vcpu for injection.
> > > > > 
> > > > > Otherwise if it is in a GICH_LR register, set a new flag
> > > > > GIC_IRQ_GUEST_MIGRATING, so that we can recognize when we receive an irq
> > > > > while the previous one is still inflight (given that we are only dealing
> > > > > with hardware interrupts here, it just means that its LR hasn't been
> > > > > cleared yet on the old vcpu).  If GIC_IRQ_GUEST_MIGRATING is set, we
> > > > > only set GIC_IRQ_GUEST_QUEUED and interrupt the old vcpu. To know which
> > > > > one is the old vcpu, we introduce a new field to pending_irq, called
> > > > > vcpu_migrate_from.
> > > > > When clearing the LR on the old vcpu, we take special care of injecting
> > > > > the interrupt into the new vcpu. To do that we need to release the old
> > > > > vcpu lock before taking the new vcpu lock.
> > > > 
> > > > I still think this is an awful lot of complexity and scaffolding for
> > > > something which is rare on the scale of things and which could be almost
> > > > trivially handled by requesting a maintenance interrupt for one EOI and
> > > > completing the move at that point.
> > > 
> > > Requesting a maintenance interrupt is not as simple as it looks:
> > > - ATM we don't know how to edit a living GICH_LR register, we would have
> > > to add a function for that;
> > 
> > That doesn't sound like a great hardship. Perhaps you can reuse the
> > setter function anyhow.
> > 
> > > - if we request a maintenance interrupt then we also need to EOI the
> > > physical IRQ, that is something that we don't do anymore (unless
> > > PLATFORM_QUIRK_GUEST_PIRQ_NEED_EOI but that is another matter). We would
> > > need to understand that some physical irqs need to be EOI'ed by Xen and
> > > some don't.
> > 
> > I was thinking the maintenance interrupt handler would take care of
> > this.
> 
> In that case we would have to resurrect the code to loop over the
> GICH_EISR* registers from maintenance_interrupt.
> Anything can be done, I am just pointing out that this alternative
> approach is not as cheap as it might sound.

It's simple though, that's the benefit.

> > > Also requesting a maintenance interrupt would only guarantee that the
> > > vcpu is interrupted as soon as possible, but it won't save us from
> > > having to introduce GIC_IRQ_GUEST_MIGRATING.
> > 
> > I didn't expect GIC_IRQ_GUEST_MIGRATING to go away. If nothing else you
> > would need it to flag to the maintenance IRQ that it needs to EOI
> > +complete the migration.
> > 
> > >  It would only let us skip
> > > adding vcpu_migrate_from and the 5 lines of code in
> > > vgic_vcpu_inject_irq.
> > 
> > And the code in gic_update_one_lr I think, and most of
> > vgic_vcpu_inject-cpu.
> > And more than the raw lines of code the
> > *complexity* would be much lower.
> 
> I don't know about the complexity. One thing is to completely get rid of
> maintenance interrupts. Another is to get rid of them in most cases but
> not all. Having to deal both with not having them and with having them
> increases complexity, at least in my view. It's simpler to think that you
> have them all the time or never.

The way I view it is that the maintenance interrupt path is the dumb and
obvious one which is always there and can always be used and doesn't
need thinking about. Then the other optimisations are finding ways to
avoid actually using it, but can always fall back to the dumb way if
something too complex to deal with occurs.

> In any case replying to this email made me realize that there is indeed
> a lot of unneeded code in this patch, especially given that writing to
> the physical ITARGETSR is guaranteed to affect pending (non active)
> irqs.  From the ARM ARM:
> 
> "Software can write to an GICD_ITARGETSR at any time. Any change to a CPU
> targets field value:
> 
> [...]
> 
> Has an effect on any pending interrupts. This means:
>  — adding a CPU interface to the target list of a pending interrupt makes
>    that interrupt pending on that CPU interface
>  — removing a CPU interface from the target list of a pending interrupt
>    removes the pending state of that interrupt on that CPU interface."
> 
> 
> I think we can rely on this behaviour. Thanks to patch #5 we know that
> we'll be receiving the second physical irq on the old cpu and from then
> on the next ones always on the new cpu. So we won't need
> vcpu_migrate_from, the complex ordering of MIGRATING and QUEUED, or the
> maintenance_interrupt.

That would be good to avoid all that for sure and would certainly impact
my opinion of the complexity cost of this stuff.

Are you sure about the second physical IRQ always hitting on the source
pCPU though? I'm unclear about where the physical ITARGETSR gets written
in the scheme you are proposing.

Ian.



* Re: [PATCH v8 03/10] xen/arm: inflight irqs during migration
  2014-07-24 16:41           ` Ian Campbell
@ 2014-07-24 16:45             ` Stefano Stabellini
  2014-07-24 16:48               ` Ian Campbell
  0 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-24 16:45 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, Stefano Stabellini

On Thu, 24 Jul 2014, Ian Campbell wrote:
> On Thu, 2014-07-24 at 15:48 +0100, Stefano Stabellini wrote:
> > On Wed, 23 Jul 2014, Ian Campbell wrote:
> > > On Wed, 2014-07-23 at 15:45 +0100, Stefano Stabellini wrote:
> > > > On Thu, 17 Jul 2014, Ian Campbell wrote:
> > > > > On Thu, 2014-07-10 at 19:13 +0100, Stefano Stabellini wrote:
> > > > > > We need to take special care when migrating irqs that are already
> > > > > > inflight from one vcpu to another. See "The effect of changes to an
> > > > > > GICD_ITARGETSR", part of chapter 4.3.12 of the ARM Generic Interrupt
> > > > > > Controller Architecture Specification.
> > > > > > 
> > > > > > The main issue from the Xen point of view is that the lr_pending and
> > > > > > inflight lists are per-vcpu. The lock we take to protect them is also
> > > > > > per-vcpu.
> > > > > > 
> > > > > > In order to avoid issues, if the irq is still lr_pending, we can
> > > > > > immediately move it to the new vcpu for injection.
> > > > > > 
> > > > > > Otherwise if it is in a GICH_LR register, set a new flag
> > > > > > GIC_IRQ_GUEST_MIGRATING, so that we can recognize when we receive an irq
> > > > > > while the previous one is still inflight (given that we are only dealing
> > > > > > with hardware interrupts here, it just means that its LR hasn't been
> > > > > > cleared yet on the old vcpu).  If GIC_IRQ_GUEST_MIGRATING is set, we
> > > > > > only set GIC_IRQ_GUEST_QUEUED and interrupt the old vcpu. To know which
> > > > > > one is the old vcpu, we introduce a new field to pending_irq, called
> > > > > > vcpu_migrate_from.
> > > > > > When clearing the LR on the old vcpu, we take special care of injecting
> > > > > > the interrupt into the new vcpu. To do that we need to release the old
> > > > > > vcpu lock before taking the new vcpu lock.
> > > > > 
> > > > > I still think this is an awful lot of complexity and scaffolding for
> > > > > something which is rare on the scale of things and which could be almost
> > > > > trivially handled by requesting a maintenance interrupt for one EOI and
> > > > > completing the move at that point.
> > > > 
> > > > Requesting a maintenance interrupt is not as simple as it looks:
> > > > - ATM we don't know how to edit a living GICH_LR register, we would have
> > > > to add a function for that;
> > > 
> > > That doesn't sound like a great hardship. Perhaps you can reuse the
> > > setter function anyhow.
> > > 
> > > > - if we request a maintenance interrupt then we also need to EOI the
> > > > physical IRQ, that is something that we don't do anymore (unless
> > > > PLATFORM_QUIRK_GUEST_PIRQ_NEED_EOI but that is another matter). We would
> > > > need to understand that some physical irqs need to be EOI'ed by Xen and
> > > > some don't.
> > > 
> > > I was thinking the maintenance interrupt handler would take care of
> > > this.
> > 
> > In that case we would have to resurrect the code to loop over the
> > GICH_EISR* registers from maintenance_interrupt.
> > Anything can be done, I am just pointing out that this alternative
> > approach is not as cheap as it might sound.
> 
> It's simple though, that's the benefit.
> 
> > > > Also requesting a maintenance interrupt would only guarantee that the
> > > > vcpu is interrupted as soon as possible, but it won't save us from
> > > > having to introduce GIC_IRQ_GUEST_MIGRATING.
> > > 
> > > I didn't expect GIC_IRQ_GUEST_MIGRATING to go away. If nothing else you
> > > would need it to flag to the maintenance IRQ that it needs to EOI
> > > +complete the migration.
> > > 
> > > >  It would only let us skip
> > > > adding vcpu_migrate_from and the 5 lines of code in
> > > > vgic_vcpu_inject_irq.
> > > 
> > > And the code in gic_update_one_lr I think, and most of
> > > vgic_vcpu_inject-cpu.
> > > And more than the raw lines of code the
> > > *complexity* would be much lower.
> > 
> > I don't know about the complexity. One thing is to completely get rid of
> > maintenance interrupts. Another is to get rid of them in most cases but
> > not all. Having to deal both with not having them and with having them
> > increases complexity, at least in my view. It's simpler to think that you
> > have them all the time or never.
> 
> The way I view it is that the maintenance interrupt path is the dumb and
> obvious one which is always there and can always be used and doesn't
> need thinking about. Then the other optimisations are finding ways to
> avoid actually using it, but can always fall back to the dumb way if
> something too complex to deal with occurs.
> 
> > In any case replying to this email made me realize that there is indeed
> > a lot of unneeded code in this patch, especially given that writing to
> > the physical ITARGETSR is guaranteed to affect pending (non active)
> > irqs.  From the ARM ARM:
> > 
> > "Software can write to an GICD_ITARGETSR at any time. Any change to a CPU
> > targets field value:
> > 
> > [...]
> > 
> > Has an effect on any pending interrupts. This means:
> >  — adding a CPU interface to the target list of a pending interrupt makes
> >    that interrupt pending on that CPU interface
> >  — removing a CPU interface from the target list of a pending interrupt
> >    removes the pending state of that interrupt on that CPU interface."
> > 
> > 
> > I think we can rely on this behaviour. Thanks to patch #5 we know that
> > we'll be receiving the second physical irq on the old cpu and from then
> > on the next ones always on the new cpu. So we won't need
> > vcpu_migrate_from, the complex ordering of MIGRATING and QUEUED, or the
> > maintenance_interrupt.
> 
> That would be good to avoid all that for sure and would certainly impact
> my opinion of the complexity cost of this stuff.
> 
> Are you sure about the second physical IRQ always hitting on the source
> pCPU though? I'm unclear about where the physical ITARGETSR gets written
> in the scheme you are proposing.

It gets written right away if there are no inflight irqs. Otherwise it
gets written when clearing the LRs. That's why we are sure it is going
to hit the old cpu. If the vcpu gets descheduled after EOIing the irq,
that is also fine because Xen is going to clear the LRs on hypervisor
entry.


* Re: [PATCH v8 03/10] xen/arm: inflight irqs during migration
  2014-07-24 16:45             ` Stefano Stabellini
@ 2014-07-24 16:48               ` Ian Campbell
  2014-07-24 16:49                 ` Stefano Stabellini
  0 siblings, 1 reply; 37+ messages in thread
From: Ian Campbell @ 2014-07-24 16:48 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Thu, 2014-07-24 at 17:45 +0100, Stefano Stabellini wrote:
> > Are you sure about the second physical IRQ always hitting on the source
> > pCPU though? I'm unclear about where the physical ITARGETSR gets written
> > in the scheme you are proposing.
> 
> It gets written right away if there are no inflight irqs.

There's no way that something can be pending in the physical GIC at this
point? i.e. because it happened since we took the trap?

>  Otherwise it
> gets written when clearing the LRs. That's why we are sure it is going
> to hit the old cpu. If the vcpu gets descheduled after EOIing the irq,
> that is also fine because Xen is going to clear the LRs on hypervisor
> entry.

Ian.

* Re: [PATCH v8 03/10] xen/arm: inflight irqs during migration
  2014-07-24 16:48               ` Ian Campbell
@ 2014-07-24 16:49                 ` Stefano Stabellini
  2014-07-25  9:08                   ` Ian Campbell
  0 siblings, 1 reply; 37+ messages in thread
From: Stefano Stabellini @ 2014-07-24 16:49 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, Stefano Stabellini

On Thu, 24 Jul 2014, Ian Campbell wrote:
> On Thu, 2014-07-24 at 17:45 +0100, Stefano Stabellini wrote:
> > > Are you sure about the second physical IRQ always hitting on the source
> > > pCPU though? I'm unclear about where the physical ITARGETSR gets written
> > > in the scheme you are proposing.
> > 
> > It gets written right away if there are no inflight irqs.
> 
> There's no way that something can be pending in the physical GIC at this
> point? i.e. because it happened since we took the trap?

If it is pending it is OK: the ARM ARM states that writes to ITARGETSR
affect pending irqs too. Only active irqs are not affected. But we deal
with that case separately.


> >  Otherwise it
> > gets written when clearing the LRs. That's why we are sure it is going
> > to hit the old cpu. If the vcpu gets descheduled after EOIing the irq,
> > that is also fine because Xen is going to clear the LRs on hypervisor
> > entry.
> 
> Ian.
> 

* Re: [PATCH v8 03/10] xen/arm: inflight irqs during migration
  2014-07-24 16:49                 ` Stefano Stabellini
@ 2014-07-25  9:08                   ` Ian Campbell
  0 siblings, 0 replies; 37+ messages in thread
From: Ian Campbell @ 2014-07-25  9:08 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Thu, 2014-07-24 at 17:49 +0100, Stefano Stabellini wrote:
> On Thu, 24 Jul 2014, Ian Campbell wrote:
> > On Thu, 2014-07-24 at 17:45 +0100, Stefano Stabellini wrote:
> > > > Are you sure about the second physical IRQ always hitting on the source
> > > > pCPU though? I'm unclear about where the physical ITARGETSR gets written
> > > > in the scheme you are proposing.
> > > 
> > > It gets written right away if there are no inflight irqs.
> > 
> > There's no way that something can be pending in the physical GIC at this
> > point? i.e. because it happened since we took the trap?
> 
> If it is pending it is OK: the ARM ARM states that writes to ITARGETSR
> affect pending irqs too. Only active irqs are not affected. But we deal
> with that case separately.

So at the point where we write itarget the pending IRQ can potentially
be injected onto the new pCPU immediately, since it won't have its IRQs
disabled etc.

Does our locking cope with that case?

Also the pending we are talking about here is in the physical
distributor, right? Not the various software state bits which we track.

So an IRQ which we have in lr_pending is actually active in the real
distributor (since we must have taken the interrupt to get it onto our
lists). I'm not sure how/if that changes your reasoning about these
things.

Ian.

end of thread, other threads:[~2014-07-25  9:08 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
2014-07-10 18:12 [PATCH v8 00/10] gic and vgic fixes and improvements Stefano Stabellini
2014-07-10 18:13 ` [PATCH v8 01/10] xen/arm: observe itargets setting in vgic_enable_irqs and vgic_disable_irqs Stefano Stabellini
2014-07-11 13:01   ` Julien Grall
2014-07-23 15:31     ` Stefano Stabellini
2014-07-10 18:13 ` [PATCH v8 02/10] xen/arm: move setting GIC_IRQ_GUEST_QUEUED earlier Stefano Stabellini
2014-07-10 18:13 ` [PATCH v8 03/10] xen/arm: inflight irqs during migration Stefano Stabellini
2014-07-17 12:44   ` Ian Campbell
2014-07-23 14:45     ` Stefano Stabellini
2014-07-23 15:38       ` Ian Campbell
2014-07-24 14:48         ` Stefano Stabellini
2014-07-24 16:41           ` Ian Campbell
2014-07-24 16:45             ` Stefano Stabellini
2014-07-24 16:48               ` Ian Campbell
2014-07-24 16:49                 ` Stefano Stabellini
2014-07-25  9:08                   ` Ian Campbell
2014-07-10 18:13 ` [PATCH v8 04/10] xen/arm: support irq delivery to vcpu > 0 Stefano Stabellini
2014-07-10 18:13 ` [PATCH v8 05/10] xen/arm: physical irq follow virtual irq Stefano Stabellini
2014-07-11 13:07   ` Julien Grall
2014-07-23 15:00     ` Stefano Stabellini
2014-07-10 18:13 ` [PATCH v8 06/10] xen: introduce sched_move_irqs Stefano Stabellini
2014-07-10 18:13 ` [PATCH v8 07/10] xen/arm: remove workaround to inject evtchn_irq on irq enable Stefano Stabellini
2014-07-11 13:10   ` Julien Grall
2014-07-17 12:50   ` Ian Campbell
2014-07-23 15:04     ` Stefano Stabellini
2014-07-23 16:09       ` Stefano Stabellini
2014-07-23 16:11         ` Ian Campbell
2014-07-23 16:12           ` Stefano Stabellini
2014-07-23 16:16             ` Ian Campbell
2014-07-24 14:37               ` Stefano Stabellini
2014-07-10 18:13 ` [PATCH v8 08/10] xen/arm: take the rank lock before accessing ipriority Stefano Stabellini
2014-07-17 12:51   ` Ian Campbell
2014-07-23 14:57     ` Stefano Stabellini
2014-07-10 18:13 ` [PATCH v8 09/10] xen: introduce bit access macros for the IRQ line status flags Stefano Stabellini
2014-07-11 13:15   ` Julien Grall
2014-07-23 14:52     ` Stefano Stabellini
2014-07-10 18:13 ` [PATCH v8 10/10] xen/arm: make accesses to desc->status flags atomic Stefano Stabellini
2014-07-17 12:52   ` Ian Campbell
