* [PATCH v6 0/10] remove maintenance interrupts
@ 2014-04-02 15:00 Stefano Stabellini
  2014-04-02 15:01 ` [PATCH v6 01/10] xen/arm: no need to set HCR_VI when using the vgic to inject irqs Stefano Stabellini
                   ` (9 more replies)
  0 siblings, 10 replies; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-02 15:00 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Ian Campbell, Stefano Stabellini

Hi all,
this patch series removes any need for maintenance interrupts for both
hardware and software interrupts in Xen.
It achieves this by using the GICH_LR_HW bit for hardware interrupts
and by checking the status of the GICH_LR registers on return to the
guest, clearing the registers that are invalid and handling the
lifecycle of the corresponding interrupts in Xen data structures.
It also improves priority handling, keeping the highest-priority
outstanding interrupts in the GICH_LR registers.
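
For reference, the core of the approach is the way the LRs are now
programmed (a condensed sketch of gic_set_lr as it looks after patch #4;
see that patch for the real code and locking):

    /* Sketch: fill one GICH list register for the interrupt to inject.
     * For a hardware interrupt (p->desc != NULL) GICH_LR_HW is set, so
     * the GIC deactivates the physical interrupt when the guest EOIs
     * the virtual one and no maintenance interrupt is requested. */
    lr_reg = GICH_LR_PENDING |
             ((p->priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
             ((p->irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
    if ( p->desc != NULL )
        lr_reg |= GICH_LR_HW | (p->desc->irq << GICH_LR_PHYSICAL_SHIFT);
    GICH[GICH_LR + lr] = lr_reg;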


Changes in v6:
- remove double spin_lock on the vgic.lock introduced in v5.

Changes in v5:
- introduce lr_all_full() helper;
- do not rename virtual_irq to irq;
- replace "const long unsigned int" with "const unsigned long";
- remove useless "& GICH_LR_PHYSICAL_MASK" in gic_set_lr;
- add a comment in maintenance_interrupt to explain its new purpose;
- introduce gic_clear_one_lr;
- don't print p->lr in case the irq is in lr_pending, as it doesn't have
an LR associated with it;
- improve packing of struct pending_irq;
- #define GIC_INVALID_LR and use it instead of nr_lrs;
- gic_remove_from_queues needs to be protected with a vgic lock;
- introduce ASSERTs to check the vgic is locked and interrupts are
disabled;
- improve in-code comments;
- use list_for_each_entry_reverse instead of writing my own list walker.

Changes in v4:
- rebase;
- merged patch #3 and #4 into a single patch;
- improved in-code comments;
- in gic_events_need_delivery go through inflight_irqs and only consider
enabled irqs;
- remove debug patch.

Changes in v3:
- add "no need to set HCR_VI when using the vgic to inject irqs";
- add "s/gic_set_guest_irq/gic_raise_guest_irq";
- add "xen/arm: call gic_clear_lrs on entry to the hypervisor";
- do not use the PENDING and ACTIVE state for HW interrupts;
- unify the inflight and non-inflight code paths in
vgic_vcpu_inject_irq;
- remove "avoid taking unconditionally the vgic.lock in gic_clear_lrs";
- add "xen/arm: gic_events_need_delivery and irq priorities";
- use spin_lock_irqsave and spin_unlock_irqrestore in gic_dump_info.

Changes in v2:
- do not assume physical IRQ == virtual IRQ;
- refactor gic_set_lr;
- simplify gic_clear_lrs;
- disable/enable the GICH_HCR_UIE bit in GICH_HCR;
- only enable GICH_HCR_UIE if this_cpu(lr_mask) == ((1 << nr_lrs) - 1);
- add a patch to keep track of the LR number in pending_irq;
- add a patch to set GICH_LR_PENDING to inject a second irq while the
first one is still active;
- add a patch to simplify and reduce the usage of gic.lock;
- add a patch to reduce the usage of vgic.lock;
- add a patch to use GICH_ELSR[01] to avoid reading all the GICH_LRs in
gic_clear_lrs;
- add a debug patch to print more info in gic_dump_info.



Stefano Stabellini (10):
      xen/arm: no need to set HCR_VI when using the vgic to inject irqs
      xen/arm: remove unused virtual parameter from vgic_vcpu_inject_irq
      xen/arm: set GICH_HCR_UIE if all the LRs are in use
      xen/arm: support HW interrupts, do not request maintenance_interrupts
      xen/arm: nr_lrs should be uint8_t
      xen/arm: keep track of the GICH_LR used for the irq in struct pending_irq
      xen/arm: s/gic_set_guest_irq/gic_raise_guest_irq
      xen/arm: second irq injection while the first irq is still inflight
      xen/arm: don't protect GICH and lr_queue accesses with gic.lock
      xen/arm: gic_events_need_delivery and irq priorities

 xen/arch/arm/domain.c        |    2 +-
 xen/arch/arm/gic.c           |  280 +++++++++++++++++++++++++-----------------
 xen/arch/arm/irq.c           |    2 +-
 xen/arch/arm/time.c          |    2 +-
 xen/arch/arm/traps.c         |   10 ++
 xen/arch/arm/vgic.c          |   47 +++----
 xen/arch/arm/vtimer.c        |    4 +-
 xen/include/asm-arm/domain.h |   14 ++-
 xen/include/asm-arm/gic.h    |   10 +-
 9 files changed, 223 insertions(+), 148 deletions(-)

git://xenbits.xen.org/people/sstabellini/xen-unstable.git no_maintenance_interrupts-v6


* [PATCH v6 01/10] xen/arm: no need to set HCR_VI when using the vgic to inject irqs
  2014-04-02 15:00 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
@ 2014-04-02 15:01 ` Stefano Stabellini
  2014-04-02 15:01 ` [PATCH v6 02/10] xen/arm: remove unused virtual parameter from vgic_vcpu_inject_irq Stefano Stabellini
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-02 15:01 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

HCR_VI forces the guest to resume execution in IRQ mode and can actually
cause spurious interrupt injections.
The GIC is capable of injecting interrupts into the guest and causing it
to switch to IRQ mode automatically, without any need for the hypervisor
to set HCR_VI manually.

See ARM ARM B1.8.11 and chapter 5.4 of the Generic Interrupt Controller
Architecture Specification.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

---

Changes in v4:
- improve commit message.
---
 xen/arch/arm/gic.c |   20 --------------------
 1 file changed, 20 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 91a2982..b388ef3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -726,22 +726,6 @@ void gic_clear_pending_irqs(struct vcpu *v)
     spin_unlock_irqrestore(&gic.lock, flags);
 }
 
-static void gic_inject_irq_start(void)
-{
-    register_t hcr = READ_SYSREG(HCR_EL2);
-    WRITE_SYSREG(hcr | HCR_VI, HCR_EL2);
-    isb();
-}
-
-static void gic_inject_irq_stop(void)
-{
-    register_t hcr = READ_SYSREG(HCR_EL2);
-    if (hcr & HCR_VI) {
-        WRITE_SYSREG(hcr & ~HCR_VI, HCR_EL2);
-        isb();
-    }
-}
-
 int gic_events_need_delivery(void)
 {
     return (!list_empty(&current->arch.vgic.lr_pending) ||
@@ -754,10 +738,6 @@ void gic_inject(void)
         vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq, 1);
 
     gic_restore_pending_irqs(current);
-    if (!gic_events_need_delivery())
-        gic_inject_irq_stop();
-    else
-        gic_inject_irq_start();
 }
 
 int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
-- 
1.7.10.4


* [PATCH v6 02/10] xen/arm: remove unused virtual parameter from vgic_vcpu_inject_irq
  2014-04-02 15:00 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
  2014-04-02 15:01 ` [PATCH v6 01/10] xen/arm: no need to set HCR_VI when using the vgic to inject irqs Stefano Stabellini
@ 2014-04-02 15:01 ` Stefano Stabellini
  2014-04-02 15:01 ` [PATCH v6 03/10] xen/arm: set GICH_HCR_UIE if all the LRs are in use Stefano Stabellini
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-02 15:01 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/domain.c     |    2 +-
 xen/arch/arm/gic.c        |    2 +-
 xen/arch/arm/irq.c        |    2 +-
 xen/arch/arm/time.c       |    2 +-
 xen/arch/arm/vgic.c       |    4 ++--
 xen/arch/arm/vtimer.c     |    4 ++--
 xen/include/asm-arm/gic.h |    2 +-
 7 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 82a1e79..e6ddedf 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -782,7 +782,7 @@ void vcpu_mark_events_pending(struct vcpu *v)
     if ( already_pending )
         return;
 
-    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq, 1);
+    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
 }
 
 /*
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index b388ef3..dbba5d3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -735,7 +735,7 @@ int gic_events_need_delivery(void)
 void gic_inject(void)
 {
     if ( vcpu_info(current, evtchn_upcall_pending) )
-        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq, 1);
+        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
     gic_restore_pending_irqs(current);
 }
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 3e326b0..5daa269 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -159,7 +159,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
         desc->arch.eoi_cpu = smp_processor_id();
 
         /* XXX: inject irq into all guest vcpus */
-        vgic_vcpu_inject_irq(d->vcpu[0], irq, 0);
+        vgic_vcpu_inject_irq(d->vcpu[0], irq);
         goto out_no_end;
     }
 
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index ba281e9..ae09c6b 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -215,7 +215,7 @@ static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
     current->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
     WRITE_SYSREG32(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
-    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq, 1);
+    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq);
 }
 
 /* Route timer's IRQ on this CPU */
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 553411d..aab490c 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -457,7 +457,7 @@ static int vgic_to_sgi(struct vcpu *v, register_t sgir)
                      sgir, vcpu_mask);
             continue;
         }
-        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq, 1);
+        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq);
     }
     return 1;
 }
@@ -685,7 +685,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
-void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq, int virtual)
+void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 {
     int idx = irq >> 2, byte = irq & 0x3;
     uint8_t priority;
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index e325f78..87be11e 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -34,14 +34,14 @@ static void phys_timer_expired(void *data)
     struct vtimer *t = data;
     t->ctl |= CNTx_CTL_PENDING;
     if ( !(t->ctl & CNTx_CTL_MASK) )
-        vgic_vcpu_inject_irq(t->v, t->irq, 1);
+        vgic_vcpu_inject_irq(t->v, t->irq);
 }
 
 static void virt_timer_expired(void *data)
 {
     struct vtimer *t = data;
     t->ctl |= CNTx_CTL_MASK;
-    vgic_vcpu_inject_irq(t->v, t->irq, 1);
+    vgic_vcpu_inject_irq(t->v, t->irq);
 }
 
 int vcpu_domain_init(struct domain *d)
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 071280b..6fce5c2 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -162,7 +162,7 @@ extern void domain_vgic_free(struct domain *d);
 
 extern int vcpu_vgic_init(struct vcpu *v);
 
-extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq,int virtual);
+extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 
-- 
1.7.10.4


* [PATCH v6 03/10] xen/arm: set GICH_HCR_UIE if all the LRs are in use
  2014-04-02 15:00 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
  2014-04-02 15:01 ` [PATCH v6 01/10] xen/arm: no need to set HCR_VI when using the vgic to inject irqs Stefano Stabellini
  2014-04-02 15:01 ` [PATCH v6 02/10] xen/arm: remove unused virtual parameter from vgic_vcpu_inject_irq Stefano Stabellini
@ 2014-04-02 15:01 ` Stefano Stabellini
  2014-04-02 15:01 ` [PATCH v6 04/10] xen/arm: support HW interrupts, do not request maintenance_interrupts Stefano Stabellini
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-02 15:01 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

On return to the guest, if there are no free LRs and we still have more
interrupts to inject, set GICH_HCR_UIE so that we receive a maintenance
interrupt when no pending interrupts are present in the LR registers.
The maintenance interrupt handler won't do anything anymore, but
receiving the interrupt causes gic_inject to be called on the next
return to the guest, which clears the old LRs and injects new
interrupts.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>

---

Changes in v5:
- introduce lr_all_full() helper.

Changes in v2:
- disable/enable the GICH_HCR_UIE bit in GICH_HCR;
- only enable GICH_HCR_UIE if this_cpu(lr_mask) == ((1 << nr_lrs) - 1).
---
 xen/arch/arm/gic.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index dbba5d3..a7b29d8 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -57,6 +57,7 @@ static DEFINE_PER_CPU(irq_desc_t[NR_LOCAL_IRQS], local_irq_desc);
 static DEFINE_PER_CPU(uint64_t, lr_mask);
 
 static unsigned nr_lrs;
+#define lr_all_full() (this_cpu(lr_mask) == ((1 << nr_lrs) - 1))
 
 /* The GIC mapping of CPU interfaces does not necessarily match the
  * logical CPU numbering. Let's use mapping as returned by the GIC
@@ -738,6 +739,11 @@ void gic_inject(void)
         vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
     gic_restore_pending_irqs(current);
+
+    if ( !list_empty(&current->arch.vgic.lr_pending) && lr_all_full() )
+        GICH[GICH_HCR] |= GICH_HCR_UIE;
+    else
+        GICH[GICH_HCR] &= ~GICH_HCR_UIE;
 }
 
 int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
-- 
1.7.10.4


* [PATCH v6 04/10] xen/arm: support HW interrupts, do not request maintenance_interrupts
  2014-04-02 15:00 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
                   ` (2 preceding siblings ...)
  2014-04-02 15:01 ` [PATCH v6 03/10] xen/arm: set GICH_HCR_UIE if all the LRs are in use Stefano Stabellini
@ 2014-04-02 15:01 ` Stefano Stabellini
  2014-04-03 10:55   ` Ian Campbell
  2014-04-02 15:01 ` [PATCH v6 05/10] xen/arm: nr_lrs should be uint8_t Stefano Stabellini
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-02 15:01 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

If the irq to be injected is a hardware irq (p->desc != NULL), set
GICH_LR_HW. Do not set GICH_LR_MAINTENANCE_IRQ.

Remove the code to EOI a physical interrupt on behalf of the guest
because it has become unnecessary.

Introduce a new function, gic_clear_lrs, that goes over the GICH_LR
registers, clears the invalid ones and frees the corresponding
interrupts from the inflight queue if appropriate. Add the interrupt to
lr_pending if GIC_IRQ_GUEST_PENDING is still set.

Call gic_clear_lrs on entry to the hypervisor to make sure that Xen's
calculation of the highest-priority interrupt currently inflight is
correct and not based on stale data.

In vgic_vcpu_inject_irq, if the target is a vcpu running on another
pcpu, we are already sending an SGI to the other pcpu so that it picks
up the new IRQ to inject.  Now also send an SGI to the other pcpu even
if the IRQ is already inflight, so that it can clear the LR
corresponding to the previous injection as well as inject the new
interrupt.
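
Roughly, the resulting entry path looks like the sketch below (a
condensed restatement of the traps.c and gic.c hunks in this patch,
with locking and the hardware irq bookkeeping omitted):

    static void enter_hypervisor_head(void)
    {
        gic_clear_lrs(current);          /* run on every trap into Xen */
    }

    void gic_clear_lrs(struct vcpu *v)
    {
        int i = 0;

        /* For every LR we handed out, if the guest is done with it,
         * gic_clear_one_lr frees the LR and either re-queues the irq on
         * lr_pending (if GIC_IRQ_GUEST_PENDING was set again in the
         * meantime) or removes it from the inflight list. */
        while ( (i = find_next_bit((const unsigned long *)&this_cpu(lr_mask),
                                   nr_lrs, i)) < nr_lrs )
            gic_clear_one_lr(v, i++);
    }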

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v6:
- remove double spin_lock on the vgic.lock introduced in v5.

Changes in v5:
- do not rename virtual_irq to irq;
- replace "const long unsigned int" with "const unsigned long";
- remove useless "& GICH_LR_PHYSICAL_MASK" in gic_set_lr;
- add a comment in maintenance_interrupt to explain its new purpose;
- introduce gic_clear_one_lr.

Changes in v4:
- merged patch #3 and #4 into a single patch.

Changes in v2:
- remove the EOI code, now unnecessary;
- do not assume physical IRQ == virtual IRQ;
- refactor gic_set_lr.
---
 xen/arch/arm/gic.c        |  137 +++++++++++++++++++++------------------------
 xen/arch/arm/traps.c      |   10 ++++
 xen/arch/arm/vgic.c       |    3 +-
 xen/include/asm-arm/gic.h |    1 +
 4 files changed, 76 insertions(+), 75 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index a7b29d8..e1165e3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -68,6 +68,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
+static void gic_clear_one_lr(struct vcpu *v, int i);
+
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
     unsigned int cpu;
@@ -626,16 +628,18 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
 static inline void gic_set_lr(int lr, struct pending_irq *p,
         unsigned int state)
 {
-    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
+    uint32_t lr_reg;
 
     BUG_ON(lr >= nr_lrs);
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
-    GICH[GICH_LR + lr] = state |
-        maintenance_int |
-        ((p->priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
+    lr_reg = state | ((p->priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
         ((p->irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+    if ( p->desc != NULL )
+        lr_reg |= GICH_LR_HW | (p->desc->irq << GICH_LR_PHYSICAL_SHIFT);
+
+    GICH[GICH_LR + lr] = lr_reg;
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -695,6 +699,56 @@ out:
     return;
 }
 
+static void gic_clear_one_lr(struct vcpu *v, int i)
+{
+    struct pending_irq *p;
+    uint32_t lr;
+    int irq;
+    bool_t inflight;
+
+    ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
+    lr = GICH[GICH_LR + i];
+    if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
+    {
+        inflight = 0;
+        GICH[GICH_LR + i] = 0;
+        clear_bit(i, &this_cpu(lr_mask));
+
+        irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+        spin_lock(&gic.lock);
+        p = irq_to_pending(v, irq);
+        if ( p->desc != NULL )
+            p->desc->status &= ~IRQ_INPROGRESS;
+        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
+                test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
+        {
+            inflight = 1;
+            gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+        }
+        spin_unlock(&gic.lock);
+        if ( !inflight )
+            list_del_init(&p->inflight);
+    }
+}
+
+void gic_clear_lrs(struct vcpu *v)
+{
+    int i = 0;
+    unsigned long flags;
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+    while ((i = find_next_bit((const unsigned long *) &this_cpu(lr_mask),
+                              nr_lrs, i)) < nr_lrs) {
+        gic_clear_one_lr(v, i);
+        i++;
+    }
+
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+}
+
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
     int i;
@@ -893,77 +947,14 @@ int gicv_setup(struct domain *d)
 
 }
 
-static void gic_irq_eoi(void *info)
-{
-    int virq = (uintptr_t) info;
-    GICC[GICC_DIR] = virq;
-}
-
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    int i = 0, virq, pirq = -1;
-    uint32_t lr;
-    struct vcpu *v = current;
-    uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
-
-    while ((i = find_next_bit((const long unsigned int *) &eisr,
-                              64, i)) < 64) {
-        struct pending_irq *p, *p2;
-        int cpu;
-        bool_t inflight;
-
-        cpu = -1;
-        inflight = 0;
-
-        spin_lock_irq(&gic.lock);
-        lr = GICH[GICH_LR + i];
-        virq = lr & GICH_LR_VIRTUAL_MASK;
-        GICH[GICH_LR + i] = 0;
-        clear_bit(i, &this_cpu(lr_mask));
-
-        p = irq_to_pending(v, virq);
-        if ( p->desc != NULL ) {
-            p->desc->status &= ~IRQ_INPROGRESS;
-            /* Assume only one pcpu needs to EOI the irq */
-            cpu = p->desc->arch.eoi_cpu;
-            pirq = p->desc->irq;
-        }
-        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
-             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
-        {
-            inflight = 1;
-            gic_add_to_lr_pending(v, p);
-        }
-
-        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
-
-        if ( !list_empty(&v->arch.vgic.lr_pending) ) {
-            p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(i, p2, GICH_LR_PENDING);
-            list_del_init(&p2->lr_queue);
-            set_bit(i, &this_cpu(lr_mask));
-        }
-        spin_unlock_irq(&gic.lock);
-
-        if ( !inflight )
-        {
-            spin_lock_irq(&v->arch.vgic.lock);
-            list_del_init(&p->inflight);
-            spin_unlock_irq(&v->arch.vgic.lock);
-        }
-
-        if ( p->desc != NULL ) {
-            /* this is not racy because we can't receive another irq of the
-             * same type until we EOI it.  */
-            if ( cpu == smp_processor_id() )
-                gic_irq_eoi((void*)(uintptr_t)pirq);
-            else
-                on_selected_cpus(cpumask_of(cpu),
-                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
-        }
-
-        i++;
-    }
+    /* 
+     * The maintenance interrupt handler doesn't do anything anymore, but
+     * receiving the interrupt is going to cause gic_inject to be called on
+     * return to guest that is going to clear the old LRs and inject new
+     * interrupts.
+     */
 }
 
 void gic_dump_info(struct vcpu *v)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 21c7b26..dd936be 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -68,6 +68,7 @@ static int debug_stack_lines = 40;
 
 integer_param("debug_stack_lines", debug_stack_lines);
 
+static void enter_hypervisor_head(void);
 
 void __cpuinit init_traps(void)
 {
@@ -1543,6 +1544,8 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
 {
     union hsr hsr = { .bits = READ_SYSREG32(ESR_EL2) };
 
+    enter_hypervisor_head();
+
     switch (hsr.ec) {
     case HSR_EC_WFI_WFE:
         if ( !check_conditional_instr(regs, hsr) )
@@ -1620,11 +1623,13 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
 
 asmlinkage void do_trap_irq(struct cpu_user_regs *regs)
 {
+    enter_hypervisor_head();
     gic_interrupt(regs, 0);
 }
 
 asmlinkage void do_trap_fiq(struct cpu_user_regs *regs)
 {
+    enter_hypervisor_head();
     gic_interrupt(regs, 1);
 }
 
@@ -1642,6 +1647,11 @@ asmlinkage void leave_hypervisor_tail(void)
     }
 }
 
+static void enter_hypervisor_head(void)
+{
+    gic_clear_lrs(current);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index aab490c..566f0ff 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -701,8 +701,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
         if ( (irq != current->domain->arch.evtchn_irq) ||
              (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
             set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
-        return;
+        goto out;
     }
 
     /* vcpu offline */
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 6fce5c2..ebb90c6 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -220,6 +220,7 @@ extern unsigned int gic_number_lines(void);
 /* IRQ translation function for the device tree */
 int gic_irq_xlate(const u32 *intspec, unsigned int intsize,
                   unsigned int *out_hwirq, unsigned int *out_type);
+void gic_clear_lrs(struct vcpu *v);
 
 #endif /* __ASSEMBLY__ */
 #endif
-- 
1.7.10.4


* [PATCH v6 05/10] xen/arm: nr_lrs should be uint8_t
  2014-04-02 15:00 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
                   ` (3 preceding siblings ...)
  2014-04-02 15:01 ` [PATCH v6 04/10] xen/arm: support HW interrupts, do not request maintenance_interrupts Stefano Stabellini
@ 2014-04-02 15:01 ` Stefano Stabellini
  2014-04-02 15:01 ` [PATCH v6 06/10] xen/arm: keep track of the GICH_LR used for the irq in struct pending_irq Stefano Stabellini
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-02 15:01 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/gic.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e1165e3..e0b9683 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -56,7 +56,7 @@ static irq_desc_t irq_desc[NR_IRQS];
 static DEFINE_PER_CPU(irq_desc_t[NR_LOCAL_IRQS], local_irq_desc);
 static DEFINE_PER_CPU(uint64_t, lr_mask);
 
-static unsigned nr_lrs;
+static uint8_t nr_lrs;
 #define lr_all_full() (this_cpu(lr_mask) == ((1 << nr_lrs) - 1))
 
 /* The GIC mapping of CPU interfaces does not necessarily match the
-- 
1.7.10.4


* [PATCH v6 06/10] xen/arm: keep track of the GICH_LR used for the irq in struct pending_irq
  2014-04-02 15:00 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
                   ` (4 preceding siblings ...)
  2014-04-02 15:01 ` [PATCH v6 05/10] xen/arm: nr_lrs should be uint8_t Stefano Stabellini
@ 2014-04-02 15:01 ` Stefano Stabellini
  2014-04-02 15:01 ` [PATCH v6 07/10] xen/arm: s/gic_set_guest_irq/gic_raise_guest_irq Stefano Stabellini
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-02 15:01 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v5:
- don't print p->lr in case the irq is in lr_pending, as it doesn't have
an LR associated with it;
- improve packing of struct pending_irq;
- #define GIC_INVALID_LR and use it instead of nr_lrs.
---
 xen/arch/arm/gic.c           |    4 +++-
 xen/include/asm-arm/domain.h |    4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e0b9683..de5eadf 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -643,6 +643,7 @@ static inline void gic_set_lr(int lr, struct pending_irq *p,
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
+    p->lr = lr;
 }
 
 static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
@@ -721,6 +722,7 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
         if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
         clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+        p->lr = GIC_INVALID_LR;
         if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
                 test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
         {
@@ -974,7 +976,7 @@ void gic_dump_info(struct vcpu *v)
 
     list_for_each_entry ( p, &v->arch.vgic.inflight_irqs, inflight )
     {
-        printk("Inflight irq=%d\n", p->irq);
+        printk("Inflight irq=%d lr=%u\n", p->irq, p->lr);
     }
 
     list_for_each_entry( p, &v->arch.vgic.lr_pending, lr_queue )
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index bc20a15..2d94d59 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -21,7 +21,6 @@ struct vgic_irq_rank {
 
 struct pending_irq
 {
-    int irq;
     /*
      * The following two states track the lifecycle of the guest irq.
      * However because we are not sure and we don't want to track
@@ -60,6 +59,9 @@ struct pending_irq
 #define GIC_IRQ_GUEST_ENABLED  2
     unsigned long status;
     struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
+    int irq;
+#define GIC_INVALID_LR         ~(uint8_t)0
+    uint8_t lr;
     uint8_t priority;
     /* inflight is used to append instances of pending_irq to
      * vgic.inflight_irqs */
-- 
1.7.10.4


* [PATCH v6 07/10] xen/arm: s/gic_set_guest_irq/gic_raise_guest_irq
  2014-04-02 15:00 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
                   ` (5 preceding siblings ...)
  2014-04-02 15:01 ` [PATCH v6 06/10] xen/arm: keep track of the GICH_LR used for the irq in struct pending_irq Stefano Stabellini
@ 2014-04-02 15:01 ` Stefano Stabellini
  2014-04-02 15:01 ` [PATCH v6 08/10] xen/arm: second irq injection while the first irq is still inflight Stefano Stabellini
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-02 15:01 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

Rename gic_set_guest_irq to gic_raise_guest_irq and remove the state
parameter.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/gic.c        |    8 ++++----
 xen/arch/arm/vgic.c       |    4 ++--
 xen/include/asm-arm/gic.h |    4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index de5eadf..40e768e 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -675,8 +675,8 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
     spin_unlock_irqrestore(&gic.lock, flags);
 }
 
-void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
-        unsigned int state, unsigned int priority)
+void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
+        unsigned int priority)
 {
     int i;
     unsigned long flags;
@@ -688,7 +688,7 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
-            gic_set_lr(i, irq_to_pending(v, virtual_irq), state);
+            gic_set_lr(i, irq_to_pending(v, virtual_irq), GICH_LR_PENDING);
             goto out;
         }
     }
@@ -727,7 +727,7 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
                 test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
         {
             inflight = 1;
-            gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+            gic_raise_guest_irq(v, irq, p->priority);
         }
         spin_unlock(&gic.lock);
         if ( !inflight )
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 566f0ff..3913cf5 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -390,7 +390,7 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         p = irq_to_pending(v, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-            gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+            gic_raise_guest_irq(v, irq, p->priority);
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
         i++;
@@ -719,7 +719,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     /* the irq is enabled */
     if ( test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
-        gic_set_guest_irq(v, irq, GICH_LR_PENDING, priority);
+        gic_raise_guest_irq(v, irq, priority);
 
     list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
     {
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index ebb90c6..5a9dc77 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -178,8 +178,8 @@ extern void gic_clear_pending_irqs(struct vcpu *v);
 extern int gic_events_need_delivery(void);
 
 extern void __cpuinit init_maintenance_interrupt(void);
-extern void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
-        unsigned int state, unsigned int priority);
+extern void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
+        unsigned int priority);
 extern void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq);
 extern int gic_route_irq_to_guest(struct domain *d,
                                   const struct dt_irq *irq,
-- 
1.7.10.4


* [PATCH v6 08/10] xen/arm: second irq injection while the first irq is still inflight
  2014-04-02 15:00 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
                   ` (6 preceding siblings ...)
  2014-04-02 15:01 ` [PATCH v6 07/10] xen/arm: s/gic_set_guest_irq/gic_raise_guest_irq Stefano Stabellini
@ 2014-04-02 15:01 ` Stefano Stabellini
  2014-04-03 11:12   ` Ian Campbell
  2014-04-02 15:01 ` [PATCH v6 09/10] xen/arm: don't protect GICH and lr_queue accesses with gic.lock Stefano Stabellini
  2014-04-02 15:01 ` [PATCH v6 10/10] xen/arm: gic_events_need_delivery and irq priorities Stefano Stabellini
  9 siblings, 1 reply; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-02 15:01 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

Set GICH_LR_PENDING in the corresponding GICH_LR to inject a second irq
while the first one is still active.
If the first irq is already pending (not active), just clear
GIC_IRQ_GUEST_PENDING because the irq has already been injected and is
already visible to the guest.
If the irq has already been EOI'ed then just clear the GICH_LR right
away and move the interrupt to lr_pending so that it is reinjected by
gic_restore_pending_irqs on return to the guest.

If the target cpu is not the current cpu, then set GIC_IRQ_GUEST_PENDING
and send an SGI. The target cpu will be interrupted and will call
gic_clear_lrs, which takes the same actions.

Unify the inflight and non-inflight code paths in vgic_vcpu_inject_irq.

Do not call vgic_vcpu_inject_irq from gic_inject if
evtchn_upcall_pending is set. If we remove that call, we don't need to
special-case evtchn_irq in vgic_vcpu_inject_irq anymore.
We also need to force the first injection of evtchn_irq (call
vgic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
is already set by common code on vcpu creation.
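
Put together, the LR handling described above boils down to the three
cases below (a simplified restatement of the gic_clear_one_lr hunk in
this patch; locking and the desc->status update are omitted):

    lr = GICH[GICH_LR + i];
    p = irq_to_pending(v, (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK);

    if ( lr & GICH_LR_ACTIVE )
    {
        /* Guest hasn't EOI'ed yet: inject the second irq by setting the
         * pending bit in the same LR.  Not for HW irqs, which cannot be
         * ACTIVE and PENDING at the same time. */
        if ( p->desc == NULL &&
             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
             test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
            GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
    }
    else if ( lr & GICH_LR_PENDING )
    {
        /* Already pending and visible to the guest: nothing to add. */
        clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
    }
    else
    {
        /* EOI'ed: free the LR; the irq is re-queued on lr_pending if
         * GIC_IRQ_GUEST_PENDING is still set (see the full hunk). */
        GICH[GICH_LR + i] = 0;
        clear_bit(i, &this_cpu(lr_mask));
    }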

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v3:
- do not use the PENDING and ACTIVE state for HW interrupts;
- unify the inflight and non-inflight code paths in
vgic_vcpu_inject_irq.
---
 xen/arch/arm/gic.c  |   37 ++++++++++++++++++++++++-------------
 xen/arch/arm/vgic.c |   33 +++++++++++++++++----------------
 2 files changed, 41 insertions(+), 29 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 40e768e..218e3c6 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -680,6 +680,14 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
 {
     int i;
     unsigned long flags;
+    struct pending_irq *n = irq_to_pending(v, virtual_irq);
+
+    if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status))
+    {
+        if ( v == current )
+            gic_clear_one_lr(v, n->lr);
+        return;
+    }
 
     spin_lock_irqsave(&gic.lock, flags);
 
@@ -705,20 +713,27 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
     struct pending_irq *p;
     uint32_t lr;
     int irq;
-    bool_t inflight;
 
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
 
     lr = GICH[GICH_LR + i];
-    if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
+    irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+    p = irq_to_pending(v, irq);
+    if ( lr & GICH_LR_ACTIVE )
     {
-        inflight = 0;
+        /* HW interrupts cannot be ACTIVE and PENDING */
+        if ( p->desc == NULL &&
+             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
+             test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
+            GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
+    } else if ( lr & GICH_LR_PENDING ) {
+        clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
+    } else {
+        spin_lock(&gic.lock);
+
         GICH[GICH_LR + i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
-        irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
-        spin_lock(&gic.lock);
-        p = irq_to_pending(v, irq);
         if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
         clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
@@ -726,12 +741,11 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
         if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
                 test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
         {
-            inflight = 1;
             gic_raise_guest_irq(v, irq, p->priority);
-        }
-        spin_unlock(&gic.lock);
-        if ( !inflight )
+        } else
             list_del_init(&p->inflight);
+
+        spin_unlock(&gic.lock);
     }
 }
 
@@ -791,9 +805,6 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
-    if ( vcpu_info(current, evtchn_upcall_pending) )
-        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
-
     gic_restore_pending_irqs(current);
 
     if ( !list_empty(&current->arch.vgic.lr_pending) && lr_all_full() )
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 3913cf5..dc3a75f 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -389,7 +389,11 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+        if ( irq == v->domain->arch.evtchn_irq &&
+             vcpu_info(current, evtchn_upcall_pending) &&
+             list_empty(&p->inflight) )
+            vgic_vcpu_inject_irq(v, irq);
+        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
             gic_raise_guest_irq(v, irq, p->priority);
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
@@ -696,14 +700,6 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
-    if ( !list_empty(&n->inflight) )
-    {
-        if ( (irq != current->domain->arch.evtchn_irq) ||
-             (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
-            set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        goto out;
-    }
-
     /* vcpu offline */
     if ( test_bit(_VPF_down, &v->pause_flags) )
     {
@@ -715,21 +711,26 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     n->irq = irq;
     set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-    n->priority = priority;
 
     /* the irq is enabled */
     if ( test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
         gic_raise_guest_irq(v, irq, priority);
 
-    list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
+    if ( list_empty(&n->inflight) )
     {
-        if ( iter->priority > priority )
+        n->priority = priority;
+        list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
         {
-            list_add_tail(&n->inflight, &iter->inflight);
-            goto out;
+            if ( iter->priority > priority )
+            {
+                list_add_tail(&n->inflight, &iter->inflight);
+                goto out;
+            }
         }
-    }
-    list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
+        list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
+    } else if ( n->priority != priority )
+        gdprintk(XENLOG_WARNING, "Changing priority of an inflight interrupt is not supported");
+
 out:
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     /* we have a new higher priority irq, inject it into the guest */
-- 
1.7.10.4


* [PATCH v6 09/10] xen/arm: don't protect GICH and lr_queue accesses with gic.lock
  2014-04-02 15:00 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
                   ` (7 preceding siblings ...)
  2014-04-02 15:01 ` [PATCH v6 08/10] xen/arm: second irq injection while the first irq is still inflight Stefano Stabellini
@ 2014-04-02 15:01 ` Stefano Stabellini
  2014-04-03 11:14   ` Ian Campbell
  2014-04-02 15:01 ` [PATCH v6 10/10] xen/arm: gic_events_need_delivery and irq priorities Stefano Stabellini
  9 siblings, 1 reply; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-02 15:01 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

GICH is banked; protect accesses to it by disabling interrupts.
Protect lr_queue accesses with the vgic.lock only.
gic.lock now only protects accesses to GICD.
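
A minimal summary of the resulting locking rules, matching the ASSERTs
added below (see the hunks for the exact call sites):

    /* GICD accesses:        take gic.lock.
     * GICH accesses:        run with interrupts disabled (GICH is banked
     *                       per pcpu, so that is sufficient).
     * lr_pending/lr_queue:  take the per-vcpu vgic.lock. */
    ASSERT(!local_irq_is_enabled());             /* before touching GICH */
    ASSERT(spin_is_locked(&v->arch.vgic.lock));  /* before touching lr_queue */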

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v5:
- gic_remove_from_queues needs to be protected with a vgic lock;
- introduce ASSERTs to check the vgic is locked and interrupts are
disabled.

Changes in v4:
- improved in-code comments.
---
 xen/arch/arm/gic.c           |   35 ++++++++++++++++-------------------
 xen/arch/arm/vgic.c          |    9 +++++++--
 xen/include/asm-arm/domain.h |    5 ++++-
 3 files changed, 27 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 218e3c6..611990d 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -119,6 +119,7 @@ void gic_save_state(struct vcpu *v)
 void gic_restore_state(struct vcpu *v)
 {
     int i;
+    ASSERT(!local_irq_is_enabled());
 
     if ( is_idle_vcpu(v) )
         return;
@@ -630,6 +631,7 @@ static inline void gic_set_lr(int lr, struct pending_irq *p,
 {
     uint32_t lr_reg;
 
+    ASSERT(!local_irq_is_enabled());
     BUG_ON(lr >= nr_lrs);
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
@@ -650,6 +652,8 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
 {
     struct pending_irq *iter;
 
+    ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
     if ( !list_empty(&n->lr_queue) )
         return;
 
@@ -669,19 +673,20 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
     struct pending_irq *p = irq_to_pending(v, virtual_irq);
     unsigned long flags;
 
-    spin_lock_irqsave(&gic.lock, flags);
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
     if ( !list_empty(&p->lr_queue) )
         list_del_init(&p->lr_queue);
-    spin_unlock_irqrestore(&gic.lock, flags);
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
 void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         unsigned int priority)
 {
     int i;
-    unsigned long flags;
     struct pending_irq *n = irq_to_pending(v, virtual_irq);
 
+    ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
     if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status))
     {
         if ( v == current )
@@ -689,23 +694,17 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         return;
     }
 
-    spin_lock_irqsave(&gic.lock, flags);
-
     if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
     {
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
             gic_set_lr(i, irq_to_pending(v, virtual_irq), GICH_LR_PENDING);
-            goto out;
+            return;
         }
     }
 
     gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
-
-out:
-    spin_unlock_irqrestore(&gic.lock, flags);
-    return;
 }
 
 static void gic_clear_one_lr(struct vcpu *v, int i)
@@ -715,6 +714,7 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
     int irq;
 
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
+    ASSERT(!local_irq_is_enabled());
 
     lr = GICH[GICH_LR + i];
     irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
@@ -729,8 +729,6 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
     } else if ( lr & GICH_LR_PENDING ) {
         clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
     } else {
-        spin_lock(&gic.lock);
-
         GICH[GICH_LR + i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
@@ -744,8 +742,6 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
             gic_raise_guest_irq(v, irq, p->priority);
         } else
             list_del_init(&p->inflight);
-
-        spin_unlock(&gic.lock);
     }
 }
 
@@ -776,11 +772,11 @@ static void gic_restore_pending_irqs(struct vcpu *v)
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if ( i >= nr_lrs ) return;
 
-        spin_lock_irqsave(&gic.lock, flags);
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
         gic_set_lr(i, p, GICH_LR_PENDING);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
-        spin_unlock_irqrestore(&gic.lock, flags);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     }
 
 }
@@ -788,13 +784,12 @@ static void gic_restore_pending_irqs(struct vcpu *v)
 void gic_clear_pending_irqs(struct vcpu *v)
 {
     struct pending_irq *p, *t;
-    unsigned long flags;
 
-    spin_lock_irqsave(&gic.lock, flags);
+    ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
     v->arch.lr_mask = 0;
     list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
         list_del_init(&p->lr_queue);
-    spin_unlock_irqrestore(&gic.lock, flags);
 }
 
 int gic_events_need_delivery(void)
@@ -805,6 +800,8 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
+    ASSERT(!local_irq_is_enabled());
+
     gic_restore_pending_irqs(current);
 
     if ( !list_empty(&current->arch.vgic.lr_pending) && lr_all_full() )
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index dc3a75f..bd15be7 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -393,8 +393,13 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
              vcpu_info(current, evtchn_upcall_pending) &&
              list_empty(&p->inflight) )
             vgic_vcpu_inject_irq(v, irq);
-        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-            gic_raise_guest_irq(v, irq, p->priority);
+        else {
+            unsigned long flags;
+            spin_lock_irqsave(&v->arch.vgic.lock, flags);
+            if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+                gic_raise_guest_irq(v, irq, p->priority);
+            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        }
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
         i++;
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 2d94d59..dcbeba1 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -67,7 +67,10 @@ struct pending_irq
      * vgic.inflight_irqs */
     struct list_head inflight;
     /* lr_queue is used to append instances of pending_irq to
-     * gic.lr_pending */
+     * lr_pending. lr_pending is a per vcpu queue, therefore lr_queue
+     * accesses are protected with the vgic lock.
+     * TODO: when implementing irq migration, taking only the current
+     * vgic lock is not going to be enough. */
     struct list_head lr_queue;
 };
 
-- 
1.7.10.4


* [PATCH v6 10/10] xen/arm: gic_events_need_delivery and irq priorities
  2014-04-02 15:00 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
                   ` (8 preceding siblings ...)
  2014-04-02 15:01 ` [PATCH v6 09/10] xen/arm: don't protect GICH and lr_queue accesses with gic.lock Stefano Stabellini
@ 2014-04-02 15:01 ` Stefano Stabellini
  2014-04-03 11:37   ` Ian Campbell
  9 siblings, 1 reply; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-02 15:01 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

gic_events_need_delivery should only return positive if an outstanding
pending irq has a higher priority than the currently active irq and the
priority mask.
Rewrite the function by going through the priority-ordered inflight and
lr_queue lists.

In gic_restore_pending_irqs replace lower-priority pending (and not
active) irqs in GICH_LRs with higher-priority irqs if no more GICH_LRs
are available.
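
The decision the rewritten gic_events_need_delivery makes can be
summarised as the check below (max_priority, active_priority and
mask_priority are computed by walking inflight_irqs as in the hunk;
remember that in the GIC a numerically lower priority value means a
higher priority):

    /* Deliver only if the best enabled, still-pending irq beats both the
     * best currently-active irq and the guest's priority mask.  GICH_VMCR
     * holds a 5-bit mask, hence the >> 3 on the 8-bit priority. */
    if ( max_priority < active_priority &&
         (max_priority >> 3) < mask_priority )
        return 1;
    else
        return 0;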

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---
Changes in v5:
- improve in-code comments;
- use list_for_each_entry_reverse instead of writing my own list walker.

Changes in v4:
- in gic_events_need_delivery go through inflight_irqs and only consider
enabled irqs.
---
 xen/arch/arm/gic.c           |   77 ++++++++++++++++++++++++++++++++++++++----
 xen/include/asm-arm/domain.h |    5 +--
 xen/include/asm-arm/gic.h    |    3 ++
 3 files changed, 76 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 611990d..7faa0e9 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -721,6 +721,7 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
     p = irq_to_pending(v, irq);
     if ( lr & GICH_LR_ACTIVE )
     {
+        set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
         /* HW interrupts cannot be ACTIVE and PENDING */
         if ( p->desc == NULL &&
              test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
@@ -735,6 +736,7 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
         if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
         clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+        clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
         p->lr = GIC_INVALID_LR;
         if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
                 test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
@@ -763,22 +765,51 @@ void gic_clear_lrs(struct vcpu *v)
 
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
-    int i;
-    struct pending_irq *p, *t;
+    int i = 0, lrs = nr_lrs;
+    struct pending_irq *p, *t, *p_r;
     unsigned long flags;
 
+    if ( list_empty(&v->arch.vgic.lr_pending) )
+        return;
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
     list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
     {
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
-        if ( i >= nr_lrs ) return;
+        if ( i >= nr_lrs )
+        {
+            list_for_each_entry_reverse( p_r,
+                                         &v->arch.vgic.inflight_irqs,
+                                         inflight )
+            {
+                if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) &&
+                     !test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
+                    goto found;
+                if ( &p_r->inflight == p->inflight.next )
+                    goto out;
+            }
+            goto out;
+
+found:
+            i = p_r->lr;
+            p_r->lr = GIC_INVALID_LR;
+            set_bit(GIC_IRQ_GUEST_PENDING, &p_r->status);
+            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
+            gic_add_to_lr_pending(v, p_r);
+        }
 
-        spin_lock_irqsave(&v->arch.vgic.lock, flags);
         gic_set_lr(i, p, GICH_LR_PENDING);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+
+        lrs--;
+        if ( lrs == 0 )
+            break;
     }
 
+out:
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
 void gic_clear_pending_irqs(struct vcpu *v)
@@ -794,8 +825,40 @@ void gic_clear_pending_irqs(struct vcpu *v)
 
 int gic_events_need_delivery(void)
 {
-    return (!list_empty(&current->arch.vgic.lr_pending) ||
-            this_cpu(lr_mask));
+    int mask_priority, lrs = nr_lrs;
+    int max_priority = 0xff, active_priority = 0xff;
+    struct vcpu *v = current;
+    struct pending_irq *p;
+    unsigned long flags;
+
+    mask_priority = (GICH[GICH_VMCR] >> GICH_VMCR_PRIORITY_SHIFT) & GICH_VMCR_PRIORITY_MASK;
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+    /* TODO: We order the guest irqs by priority, but we don't change
+     * the priority of host irqs. */
+    list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight )
+    {
+        if ( test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) )
+        {
+            if ( p->priority < active_priority )
+                active_priority = p->priority;
+        } else if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) ) {
+            if ( p->priority < max_priority )
+                max_priority = p->priority;
+        }
+        lrs--;
+        if ( lrs == 0 )
+            break;
+    }
+
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+
+    if ( max_priority < active_priority &&
+         (max_priority >> 3) < mask_priority )
+        return 1;
+    else
+        return 0;
 }
 
 void gic_inject(void)
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index dcbeba1..696f36c 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -55,8 +55,9 @@ struct pending_irq
      *
      */
 #define GIC_IRQ_GUEST_PENDING  0
-#define GIC_IRQ_GUEST_VISIBLE  1
-#define GIC_IRQ_GUEST_ENABLED  2
+#define GIC_IRQ_GUEST_ACTIVE   1
+#define GIC_IRQ_GUEST_VISIBLE  2
+#define GIC_IRQ_GUEST_ENABLED  3
     unsigned long status;
     struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
     int irq;
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 5a9dc77..5d8f7f1 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -129,6 +129,9 @@
 #define GICH_LR_CPUID_SHIFT     9
 #define GICH_VTR_NRLRGS         0x3f
 
+#define GICH_VMCR_PRIORITY_MASK   0x1f
+#define GICH_VMCR_PRIORITY_SHIFT  27
+
 /*
  * The minimum GICC_BPR is required to be in the range 0-3. We set
  * GICC_BPR to 0 but we must expect that it might be 3. This means we
-- 
1.7.10.4


* Re: [PATCH v6 04/10] xen/arm: support HW interrupts, do not request maintenance_interrupts
  2014-04-02 15:01 ` [PATCH v6 04/10] xen/arm: support HW interrupts, do not request maintenance_interrupts Stefano Stabellini
@ 2014-04-03 10:55   ` Ian Campbell
  0 siblings, 0 replies; 20+ messages in thread
From: Ian Campbell @ 2014-04-03 10:55 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Wed, 2014-04-02 at 16:01 +0100, Stefano Stabellini wrote:
> If the irq to be injected is an hardware irq (p->desc != NULL), set
> GICH_LR_HW. Do not set GICH_LR_MAINTENANCE_IRQ.
> 
> Remove the code to EOI a physical interrupt on behalf of the guest
> because it has become unnecessary.
> 
> Introduce a new function, gic_clear_lrs, that goes over the GICH_LR
> registers, clear the invalid ones and free the corresponding interrupts
> from the inflight queue if appropriate. Add the interrupt to lr_pending
> if the GIC_IRQ_GUEST_PENDING is still set.
> 
> Call gic_clear_lrs on entry to the hypervisor to make sure that the
> calculation in Xen of the highest priority interrupt currently inflight
> is correct and accurate and not based on stale data.
> 
> In vgic_vcpu_inject_irq, if the target is a vcpu running on another
> pcpu, we are already sending an SGI to the other pcpu so that it would
> pick up the new IRQ to inject.  Now also send an SGI to the other pcpu
> even if the IRQ is already inflight, so that it can clear the LR
> corresponding to the previous injection as well as injecting the new
> interrupt.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> ---
> 
> Changes in v6:

You hadn't seen my comments when I asked you to resend this, so you
haven't addressed them, but that is fair enough.

Ian.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v6 08/10] xen/arm: second irq injection while the first irq is still inflight
  2014-04-02 15:01 ` [PATCH v6 08/10] xen/arm: second irq injection while the first irq is still inflight Stefano Stabellini
@ 2014-04-03 11:12   ` Ian Campbell
  2014-04-06 16:46     ` Stefano Stabellini
  0 siblings, 1 reply; 20+ messages in thread
From: Ian Campbell @ 2014-04-03 11:12 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Wed, 2014-04-02 at 16:01 +0100, Stefano Stabellini wrote:
(picking up where I left off on v5 when I realised there had been a v6)


> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 40e768e..218e3c6 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -680,6 +680,14 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>  {
>      int i;
>      unsigned long flags;
> +    struct pending_irq *n = irq_to_pending(v, virtual_irq);
> +
> +    if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status))
> +    {
> +        if ( v == current )
> +            gic_clear_one_lr(v, n->lr);

Maybe this is the same confusion I had with v5 in a different hat, but
why?

If the IRQ is visible to the guest (present in an LR, either active or
pending) and a new one comes along why would we clear the LR?

Is it so that we can reinsert it again right after? If so please could
you add a comment saying that.

By doing things this way don't we lose the active bit in the LR if it was
set?

> +        list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
> +    } else if ( n->priority != priority )
> +        gdprintk(XENLOG_WARNING, "Changing priority of an inflight interrupt is not supported");

Would it be OK to silently ignore this? Changing the priority of an
interrupt which isn't masked is going to be racy even on real hardware,
i.e. you might receive one more at the old priority. I haven't checked
what the GIC docs say about this -- it could well be unpredictable or
something...

Ian.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v6 09/10] xen/arm: don't protect GICH and lr_queue accesses with gic.lock
  2014-04-02 15:01 ` [PATCH v6 09/10] xen/arm: don't protect GICH and lr_queue accesses with gic.lock Stefano Stabellini
@ 2014-04-03 11:14   ` Ian Campbell
  0 siblings, 0 replies; 20+ messages in thread
From: Ian Campbell @ 2014-04-03 11:14 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Wed, 2014-04-02 at 16:01 +0100, Stefano Stabellini wrote:
> GICH is banked, protect accesses by disabling interrupts.
> Protect lr_queue accesses with the vgic.lock only.
> gic.lock only protects accesses to GICD now.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v6 10/10] xen/arm: gic_events_need_delivery and irq priorities
  2014-04-02 15:01 ` [PATCH v6 10/10] xen/arm: gic_events_need_delivery and irq priorities Stefano Stabellini
@ 2014-04-03 11:37   ` Ian Campbell
  2014-04-06 19:27     ` Stefano Stabellini
  0 siblings, 1 reply; 20+ messages in thread
From: Ian Campbell @ 2014-04-03 11:37 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Wed, 2014-04-02 at 16:01 +0100, Stefano Stabellini wrote:
> gic_events_need_delivery should only return positive if an outstanding
> pending irq has an higher priority than the currently active irq and the
> priority mask.
> Rewrite the function by going through the priority ordered inflight and
> lr_queue lists.
> 
> In gic_restore_pending_irqs replace lower priority pending (and not
> active) irqs in GICH_LRs with higher priority irqs if no more GICH_LRs
> are available.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> ---
> Changes in v5:
> - improve in code comments;
> - use list_for_each_entry_reverse instead of writing my own list walker.
> 
> Changes in v4:
> - in gic_events_need_delivery go through inflight_irqs and only consider
> enabled irqs.
> ---
>  xen/arch/arm/gic.c           |   77 ++++++++++++++++++++++++++++++++++++++----
>  xen/include/asm-arm/domain.h |    5 +--
>  xen/include/asm-arm/gic.h    |    3 ++
>  3 files changed, 76 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 611990d..7faa0e9 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -721,6 +721,7 @@ static void gic_clear_one_lr(struct vcpu *v, int i)

Looking at the complete function, a better name would include "try"
somewhere, since it doesn't unconditionally clear things. In fact it's
almost more of a "sync the h/w and s/w state" function, is it?

>      p = irq_to_pending(v, irq);
>      if ( lr & GICH_LR_ACTIVE )
>      {
> +        set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);

This appears to be unmotivated by the commit message. Is this just a
case of needing to more carefully manage the software ACTIVE state to
match the h/w?

>          /* HW interrupts cannot be ACTIVE and PENDING */
>          if ( p->desc == NULL &&
>               test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&


> @@ -735,6 +736,7 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
>          if ( p->desc != NULL )
>              p->desc->status &= ~IRQ_INPROGRESS;
>          clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> +        clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);

Same as above I guess?

>          p->lr = GIC_INVALID_LR;
>          if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
>                  test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
> @@ -763,22 +765,51 @@ void gic_clear_lrs(struct vcpu *v)
>  
>  static void gic_restore_pending_irqs(struct vcpu *v)
>  {
> -    int i;
> -    struct pending_irq *p, *t;
> +    int i = 0, lrs = nr_lrs;
> +    struct pending_irq *p, *t, *p_r;
>      unsigned long flags;
>  
> +    if ( list_empty(&v->arch.vgic.lr_pending) )
> +        return;

This is unlocked; according to the previous patch, access to lr_pending
is supposed to be protected by vgic.lock.

If this lock free check is safe for some reason then please add a
comment.

> +
> +    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +
>      list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
>      {
>          i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
> -        if ( i >= nr_lrs ) return;
> +        if ( i >= nr_lrs )

This is where we find the LRs are full and we look for a lower priority
interrupt to evict from the LRs? Isn't this check ~= lr_all_full, which
would be more obvious, and avoid a find_first_zero_bit in the case where
things are full.

Also a comment explaining that we are looking to evict a lower priority
interrupt would be useful. Do we not need to check for LRs which can
simply be cleared? I suppose that must happen elsewhere.

> +        {
> +            list_for_each_entry_reverse( p_r,
> +                                         &v->arch.vgic.inflight_irqs,
> +                                         inflight )
> +            {
> +                if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) &&
> +                     !test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
> +                    goto found;
> +                if ( &p_r->inflight == p->inflight.next )
> +                    goto out;

Can't we stop the search as soon as we find a higher priority interrupt
than the one we are trying to inject?

Is this nested loop O(n^2) in the number of inflight interrupts?

Is there a possible algorithm which involves picking the head from
whichever of inflight_irqs or pending_irqs has the highest priority,
until the LRs are full (or the lists are empty) and then putting the
remainder of pending back into the inflight list?

Or perhaps because we know that the two lists are sorted we can avoid a
complete search of inflight_irqs on every outer loop by remembering how
far we got last time and restarting there? i.e. if you gave up on an
interrupt with priority N+1 last time then there isn't much point in
checking for an interrupt with priority N this time around.

I'm also unsure why this function both uses find_first_zero_bit and
counts the lrs it has unassigned in the "lrs" variable. Is one or the
other not sufficient?

> +            }
> +            goto out;
> +
> +found:
> +            i = p_r->lr;
> +            p_r->lr = GIC_INVALID_LR;
> +            set_bit(GIC_IRQ_GUEST_PENDING, &p_r->status);
> +            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
> +            gic_add_to_lr_pending(v, p_r);
> +        }
>  
> -        spin_lock_irqsave(&v->arch.vgic.lock, flags);
>          gic_set_lr(i, p, GICH_LR_PENDING);
>          list_del_init(&p->lr_queue);
>          set_bit(i, &this_cpu(lr_mask));
> -        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +
> +        lrs--;
> +        if ( lrs == 0 )
> +            break;
>      }
>  
> +out:
> +    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>  }
>  
>  void gic_clear_pending_irqs(struct vcpu *v)
> @@ -794,8 +825,40 @@ void gic_clear_pending_irqs(struct vcpu *v)
>  
>  int gic_events_need_delivery(void)
>  {
> -    return (!list_empty(&current->arch.vgic.lr_pending) ||
> -            this_cpu(lr_mask));
> +    int mask_priority, lrs = nr_lrs;
> +    int max_priority = 0xff, active_priority = 0xff;
> +    struct vcpu *v = current;
> +    struct pending_irq *p;
> +    unsigned long flags;
> +
> +    mask_priority = (GICH[GICH_VMCR] >> GICH_VMCR_PRIORITY_SHIFT) & GICH_VMCR_PRIORITY_MASK;
> +
> +    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +
> +    /* TODO: We order the guest irqs by priority, but we don't change
> +     * the priority of host irqs. */
> +    list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight )
> +    {
> +        if ( test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) )
> +        {
> +            if ( p->priority < active_priority )
> +                active_priority = p->priority;
> +        } else if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) ) {
> +            if ( p->priority < max_priority )
> +                max_priority = p->priority;
> +        }

Can't you do the comparison against mask_priority in this loop and
simply return true as soon as you find an interrupt which needs an
upcall?

Can't you also stop searching when p->priority > mask_priority? Since
inflight_irqs is ordered you know all the rest of the list has lower
(numerically higher) priority.

> +        lrs--;
> +        if ( lrs == 0 )
> +            break;
> +    }
> +
> +    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +
> +    if ( max_priority < active_priority &&
> +         (max_priority >> 3) < mask_priority )

Why >> 3? Something to do with our emulation or the BPR?

> +        return 1;
> +    else
> +        return 0;
>  }

Ian.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v6 08/10] xen/arm: second irq injection while the first irq is still inflight
  2014-04-03 11:12   ` Ian Campbell
@ 2014-04-06 16:46     ` Stefano Stabellini
  0 siblings, 0 replies; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-06 16:46 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, Stefano Stabellini

On Thu, 3 Apr 2014, Ian Campbell wrote:
> On Wed, 2014-04-02 at 16:01 +0100, Stefano Stabellini wrote:
> (picking up where I left on v5 when I realised there had been a v6)
> 
> 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index 40e768e..218e3c6 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -680,6 +680,14 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
> >  {
> >      int i;
> >      unsigned long flags;
> > +    struct pending_irq *n = irq_to_pending(v, virtual_irq);
> > +
> > +    if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status))
> > +    {
> > +        if ( v == current )
> > +            gic_clear_one_lr(v, n->lr);
> 
> Maybe this is the same confusion I had with v5 in a different hat, but
> why?
> 
> If the IRQ is visible to the guest (present in an LR, either active or
> pending) and a new one comes along why would we clear the LR?

Although gic_clear_one_lr is mostly about clearing an LR once the guest
irq handling is completed (hence the name), it is smart enough to avoid
clearing the LR if the irq is still GICH_LR_ACTIVE. It is also able
to inject a second irq while the first is already GICH_LR_ACTIVE (by
setting both GICH_LR_ACTIVE and GICH_LR_PENDING).

By calling gic_clear_one_lr here, we make sure that:

- we clear the LR of the old guest irq if it has been already EOIed
- we inject the new irq in a new LR if old guest irq has already been
  EOIed and cleared
- we inject the new irq while the old one is still GICH_LR_ACTIVE by
  setting both GICH_LR_ACTIVE and GICH_LR_PENDING
- we clear GIC_IRQ_GUEST_PENDING (no further injections) if the old irq
  is still GICH_LR_PENDING

In summary we do everything we can to handle the guest irq as soon as
possible.
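
To see why, here is the decision logic of gic_clear_one_lr in condensed
form (software-interrupt case only; the locking, desc handling and HW-irq
special case from the actual patch are left out):

    lr = GICH[GICH_LR + i];
    if ( lr & GICH_LR_ACTIVE )
    {
        /* old irq still active in the LR: stack PENDING on top of ACTIVE
         * so the guest takes the new irq as soon as it EOIs the old one */
        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
             test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
            GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
    }
    else if ( lr & GICH_LR_PENDING )
    {
        /* old irq not yet taken by the guest: nothing further to inject */
        clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
    }
    else
    {
        /* old irq already EOIed: free the LR, requeue if a new edge arrived */
        GICH[GICH_LR + i] = 0;
        clear_bit(i, &this_cpu(lr_mask));
        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
            gic_raise_guest_irq(v, irq, p->priority);
        else
            list_del_init(&p->inflight);
    }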


> Is it so that we can reinsert it again right after? If so please could
> you add a comment saying that.
> 
> By doing thing this way don't we lose the active bit in the LR if it was
> set?
> 
> > +        list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
> > +    } else if ( n->priority != priority )
> > +        gdprintk(XENLOG_WARNING, "Changing priority of an inflight interrupt is not supported");
> 
> Would it be OK to silently ignore this? Changing the priority of an
> interrupt which isn't masked is going to be racy even on real hardware,
> i.e. you might receive one more at the old priority. I haven't checked
> what the GIC docs say about this -- it could well be unpredicatable or
> something...

I agree, I'll remove the gdprintk.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v6 10/10] xen/arm: gic_events_need_delivery and irq priorities
  2014-04-03 11:37   ` Ian Campbell
@ 2014-04-06 19:27     ` Stefano Stabellini
  2014-04-07  9:43       ` Ian Campbell
  0 siblings, 1 reply; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-06 19:27 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, Stefano Stabellini

On Thu, 3 Apr 2014, Ian Campbell wrote:
> On Wed, 2014-04-02 at 16:01 +0100, Stefano Stabellini wrote:
> > gic_events_need_delivery should only return positive if an outstanding
> > pending irq has an higher priority than the currently active irq and the
> > priority mask.
> > Rewrite the function by going through the priority ordered inflight and
> > lr_queue lists.
> > 
> > In gic_restore_pending_irqs replace lower priority pending (and not
> > active) irqs in GICH_LRs with higher priority irqs if no more GICH_LRs
> > are available.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> > ---
> > Changes in v5:
> > - improve in code comments;
> > - use list_for_each_entry_reverse instead of writing my own list walker.
> > 
> > Changes in v4:
> > - in gic_events_need_delivery go through inflight_irqs and only consider
> > enabled irqs.
> > ---
> >  xen/arch/arm/gic.c           |   77 ++++++++++++++++++++++++++++++++++++++----
> >  xen/include/asm-arm/domain.h |    5 +--
> >  xen/include/asm-arm/gic.h    |    3 ++
> >  3 files changed, 76 insertions(+), 9 deletions(-)
> > 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index 611990d..7faa0e9 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -721,6 +721,7 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
> 
> Looking at the complete function, a better name would include "try"
> somewhere, since it doesn't unconditionally clear things. In fact it's
> almost more of a "sync the h/w and s/w state" function, is it?

Yes, that is true. I'll rename it to gic_update_one_lr.


> >      p = irq_to_pending(v, irq);
> >      if ( lr & GICH_LR_ACTIVE )
> >      {
> > +        set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
> 
> This appears to be unmotivated by the commit message. Is this just a
> case of needing to more carefully manage the software ACTIVE state to
> match the h/w?

The new GIC_IRQ_GUEST_ACTIVE bit is needed to track which one (if any)
is the currently active irq. We need the information to calculate
priorities in gic_events_need_delivery.

I'll add a note to the commit message to clarify why we are introducing
it.


> >          /* HW interrupts cannot be ACTIVE and PENDING */
> >          if ( p->desc == NULL &&
> >               test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
> 
> 
> > @@ -735,6 +736,7 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
> >          if ( p->desc != NULL )
> >              p->desc->status &= ~IRQ_INPROGRESS;
> >          clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> > +        clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
> 
> Same as above I guess?
> 
> >          p->lr = GIC_INVALID_LR;
> >          if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
> >                  test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
> > @@ -763,22 +765,51 @@ void gic_clear_lrs(struct vcpu *v)
> >  
> >  static void gic_restore_pending_irqs(struct vcpu *v)
> >  {
> > -    int i;
> > -    struct pending_irq *p, *t;
> > +    int i = 0, lrs = nr_lrs;
> > +    struct pending_irq *p, *t, *p_r;
> >      unsigned long flags;
> >  
> > +    if ( list_empty(&v->arch.vgic.lr_pending) )
> > +        return;
> 
> This is unlocked, according to the previous patch access to lr_pending
> is supposed to be protected by vgic.lock.

Well spotted, I'll fix it.


> If this lock free check is safe for some reason then please add a
> comment.
> 
> > +
> > +    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > +
> >      list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
> >      {
> >          i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
> > -        if ( i >= nr_lrs ) return;
> > +        if ( i >= nr_lrs )
> 
> This where we find the LRs are full and we look for a lower priority
> interrupt to evict from the LRs?

Yep.


> Isn't this check ~= lr_all_full, which
> would be more obvious, and avoid a find_first_zero_bit in the case where
> things are full.

Consider that we are inside a list_for_each_entry_safe loop, adding
lr_pending irqs to GICH_LR registers one by one: we need the
find_first_zero_bit.



> Also a comment explaining that we are looking to evict a lower priority
> interrupt would be useful.

Good point, I'll add a comment.


> Do we not need to check for LRs which can
> simply be cleared? I suppose that must happen elsewhere.

Yes, it happens on entry to the hypervisor.


> > +        {
> > +            list_for_each_entry_reverse( p_r,
> > +                                         &v->arch.vgic.inflight_irqs,
> > +                                         inflight )
> > +            {
> > +                if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) &&
> > +                     !test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
> > +                    goto found;
> > +                if ( &p_r->inflight == p->inflight.next )
> > +                    goto out;
> 
> Can't we stop the search as soon as we find a higher priority interrupt
> than the one we are trying to inject?

Actually we are already doing that. Currently we stop when we find a
suitable lower priority irq, or when the next irq in inflight_irqs is
the same one that we are already processing from lr_pending. Given that both
lr_pending and inflight_irqs are ordered by priority (a new irq is
inserted between the last irq with the same priority and the first with
lower priority) and that we are walking inflight_irqs in reverse, it is
the same as you are suggesting.  We are stopping when the next in line
is the irq we are already evaluating from lr_pending, therefore we know
for sure that all the following irqs in inflight_irqs have the same or
higher priority.


> Is this nested loop O(n^2) in the number of inflight interrupts?

We only run the outer loop nr_lrs times (see the lrs == 0 check).
Also:

- the lists are ordered by priority and we exploit that
- lr_pending is a subset of inflight
- we stop as soon as we exhaust the lower priority irqs that we can evict
- nr_lrs is very limited and we can at most do nr_lrs substitutions

In my tests, thanks to the list ordering, it actually takes only one step
of the inner loop to find an irq to evict, and the case where we need to
do any evicting at all is very rare.
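
To make the termination condition explicit, the inner walk amounts to
(same code as quoted above, just annotated):

    /* walking in reverse, so the lowest priority entries come first */
    list_for_each_entry_reverse( p_r, &v->arch.vgic.inflight_irqs, inflight )
    {
        if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) &&
             !test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
            goto found;  /* pending-only irq sitting in an LR: evict it */
        if ( &p_r->inflight == p->inflight.next )
            goto out;    /* p itself is next in the reverse walk: everything
                          * left has the same or higher priority than p */
    }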


> Is there a possible algorithm which involves picking the head from
> whichever of inflight_irqs or pending_irqs has the highest priority,
> until the LRs are full (or the lists are empty) and then putting the
> remainder of pending back into the inflight list?
> 
> Or perhaps because we know that the two lists are sorted we can avoid a
> complete search of inflight_irqs on every outer loop by remembering how
> far we got last time and restarting there? i.e. if you gave up on an
> interrupt with priority N+1 last time then there isn't much point in
> checking for an interrupt with priority N this time around.

This is exactly what my previous version of the patch did:

http://marc.info/?l=xen-devel&m=139523245301114

but you asked me to change it :-)


> I'm also unsure why this function both uses find_first_zero_bit and
> counts the lrs it has unassigned in the "lrs" variable. Is one or the
> other not sufficient?

find_first_zero_bit is used to find free LRs.
lrs is used to limit the iterations of the outer loop: no point in
trying to fill or substitute more than nr_lrs LRs as lr_pending is
already ordered by priority.


> > +            }
> > +            goto out;
> > +
> > +found:
> > +            i = p_r->lr;
> > +            p_r->lr = GIC_INVALID_LR;
> > +            set_bit(GIC_IRQ_GUEST_PENDING, &p_r->status);
> > +            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
> > +            gic_add_to_lr_pending(v, p_r);
> > +        }
> >  
> > -        spin_lock_irqsave(&v->arch.vgic.lock, flags);
> >          gic_set_lr(i, p, GICH_LR_PENDING);
> >          list_del_init(&p->lr_queue);
> >          set_bit(i, &this_cpu(lr_mask));
> > -        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > +
> > +        lrs--;
> > +        if ( lrs == 0 )
> > +            break;
> >      }
> >  
> > +out:
> > +    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> >  }
> >  
> >  void gic_clear_pending_irqs(struct vcpu *v)
> > @@ -794,8 +825,40 @@ void gic_clear_pending_irqs(struct vcpu *v)
> >  
> >  int gic_events_need_delivery(void)
> >  {
> > -    return (!list_empty(&current->arch.vgic.lr_pending) ||
> > -            this_cpu(lr_mask));
> > +    int mask_priority, lrs = nr_lrs;
> > +    int max_priority = 0xff, active_priority = 0xff;
> > +    struct vcpu *v = current;
> > +    struct pending_irq *p;
> > +    unsigned long flags;
> > +
> > +    mask_priority = (GICH[GICH_VMCR] >> GICH_VMCR_PRIORITY_SHIFT) & GICH_VMCR_PRIORITY_MASK;
> > +
> > +    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > +
> > +    /* TODO: We order the guest irqs by priority, but we don't change
> > +     * the priority of host irqs. */
> > +    list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight )
> > +    {
> > +        if ( test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) )
> > +        {
> > +            if ( p->priority < active_priority )
> > +                active_priority = p->priority;
> > +        } else if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) ) {
> > +            if ( p->priority < max_priority )
> > +                max_priority = p->priority;
> > +        }
> 
> Can't you do the comparison against mask_priority in this loop and
> simply return true as soon as you find an interrupt which needs an
> upcall?

We need to know the active_priority before we can return. Only if an
outstanding irq exists with priority higher than active_priority (and
mask_priority) can we return true.

 
> Can't you also stop searching when p->priority > mask_priority, since
> inflight_irqs is order you know all the rest of the list has lower
> (numerically higher) priority,

This is a good point.  We can also stop as soon as we find the irq with
GIC_IRQ_GUEST_ACTIVE, because we know that going forward all the others
are going to have a lower priority.
I'll make the changes.


> > +        lrs--;
> > +        if ( lrs == 0 )
> > +            break;
> > +    }
> > +
> > +    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > +
> > +    if ( max_priority < active_priority &&
> > +         (max_priority >> 3) < mask_priority )
> 
> Why >> 3? Something to do with our emulation or the BPR?

Only 5 bits are implemented for the guest irq priority, both in GICH_LRs
and in GICH_VMCR. See gic_set_lr.


> > +        return 1;
> > +    else
> > +        return 0;
> >  }
> 
> Ian.
> 
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v6 10/10] xen/arm: gic_events_need_delivery and irq priorities
  2014-04-06 19:27     ` Stefano Stabellini
@ 2014-04-07  9:43       ` Ian Campbell
  2014-04-07 10:30         ` Stefano Stabellini
  0 siblings, 1 reply; 20+ messages in thread
From: Ian Campbell @ 2014-04-07  9:43 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, xen-devel

On Sun, 2014-04-06 at 20:27 +0100, Stefano Stabellini wrote:
> On Thu, 3 Apr 2014, Ian Campbell wrote:
> > On Wed, 2014-04-02 at 16:01 +0100, Stefano Stabellini wrote:
> > > gic_events_need_delivery should only return positive if an outstanding
> > > pending irq has an higher priority than the currently active irq and the
> > > priority mask.
> > > Rewrite the function by going through the priority ordered inflight and
> > > lr_queue lists.
> > > 
> > > In gic_restore_pending_irqs replace lower priority pending (and not
> > > active) irqs in GICH_LRs with higher priority irqs if no more GICH_LRs
> > > are available.
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > 
> > > ---
> > > Changes in v5:
> > > - improve in code comments;
> > > - use list_for_each_entry_reverse instead of writing my own list walker.
> > > 
> > > Changes in v4:
> > > - in gic_events_need_delivery go through inflight_irqs and only consider
> > > enabled irqs.
> > > ---
> > >  xen/arch/arm/gic.c           |   77 ++++++++++++++++++++++++++++++++++++++----
> > >  xen/include/asm-arm/domain.h |    5 +--
> > >  xen/include/asm-arm/gic.h    |    3 ++
> > >  3 files changed, 76 insertions(+), 9 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > > index 611990d..7faa0e9 100644
> > > --- a/xen/arch/arm/gic.c
> > > +++ b/xen/arch/arm/gic.c
> > > @@ -721,6 +721,7 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
> > 
> > Looking at the complete function, a better name would include "try"
> > somewhere, since it doesn't unconditionally clear things. In fact it's
> > almost more of a "sync the h/w and s/w state" function, is it?
> 
> Yes, that is true. I'll rename it to gic_update_one_lr.

Thanks, I think that'll help clarify the stuff I asked about in patch #8
too.

> > >      p = irq_to_pending(v, irq);
> > >      if ( lr & GICH_LR_ACTIVE )
> > >      {
> > > +        set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
> > 
> > This appears to be unmotivated by the commit message. Is this just a
> > case of needing to more carefully manage the software ACTIVE state to
> > match the h/w?
> 
> The new GIC_IRQ_GUEST_ACTIVE bit is needed to track which one (if any)
> is the currenly active irq. We need the information to calculate
> priorities in gic_events_need_delivery.
> 
> I'll add a note to the commit message to clarify why we are introducing
> it.

OK, hopefully I'll understand the comment ;-)

> > If this lock free check is safe for some reason then please add a
> > comment.
> > 
> > > +
> > > +    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > > +
> > >      list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
> > >      {
> > >          i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
> > > -        if ( i >= nr_lrs ) return;
> > > +        if ( i >= nr_lrs )
> > 
> > This where we find the LRs are full and we look for a lower priority
> > interrupt to evict from the LRs?
> 
> Yep.
> 
> 
> > Isn't this check ~= lr_all_full, which
> > would be more obvious, and avoid a find_first_zero_bit in the case where
> > things are full.
> 
> Consider that we are inside a list_for_each_entry_safe loop, adding
> lr_pending irqs to GICH_LR registers one by one: we need the
> find_first_zero_bit.

Shouldn't/couldn't you remember the answer from last time and start the
search from there? Currently it seems to search the whole bitmask every
time. (I think what I'm suggesting is roughly equivalent to passing i
instead of nr_lrs, with suitable initialisation and ++ of i at the
appropriate points)

> > > +        {
> > > +            list_for_each_entry_reverse( p_r,
> > > +                                         &v->arch.vgic.inflight_irqs,
> > > +                                         inflight )
> > > +            {
> > > +                if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) &&
> > > +                     !test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
> > > +                    goto found;
> > > +                if ( &p_r->inflight == p->inflight.next )
> > > +                    goto out;
> > 
> > Can't we stop the search as soon as we find a higher priority interrupt
> > than the one we are trying to inject?
> 
> Actually we are already doing that. Currently we stop when we find a
> suitable lower priority irq, or when the next irq in inflight_irqs is
> the same that we are already processing from lr_pending. Given that both
> lr_pending and inflight_irqs are ordered by priority (a new irq is
> inserted between the last irq with the same priority and the first with
> lower priority) and that we are walking inflight_irqs in reverse, it is
> the same as you are suggesting.  We are stopping when the next in line
> is the irq we are already evaluating from lr_pending, therefore we know
> for sure that all the following irqs in inflight_irqs have the same or
> higher priority.
> 
> 
> > Is this nested loop O(n^2) in the number of inflight interrupts?
> 
> We only run the outer loop nr_lrs times (see the lrs == 0 check).

The important thing here is how many times we run the inner loop for
each of those nr_lrs times.

> Also:
> 
> - the lists are ordered by priority and we exploit that
> - lr_pending is a subset of inflight
> - we stop as soon as we exaust the lower priority irqs that we can evict
> - nr_lrs is very limited and we can at most do nr_lrs substitutions
> 
> In my tests thanks to the list ordering it actually takes only one step
> of the inner loop to find an irq to evict and the case when we need to
> do any evicting at all is very rare.
> 
> 
> > Is there a possible algorithm which involves picking the head from
> > whichever of inflight_irqs or pending_irqs has the highest priority,
> > until the LRs are full (or the lists are empty) and then putting the
> > remainder of pending back into the inflight list?
> > 
> > Or perhaps because we know that the two lists are sorted we can avoid a
> > complete search of inflight_irqs on every outer loop by remembering how
> > far we got last time and restarting there? i.e. if you gave up on an
> > interrupt with priority N+1 last time then there isn't much point in
> > checking for an interrupt with priority N this time around.
> 
> This is exactly what my previous version of the patch did:
> 
> http://marc.info/?l=xen-devel&m=139523245301114
> 
> but you asked me to change it :-)

You mean that by my asking you to use the macros I ruled this approach out?
Oops ;-)

> > > +            }
> > > +            goto out;
> > > +
> > > +found:
> > > +            i = p_r->lr;
> > > +            p_r->lr = GIC_INVALID_LR;
> > > +            set_bit(GIC_IRQ_GUEST_PENDING, &p_r->status);
> > > +            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
> > > +            gic_add_to_lr_pending(v, p_r);
> > > +        }
> > >  
> > > -        spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > >          gic_set_lr(i, p, GICH_LR_PENDING);
> > >          list_del_init(&p->lr_queue);
> > >          set_bit(i, &this_cpu(lr_mask));
> > > -        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > > +
> > > +        lrs--;
> > > +        if ( lrs == 0 )
> > > +            break;
> > >      }
> > >  
> > > +out:
> > > +    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > >  }
> > >  
> > >  void gic_clear_pending_irqs(struct vcpu *v)
> > > @@ -794,8 +825,40 @@ void gic_clear_pending_irqs(struct vcpu *v)
> > >  
> > >  int gic_events_need_delivery(void)
> > >  {
> > > -    return (!list_empty(&current->arch.vgic.lr_pending) ||
> > > -            this_cpu(lr_mask));
> > > +    int mask_priority, lrs = nr_lrs;
> > > +    int max_priority = 0xff, active_priority = 0xff;
> > > +    struct vcpu *v = current;
> > > +    struct pending_irq *p;
> > > +    unsigned long flags;
> > > +
> > > +    mask_priority = (GICH[GICH_VMCR] >> GICH_VMCR_PRIORITY_SHIFT) & GICH_VMCR_PRIORITY_MASK;
> > > +
> > > +    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > > +
> > > +    /* TODO: We order the guest irqs by priority, but we don't change
> > > +     * the priority of host irqs. */
> > > +    list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight )
> > > +    {
> > > +        if ( test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) )
> > > +        {
> > > +            if ( p->priority < active_priority )
> > > +                active_priority = p->priority;
> > > +        } else if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) ) {
> > > +            if ( p->priority < max_priority )
> > > +                max_priority = p->priority;
> > > +        }
> > 
> > Can't you do the comparison against mask_priority in this loop and
> > simply return true as soon as you find an interrupt which needs an
> > upcall?
> 
> We need to know the active_priority before we can return. Only if an
> outstanding irq exists with priority higher than active_priority (and
> mask_priority) we can return true.

"active_priority" here is the priority of the highest priority active
interrupt I think? You don't actually need to know what the highest is,
just that one exists which is higher than the threshold, don't you?

Likewise max_priority is the priority of the highest priority interrupt
which the guest hasn't masked? Again don't you just need an existence
proof of one which exceeds the required threshold rather than the strict
highest?

> > Can't you also stop searching when p->priority > mask_priority, since
> > inflight_irqs is order you know all the rest of the list has lower
> > (numerically higher) priority,
> 
> This is a good point.  We can also stop as soon as we find the irq with
> GIC_IRQ_GUEST_ACTIVE, because we know than going forward all the others
> are going to have a lower priority.
> I'll make the changes.
> 
> 
> > > +        lrs--;
> > > +        if ( lrs == 0 )
> > > +            break;
> > > +    }
> > > +
> > > +    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > > +
> > > +    if ( max_priority < active_priority &&
> > > +         (max_priority >> 3) < mask_priority )
> > 
> > Why >> 3? Something to do with our emulation or the BPR?
> 
> Only 5 bits are implemented for the guest irq priority, both in GICH_LRs
> and in GICH_VMCR. See gic_set_lr.

Looking at the docs it seems like the LRs have lower precision than
GICD_IPRIORITYRn, how odd. So this is a hardware limit rather than an
artefact of our implementation? (I notice there are 3 reserved bits in
the LR right where the missing priority bits would sit... /me strokes
beard). I guess this ties in with GICH_APR only being 32 bits.

I'd like to see the >>3 replaced by a #define both here and gic_set_lr
please.

> 
> 
> > > +        return 1;
> > > +    else
> > > +        return 0;
> > >  }
> > 
> > Ian.
> > 
> > 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v6 10/10] xen/arm: gic_events_need_delivery and irq priorities
  2014-04-07  9:43       ` Ian Campbell
@ 2014-04-07 10:30         ` Stefano Stabellini
  0 siblings, 0 replies; 20+ messages in thread
From: Stefano Stabellini @ 2014-04-07 10:30 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, Stefano Stabellini

On Mon, 7 Apr 2014, Ian Campbell wrote:
> > > Isn't this check ~= lr_all_full, which
> > > would be more obvious, and avoid a find_first_zero_bit in the case where
> > > things are full.
> > 
> > Consider that we are inside a list_for_each_entry_safe loop, adding
> > lr_pending irqs to GICH_LR registers one by one: we need the
> > find_first_zero_bit.
> 
> Shouldn't/couldn't you remember the answer from last time and start the
> search from there. Currently it seems to search the whole bitmask every
> time. (I think what I'm suggesting is roughly equivalent too passing i
> instead of nr_lrs, with suitable initialisation and ++ of i at the
> appropriate points)

Yeah, I could use find_next_zero_bit instead of find_first_zero_bit.
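
i.e. something along these lines (untested sketch, with i initialised to 0
before the loop):

    /* continue scanning lr_mask from the slot found last time round the
     * loop instead of restarting from bit 0 */
    i = find_next_zero_bit(&this_cpu(lr_mask), nr_lrs, i);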


> > > > +        {
> > > > +            list_for_each_entry_reverse( p_r,
> > > > +                                         &v->arch.vgic.inflight_irqs,
> > > > +                                         inflight )
> > > > +            {
> > > > +                if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) &&
> > > > +                     !test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
> > > > +                    goto found;
> > > > +                if ( &p_r->inflight == p->inflight.next )
> > > > +                    goto out;
> > > 
> > > Can't we stop the search as soon as we find a higher priority interrupt
> > > than the one we are trying to inject?
> > 
> > Actually we are already doing that. Currently we stop when we find a
> > suitable lower priority irq, or when the next irq in inflight_irqs is
> > the same that we are already processing from lr_pending. Given that both
> > lr_pending and inflight_irqs are ordered by priority (a new irq is
> > inserted between the last irq with the same priority and the first with
> > lower priority) and that we are walking inflight_irqs in reverse, it is
> > the same as you are suggesting.  We are stopping when the next in line
> > is the irq we are already evaluating from lr_pending, therefore we know
> > for sure that all the following irqs in inflight_irqs have the same or
> > higher priority.
> > 
> > 
> > > Is this nested loop O(n^2) in the number of inflight interrupts?
> > 
> > We only run the outer loop nr_lrs times (see the lrs == 0 check).
> 
> The important thing here is how many times we run the inner loop for
> each of those nr_lrs times.
> 
> > Also:
> > 
> > - the lists are ordered by priority and we exploit that
> > - lr_pending is a subset of inflight
> > - we stop as soon as we exaust the lower priority irqs that we can evict
> > - nr_lrs is very limited and we can at most do nr_lrs substitutions
> > 
> > In my tests thanks to the list ordering it actually takes only one step
> > of the inner loop to find an irq to evict and the case when we need to
> > do any evicting at all is very rare.
> > 
> > 
> > > Is there a possible algorithm which involves picking the head from
> > > whichever of inflight_irqs or pending_irqs has the highest priority,
> > > until the LRs are full (or the lists are empty) and then putting the
> > > remainder of pending back into the inflight list?
> > > 
> > > Or perhaps because we know that the two lists are sorted we can avoid a
> > > complete search of inflight_irqs on every outer loop by remembering how
> > > far we got last time and restarting there? i.e. if you gave up on an
> > > interrupt with priority N+1 last time then there isn't much point in
> > > checking for an interrupt with priority N this time around.
> > 
> > This is exactly what my previous version of the patch did:
> > 
> > http://marc.info/?l=xen-devel&m=139523245301114
> > 
> > but you asked me to change it :-)
> 
> You mean by my asking to use the macros I ruled this approach out?
> Oops ;-)

I think I can find a way to do both.


> > > > +            }
> > > > +            goto out;
> > > > +
> > > > +found:
> > > > +            i = p_r->lr;
> > > > +            p_r->lr = GIC_INVALID_LR;
> > > > +            set_bit(GIC_IRQ_GUEST_PENDING, &p_r->status);
> > > > +            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
> > > > +            gic_add_to_lr_pending(v, p_r);
> > > > +        }
> > > >  
> > > > -        spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > > >          gic_set_lr(i, p, GICH_LR_PENDING);
> > > >          list_del_init(&p->lr_queue);
> > > >          set_bit(i, &this_cpu(lr_mask));
> > > > -        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > > > +
> > > > +        lrs--;
> > > > +        if ( lrs == 0 )
> > > > +            break;
> > > >      }
> > > >  
> > > > +out:
> > > > +    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > > >  }
> > > >  
> > > >  void gic_clear_pending_irqs(struct vcpu *v)
> > > > @@ -794,8 +825,40 @@ void gic_clear_pending_irqs(struct vcpu *v)
> > > >  
> > > >  int gic_events_need_delivery(void)
> > > >  {
> > > > -    return (!list_empty(&current->arch.vgic.lr_pending) ||
> > > > -            this_cpu(lr_mask));
> > > > +    int mask_priority, lrs = nr_lrs;
> > > > +    int max_priority = 0xff, active_priority = 0xff;
> > > > +    struct vcpu *v = current;
> > > > +    struct pending_irq *p;
> > > > +    unsigned long flags;
> > > > +
> > > > +    mask_priority = (GICH[GICH_VMCR] >> GICH_VMCR_PRIORITY_SHIFT) & GICH_VMCR_PRIORITY_MASK;
> > > > +
> > > > +    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > > > +
> > > > +    /* TODO: We order the guest irqs by priority, but we don't change
> > > > +     * the priority of host irqs. */
> > > > +    list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight )
> > > > +    {
> > > > +        if ( test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) )
> > > > +        {
> > > > +            if ( p->priority < active_priority )
> > > > +                active_priority = p->priority;
> > > > +        } else if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) ) {
> > > > +            if ( p->priority < max_priority )
> > > > +                max_priority = p->priority;
> > > > +        }
> > > 
> > > Can't you do the comparison against mask_priority in this loop and
> > > simply return true as soon as you find an interrupt which needs an
> > > upcall?
> > 
> > We need to know the active_priority before we can return. Only if an
> > outstanding irq exists with priority higher than active_priority (and
> > mask_priority) we can return true.
> 
> "active_priority" here is the priority of the highest priority active
> interrupt I think? You don't actually need to know what the highest is,
> just that one exists which is higher than the threshold, don't you?

No, we need to know the highest because only if max_priority is higher
than active_priority should the function return true.


> Likewise max_priority is the priority of highest priority interrupt
> which the guest hasn't masked? Again don't you just need an existence
> proof of one which exceeds the required threshold rather than the strict
> highest?

We need to know that it exceeds the required threshold AND that it is
higher than active_priority.
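
For example (numbers made up purely for illustration):

    /* highest ACTIVE irq priority:           0x50
     * highest ENABLED pending irq priority:  0x20
     * GICH_VMCR priority mask (5-bit field): 0x10
     *
     * max_priority < active_priority       -> 0x20 < 0x50 : true
     * (max_priority >> 3) < mask_priority  -> 0x04 < 0x10 : true
     * => gic_events_need_delivery() returns 1.
     *
     * With a pending priority of 0x60 instead, 0x60 < 0x50 is false, so we
     * return 0 even though 0x60 >> 3 == 0x0c is still below the mask. */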


> > > Can't you also stop searching when p->priority > mask_priority, since
> > > inflight_irqs is order you know all the rest of the list has lower
> > > (numerically higher) priority,
> > 
> > This is a good point.  We can also stop as soon as we find the irq with
> > GIC_IRQ_GUEST_ACTIVE, because we know than going forward all the others
> > are going to have a lower priority.
> > I'll make the changes.
> > 
> > 
> > > > +        lrs--;
> > > > +        if ( lrs == 0 )
> > > > +            break;
> > > > +    }
> > > > +
> > > > +    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > > > +
> > > > +    if ( max_priority < active_priority &&
> > > > +         (max_priority >> 3) < mask_priority )
> > > 
> > > Why >> 3? Something to do with our emulation or the BPR?
> > 
> > Only 5 bits are implemented for the guest irq priority, both in GICH_LRs
> > and in GICH_VMCR. See gic_set_lr.
> 
> Looking at the docs it seems like the LRs have lower precision than
> GICD_IPRIORITYRn, how odd. So this is a hardware limit rather an
> artefact of our implementation? (I notice there are 3 reserved bits in
> the LR right where the missing priority bits would sit... /me strokes
> beard). I guess this ties in with GICH_APR only being 32 bits.
> 
> I'd like to see the >>3 replaced by a #define both here and gic_set_lr
> please.

OK, I'll add a new patch at the end of the series.
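
Something along these lines, say (names made up here, not the final patch):

    /* GICH_LR and GICH_VMCR only implement the upper 5 bits of the 8-bit
     * GICD_IPRIORITYRn priority, so drop the bottom three bits whenever a
     * priority is written to, or compared against, a GICH register field. */
    #define GICH_LR_PRIORITY_DROP   3
    #define GIC_PRI_TO_GUEST(pri)   ((pri) >> GICH_LR_PRIORITY_DROP)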


> > 
> > 
> > > > +        return 1;
> > > > +    else
> > > > +        return 0;
> > > >  }
> > > 
> > > Ian.
> > > 
> > > 
> 
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH v6 08/10] xen/arm: second irq injection while the first irq is still inflight
  2014-03-27 15:12 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
@ 2014-03-27 15:13 ` Stefano Stabellini
  0 siblings, 0 replies; 20+ messages in thread
From: Stefano Stabellini @ 2014-03-27 15:13 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, Ian.Campbell, Stefano Stabellini

Set GICH_LR_PENDING in the corresponding GICH_LR to inject a second irq
while the first one is still active.
If the first irq is already pending (not active), just clear
GIC_IRQ_GUEST_PENDING because the irq has already been injected and is
already visible by the guest.
If the irq has already been EOI'ed then just clear the GICH_LR right
away and move the interrupt to lr_pending so that it is going to be
reinjected by gic_restore_pending_irqs on return to guest.

If the target cpu is not the current cpu, then set GIC_IRQ_GUEST_PENDING
and send an SGI. The target cpu is going to be interrupted and call
gic_clear_lrs, that is going to take the same actions.

Unify the inflight and non-inflight code paths in vgic_vcpu_inject_irq.

Do not call vgic_vcpu_inject_irq from gic_inject if
evtchn_upcall_pending is set. If we remove that call, we don't need to
special case evtchn_irq in vgic_vcpu_inject_irq anymore.
We also need to force the first injection of evtchn_irq (call
gic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
is already set by common code on vcpu creation.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v3:
- do not use the PENDING and ACTIVE state for HW interrupts;
- unify the inflight and non-inflight code paths in
vgic_vcpu_inject_irq.
---
 xen/arch/arm/gic.c  |   37 ++++++++++++++++++++++++-------------
 xen/arch/arm/vgic.c |   33 +++++++++++++++++----------------
 2 files changed, 41 insertions(+), 29 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 40e768e..218e3c6 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -680,6 +680,14 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
 {
     int i;
     unsigned long flags;
+    struct pending_irq *n = irq_to_pending(v, virtual_irq);
+
+    if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status))
+    {
+        if ( v == current )
+            gic_clear_one_lr(v, n->lr);
+        return;
+    }
 
     spin_lock_irqsave(&gic.lock, flags);
 
@@ -705,20 +713,27 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
     struct pending_irq *p;
     uint32_t lr;
     int irq;
-    bool_t inflight;
 
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
 
     lr = GICH[GICH_LR + i];
-    if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
+    irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+    p = irq_to_pending(v, irq);
+    if ( lr & GICH_LR_ACTIVE )
     {
-        inflight = 0;
+        /* HW interrupts cannot be ACTIVE and PENDING */
+        if ( p->desc == NULL &&
+             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
+             test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
+            GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
+    } else if ( lr & GICH_LR_PENDING ) {
+        clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
+    } else {
+        spin_lock(&gic.lock);
+
         GICH[GICH_LR + i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
-        irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
-        spin_lock(&gic.lock);
-        p = irq_to_pending(v, irq);
         if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
         clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
@@ -726,12 +741,11 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
         if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
                 test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
         {
-            inflight = 1;
             gic_raise_guest_irq(v, irq, p->priority);
-        }
-        spin_unlock(&gic.lock);
-        if ( !inflight )
+        } else
             list_del_init(&p->inflight);
+
+        spin_unlock(&gic.lock);
     }
 }
 
@@ -791,9 +805,6 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
-    if ( vcpu_info(current, evtchn_upcall_pending) )
-        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
-
     gic_restore_pending_irqs(current);
 
     if ( !list_empty(&current->arch.vgic.lr_pending) && lr_all_full() )
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 3913cf5..dc3a75f 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -389,7 +389,11 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+        if ( irq == v->domain->arch.evtchn_irq &&
+             vcpu_info(current, evtchn_upcall_pending) &&
+             list_empty(&p->inflight) )
+            vgic_vcpu_inject_irq(v, irq);
+        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
             gic_raise_guest_irq(v, irq, p->priority);
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
@@ -696,14 +700,6 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
-    if ( !list_empty(&n->inflight) )
-    {
-        if ( (irq != current->domain->arch.evtchn_irq) ||
-             (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
-            set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        goto out;
-    }
-
     /* vcpu offline */
     if ( test_bit(_VPF_down, &v->pause_flags) )
     {
@@ -715,21 +711,26 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     n->irq = irq;
     set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-    n->priority = priority;
 
     /* the irq is enabled */
     if ( test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
         gic_raise_guest_irq(v, irq, priority);
 
-    list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
+    if ( list_empty(&n->inflight) )
     {
-        if ( iter->priority > priority )
+        n->priority = priority;
+        list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
         {
-            list_add_tail(&n->inflight, &iter->inflight);
-            goto out;
+            if ( iter->priority > priority )
+            {
+                list_add_tail(&n->inflight, &iter->inflight);
+                goto out;
+            }
         }
-    }
-    list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
+        list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
+    } else if ( n->priority != priority )
+        gdprintk(XENLOG_WARNING, "Changing priority of an inflight interrupt is not supported");
+
 out:
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     /* we have a new higher priority irq, inject it into the guest */
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2014-04-07 10:30 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-04-02 15:00 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
2014-04-02 15:01 ` [PATCH v6 01/10] xen/arm: no need to set HCR_VI when using the vgic to inject irqs Stefano Stabellini
2014-04-02 15:01 ` [PATCH v6 02/10] xen/arm: remove unused virtual parameter from vgic_vcpu_inject_irq Stefano Stabellini
2014-04-02 15:01 ` [PATCH v6 03/10] xen/arm: set GICH_HCR_UIE if all the LRs are in use Stefano Stabellini
2014-04-02 15:01 ` [PATCH v6 04/10] xen/arm: support HW interrupts, do not request maintenance_interrupts Stefano Stabellini
2014-04-03 10:55   ` Ian Campbell
2014-04-02 15:01 ` [PATCH v6 05/10] xen/arm: nr_lrs should be uint8_t Stefano Stabellini
2014-04-02 15:01 ` [PATCH v6 06/10] xen/arm: keep track of the GICH_LR used for the irq in struct pending_irq Stefano Stabellini
2014-04-02 15:01 ` [PATCH v6 07/10] xen/arm: s/gic_set_guest_irq/gic_raise_guest_irq Stefano Stabellini
2014-04-02 15:01 ` [PATCH v6 08/10] xen/arm: second irq injection while the first irq is still inflight Stefano Stabellini
2014-04-03 11:12   ` Ian Campbell
2014-04-06 16:46     ` Stefano Stabellini
2014-04-02 15:01 ` [PATCH v6 09/10] xen/arm: don't protect GICH and lr_queue accesses with gic.lock Stefano Stabellini
2014-04-03 11:14   ` Ian Campbell
2014-04-02 15:01 ` [PATCH v6 10/10] xen/arm: gic_events_need_delivery and irq priorities Stefano Stabellini
2014-04-03 11:37   ` Ian Campbell
2014-04-06 19:27     ` Stefano Stabellini
2014-04-07  9:43       ` Ian Campbell
2014-04-07 10:30         ` Stefano Stabellini
  -- strict thread matches above, loose matches on Subject: below --
2014-03-27 15:12 [PATCH v6 0/10] remove maintenance interrupts Stefano Stabellini
2014-03-27 15:13 ` [PATCH v6 08/10] xen/arm: second irq injection while the first irq is still inflight Stefano Stabellini
