* [RFC PATCH v2 00/22] ARM: vGIC rework (attempt)
@ 2017-07-21 19:59 Andre Przywara
  2017-07-21 19:59 ` [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock Andre Przywara
                   ` (21 more replies)
  0 siblings, 22 replies; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Hi,

this is the first part of the attempt to rewrite the VGIC to solve the
issues we discovered when adding the ITS emulation.
The problems we identified resulted in the following list of things that
need fixing:
1) introduce a per-IRQ lock
2) remove the IRQ rank scheme (of storing IRQ properties)
3) simplify the VCPU IRQ lists (getting rid of lr_queue)
4) introduce reference counting for struct pending_irq's
5) properly handle level triggered IRQs

This series addresses the first two points. I tried to move point 3) up
and fix that first, but that turned out to depend on both points 1)
and 2), so we have this order now. Still having the two lists makes
things somewhat more complicated, but I think this is as good as it can
get. After addressing point 3) (in a later post) the end result will
look much better. I have some code for 3) and 5) already, but we need
to agree on the first steps first.

This is a bit of open-heart surgery, as we try to change a locking
scheme while staying bisectable (both in terms of compilability *and*
runnability) and still having reviewable chunks.
To help with reviewing I tried to split the patches up as much as possible.
Changes which are independent or introduce new functions are kept separate;
the motivation for some of them becomes apparent only later.
The rough idea of this series is to introduce the VGIC IRQ lock itself
first, then move each of the rank members into struct pending_irq, adjusting
the locking for that at the same time. To make the changes a bit smaller, I
fixed some read locks in separate patches after the "move" patch.
Also patch 09 adjusts the locking for setting the priority in the ITS,
which is technically needed in patch 08 already, but was moved out for the
sake of reviewability. It might be squashed into patch 08 upon merging.

As hinted above, still having to cope with two lists leads to some
atrocities, namely in patch 03. This hideousness will vanish once the
requirement of queueing an IRQ at that early stage goes away.

This is still somewhat work-in-progress, but I wanted to share the code
anyway, since I spent way too much time on it (rewriting it several times
along the way) and I am keen on a fresh pair of eyes having a look.
Currently the target VCPU move (patch 18) leads to a deadlock, and I simply
ran out of time (before going on holidays) to debug this.
So if someone could have a look to see if this approach in general looks
good, I'd be grateful. I know that there is optimization potential (some
functions can surely be refactored), but I'd rather do one step after the
other.

Cheers,
Andre.

Andre Przywara (22):
  ARM: vGIC: introduce and initialize pending_irq lock
  ARM: vGIC: route/remove_irq: replace rank lock with IRQ lock
  ARM: vGIC: move gic_raise_inflight_irq() into vgic_vcpu_inject_irq()
  ARM: vGIC: rename pending_irq->priority to cur_priority
  ARM: vITS: rename pending_irq->lpi_priority to priority
  ARM: vGIC: introduce locking routines for multiple IRQs
  ARM: vGIC: introduce priority setter/getter
  ARM: vGIC: move virtual IRQ priority from rank to pending_irq
  ARM: vITS: protect LPI priority update with pending_irq lock
  ARM: vGIC: protect gic_set_lr() with pending_irq lock
  ARM: vGIC: protect gic_events_need_delivery() with pending_irq lock
  ARM: vGIC: protect gic_update_one_lr() with pending_irq lock
  ARM: vITS: remove no longer needed lpi_priority wrapper
  ARM: vGIC: move virtual IRQ configuration from rank to pending_irq
  ARM: vGIC: rework vgic_get_target_vcpu to take a pending_irq
  ARM: vITS: rename lpi_vcpu_id to vcpu_id
  ARM: vGIC: introduce vgic_lock_vcpu_irq()
  ARM: vGIC: move virtual IRQ target VCPU from rank to pending_irq
  ARM: vGIC: rework vgic_get_target_vcpu to take a domain instead of
    vcpu
  ARM: vGIC: move virtual IRQ enable bit from rank to pending_irq
  ARM: vITS: injecting LPIs: use pending_irq lock
  ARM: vGIC: remove remaining irq_rank code

 xen/arch/arm/gic-v2.c        |   2 +-
 xen/arch/arm/gic-v3-lpi.c    |  14 +-
 xen/arch/arm/gic-v3.c        |   2 +-
 xen/arch/arm/gic.c           |  96 ++++----
 xen/arch/arm/vgic-v2.c       | 161 ++++---------
 xen/arch/arm/vgic-v3-its.c   |  42 ++--
 xen/arch/arm/vgic-v3.c       | 182 +++++----------
 xen/arch/arm/vgic.c          | 521 +++++++++++++++++++++++++++----------------
 xen/include/asm-arm/domain.h |   6 +-
 xen/include/asm-arm/gic.h    |   2 +-
 xen/include/asm-arm/vgic.h   | 114 +++-------
 11 files changed, 540 insertions(+), 602 deletions(-)

-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
@ 2017-07-21 19:59 ` Andre Przywara
  2017-08-10 15:19   ` Julien Grall
  2017-08-10 15:35   ` Julien Grall
  2017-07-21 19:59 ` [RFC PATCH v2 02/22] ARM: vGIC: route/remove_irq: replace rank lock with IRQ lock Andre Przywara
                   ` (20 subsequent siblings)
  21 siblings, 2 replies; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Currently we protect the pending_irq structure with the corresponding
VGIC VCPU lock. There are problems in certain corner cases (for
instance if an IRQ is migrating), so let's introduce a per-IRQ lock,
which will protect the consistency of this structure independent from
any VCPU.
For now this just introduces and initializes the lock; it also adds
wrapper macros to simplify its usage (and to help debugging).

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic.c        |  1 +
 xen/include/asm-arm/vgic.h | 11 +++++++++++
 2 files changed, 12 insertions(+)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 1e5107b..38dacd3 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -69,6 +69,7 @@ void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
     memset(p, 0, sizeof(*p));
     INIT_LIST_HEAD(&p->inflight);
     INIT_LIST_HEAD(&p->lr_queue);
+    spin_lock_init(&p->lock);
     p->irq = virq;
     p->lpi_vcpu_id = INVALID_VCPU_ID;
 }
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index d4ed23d..1c38b9a 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -90,6 +90,14 @@ struct pending_irq
      * TODO: when implementing irq migration, taking only the current
      * vgic lock is not going to be enough. */
     struct list_head lr_queue;
+    /* The lock protects the consistency of this structure. A single status bit
+     * can be read and/or set without holding the lock using the atomic
+     * set_bit/clear_bit/test_bit functions, however accessing multiple bits or
+     * relating to other members in this struct requires the lock.
+     * The list_head members are protected by their corresponding VCPU lock,
+     * it is not sufficient to hold this pending_irq lock here to query or
+     * change list order or affiliation. */
+    spinlock_t lock;
 };
 
 #define NR_INTERRUPT_PER_RANK   32
@@ -156,6 +164,9 @@ struct vgic_ops {
 #define vgic_lock(v)   spin_lock_irq(&(v)->domain->arch.vgic.lock)
 #define vgic_unlock(v) spin_unlock_irq(&(v)->domain->arch.vgic.lock)
 
+#define vgic_irq_lock(p, flags) spin_lock_irqsave(&(p)->lock, flags)
+#define vgic_irq_unlock(p, flags) spin_unlock_irqrestore(&(p)->lock, flags)
+
 #define vgic_lock_rank(v, r, flags)   spin_lock_irqsave(&(r)->lock, flags)
 #define vgic_unlock_rank(v, r, flags) spin_unlock_irqrestore(&(r)->lock, flags)
 
-- 
2.9.0



* [RFC PATCH v2 02/22] ARM: vGIC: route/remove_irq: replace rank lock with IRQ lock
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
  2017-07-21 19:59 ` [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock Andre Przywara
@ 2017-07-21 19:59 ` Andre Przywara
  2017-07-21 19:59 ` [RFC PATCH v2 03/22] ARM: vGIC: move gic_raise_inflight_irq() into vgic_vcpu_inject_irq() Andre Przywara
                   ` (19 subsequent siblings)
  21 siblings, 0 replies; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

So far the rank lock protects the physical IRQ routing for a
particular virtual IRQ (though this doesn't seem to be documented
anywhere). So although these functions don't really touch the rank
structure, the lock prevents them from running concurrently.
This is a bit of a kludge, so now that we have our newly introduced
per-IRQ lock, we can use that instead to get more natural protection
(and remove the first rank user).

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 6c803bf..2c99d71 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -139,9 +139,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
     unsigned long flags;
     /* Use vcpu0 to retrieve the pending_irq struct. Given that we only
      * route SPIs to guests, it doesn't make any difference. */
-    struct vcpu *v_target = vgic_get_target_vcpu(d->vcpu[0], virq);
-    struct vgic_irq_rank *rank = vgic_rank_irq(v_target, virq);
-    struct pending_irq *p = irq_to_pending(v_target, virq);
+    struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
     int res = -EBUSY;
 
     ASSERT(spin_is_locked(&desc->lock));
@@ -150,7 +148,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
     ASSERT(virq < vgic_num_irqs(d));
     ASSERT(!is_lpi(virq));
 
-    vgic_lock_rank(v_target, rank, flags);
+    vgic_irq_lock(p, flags);
 
     if ( p->desc ||
          /* The VIRQ should not be already enabled by the guest */
@@ -168,7 +166,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
     res = 0;
 
 out:
-    vgic_unlock_rank(v_target, rank, flags);
+    vgic_irq_unlock(p, flags);
 
     return res;
 }
@@ -177,9 +175,7 @@ out:
 int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
                               struct irq_desc *desc)
 {
-    struct vcpu *v_target = vgic_get_target_vcpu(d->vcpu[0], virq);
-    struct vgic_irq_rank *rank = vgic_rank_irq(v_target, virq);
-    struct pending_irq *p = irq_to_pending(v_target, virq);
+    struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
     unsigned long flags;
 
     ASSERT(spin_is_locked(&desc->lock));
@@ -187,7 +183,7 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
     ASSERT(p->desc == desc);
     ASSERT(!is_lpi(virq));
 
-    vgic_lock_rank(v_target, rank, flags);
+    vgic_irq_lock(p, flags);
 
     if ( d->is_dying )
     {
@@ -207,7 +203,7 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
         if ( test_bit(_IRQ_INPROGRESS, &desc->status) ||
              !test_bit(_IRQ_DISABLED, &desc->status) )
         {
-            vgic_unlock_rank(v_target, rank, flags);
+            vgic_irq_unlock(p, flags);
             return -EBUSY;
         }
     }
@@ -217,7 +213,7 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
 
     p->desc = NULL;
 
-    vgic_unlock_rank(v_target, rank, flags);
+    vgic_irq_unlock(p, flags);
 
     return 0;
 }
-- 
2.9.0



* [RFC PATCH v2 03/22] ARM: vGIC: move gic_raise_inflight_irq() into vgic_vcpu_inject_irq()
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
  2017-07-21 19:59 ` [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock Andre Przywara
  2017-07-21 19:59 ` [RFC PATCH v2 02/22] ARM: vGIC: route/remove_irq: replace rank lock with IRQ lock Andre Przywara
@ 2017-07-21 19:59 ` Andre Przywara
  2017-08-10 16:28   ` Julien Grall
  2017-07-21 19:59 ` [RFC PATCH v2 04/22] ARM: vGIC: rename pending_irq->priority to cur_priority Andre Przywara
                   ` (18 subsequent siblings)
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Currently there is gic_raise_inflight_irq(), which serves the very
special purpose of handling a newly injected interrupt while an older
one is still being handled. It has only one user, in vgic_vcpu_inject_irq().

Now with the introduction of the pending_irq lock this will later on
result in a nasty deadlock, which can only be solved properly by
embedding the function into its caller (and dropping the lock
in-between).

This has the admittedly hideous consequence of needing to export
gic_update_one_lr(), but that will go away in a later stage of the rework.
In this respect this patch is more of a temporary kludge.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c        | 30 +-----------------------------
 xen/arch/arm/vgic.c       | 11 ++++++++++-
 xen/include/asm-arm/gic.h |  2 +-
 3 files changed, 12 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 2c99d71..5bd66a2 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -44,8 +44,6 @@ static DEFINE_PER_CPU(uint64_t, lr_mask);
 
 #undef GIC_DEBUG
 
-static void gic_update_one_lr(struct vcpu *v, int i);
-
 static const struct gic_hw_operations *gic_hw_ops;
 
 void register_gic_ops(const struct gic_hw_operations *ops)
@@ -416,32 +414,6 @@ void gic_remove_irq_from_queues(struct vcpu *v, struct pending_irq *p)
     gic_remove_from_lr_pending(v, p);
 }
 
-void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
-{
-    struct pending_irq *n = irq_to_pending(v, virtual_irq);
-
-    /* If an LPI has been removed meanwhile, there is nothing left to raise. */
-    if ( unlikely(!n) )
-        return;
-
-    ASSERT(spin_is_locked(&v->arch.vgic.lock));
-
-    /* Don't try to update the LR if the interrupt is disabled */
-    if ( !test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
-        return;
-
-    if ( list_empty(&n->lr_queue) )
-    {
-        if ( v == current )
-            gic_update_one_lr(v, n->lr);
-    }
-#ifdef GIC_DEBUG
-    else
-        gdprintk(XENLOG_DEBUG, "trying to inject irq=%u into d%dv%d, when it is still lr_pending\n",
-                 virtual_irq, v->domain->domain_id, v->vcpu_id);
-#endif
-}
-
 /*
  * Find an unused LR to insert an IRQ into, starting with the LR given
  * by @lr. If this new interrupt is a PRISTINE LPI, scan the other LRs to
@@ -503,7 +475,7 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
     gic_add_to_lr_pending(v, p);
 }
 
-static void gic_update_one_lr(struct vcpu *v, int i)
+void gic_update_one_lr(struct vcpu *v, int i)
 {
     struct pending_irq *p;
     int irq;
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 38dacd3..7b122cd 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -536,7 +536,16 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
 
     if ( !list_empty(&n->inflight) )
     {
-        gic_raise_inflight_irq(v, virq);
+        bool update = test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) &&
+                      list_empty(&n->lr_queue) && (v == current);
+
+        if ( update )
+            gic_update_one_lr(v, n->lr);
+#ifdef GIC_DEBUG
+        else
+            gdprintk(XENLOG_DEBUG, "trying to inject irq=%u into d%dv%d, when it is still lr_pending\n",
+                     n->irq, v->domain->domain_id, v->vcpu_id);
+#endif
         goto out;
     }
 
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 6203dc5..cf8b8fb 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -237,12 +237,12 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
 
 extern void gic_inject(void);
 extern void gic_clear_pending_irqs(struct vcpu *v);
+extern void gic_update_one_lr(struct vcpu *v, int lr);
 extern int gic_events_need_delivery(void);
 
 extern void init_maintenance_interrupt(void);
 extern void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
         unsigned int priority);
-extern void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq);
 extern void gic_remove_from_lr_pending(struct vcpu *v, struct pending_irq *p);
 extern void gic_remove_irq_from_queues(struct vcpu *v, struct pending_irq *p);
 
-- 
2.9.0



* [RFC PATCH v2 04/22] ARM: vGIC: rename pending_irq->priority to cur_priority
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (2 preceding siblings ...)
  2017-07-21 19:59 ` [RFC PATCH v2 03/22] ARM: vGIC: move gic_raise_inflight_irq() into vgic_vcpu_inject_irq() Andre Przywara
@ 2017-07-21 19:59 ` Andre Przywara
  2017-07-21 19:59 ` [RFC PATCH v2 05/22] ARM: vITS: rename pending_irq->lpi_priority to priority Andre Przywara
                   ` (17 subsequent siblings)
  21 siblings, 0 replies; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

In preparation for storing the virtual interrupt priority in struct
pending_irq, rename the existing "priority" member to "cur_priority".
This is to signify that this is the current priority of an interrupt
which has been injected into a VCPU. Once this has happened, its priority
must stay fixed at this value; subsequent MMIO accesses changing the
priority can only affect newly triggered interrupts.
Also, since the priority is a sorting criterion for the inflight list, it
must not change while the IRQ is on a VCPU's list.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic-v2.c      |  2 +-
 xen/arch/arm/gic-v3.c      |  2 +-
 xen/arch/arm/gic.c         | 10 +++++-----
 xen/arch/arm/vgic.c        |  6 +++---
 xen/include/asm-arm/vgic.h |  2 +-
 5 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index cbe71a9..735e23d 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -437,7 +437,7 @@ static void gicv2_update_lr(int lr, const struct pending_irq *p,
     BUG_ON(lr < 0);
 
     lr_reg = (((state & GICH_V2_LR_STATE_MASK) << GICH_V2_LR_STATE_SHIFT)  |
-              ((GIC_PRI_TO_GUEST(p->priority) & GICH_V2_LR_PRIORITY_MASK)
+              ((GIC_PRI_TO_GUEST(p->cur_priority) & GICH_V2_LR_PRIORITY_MASK)
                                              << GICH_V2_LR_PRIORITY_SHIFT) |
               ((p->irq & GICH_V2_LR_VIRTUAL_MASK) << GICH_V2_LR_VIRTUAL_SHIFT));
 
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index f990eae..449bd55 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -961,7 +961,7 @@ static void gicv3_update_lr(int lr, const struct pending_irq *p,
     if ( current->domain->arch.vgic.version == GIC_V3 )
         val |= GICH_LR_GRP1;
 
-    val |= ((uint64_t)p->priority & 0xff) << GICH_LR_PRIORITY_SHIFT;
+    val |= ((uint64_t)p->cur_priority & 0xff) << GICH_LR_PRIORITY_SHIFT;
     val |= ((uint64_t)p->irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT;
 
    if ( p->desc != NULL )
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 5bd66a2..8dec736 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -389,7 +389,7 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
 
     list_for_each_entry ( iter, &v->arch.vgic.lr_pending, lr_queue )
     {
-        if ( iter->priority > n->priority )
+        if ( iter->cur_priority > n->cur_priority )
         {
             list_add_tail(&n->lr_queue, &iter->lr_queue);
             return;
@@ -542,7 +542,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
         if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
              test_bit(GIC_IRQ_GUEST_QUEUED, &p->status) &&
              !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
-            gic_raise_guest_irq(v, irq, p->priority);
+            gic_raise_guest_irq(v, irq, p->cur_priority);
         else {
             list_del_init(&p->inflight);
             /*
@@ -610,7 +610,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
             /* No more free LRs: find a lower priority irq to evict */
             list_for_each_entry_reverse( p_r, inflight_r, inflight )
             {
-                if ( p_r->priority == p->priority )
+                if ( p_r->cur_priority == p->cur_priority )
                     goto out;
                 if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) &&
                      !test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
@@ -676,9 +676,9 @@ int gic_events_need_delivery(void)
      * ordered by priority */
     list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight )
     {
-        if ( GIC_PRI_TO_GUEST(p->priority) >= mask_priority )
+        if ( GIC_PRI_TO_GUEST(p->cur_priority) >= mask_priority )
             goto out;
-        if ( GIC_PRI_TO_GUEST(p->priority) >= active_priority )
+        if ( GIC_PRI_TO_GUEST(p->cur_priority) >= active_priority )
             goto out;
         if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
         {
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7b122cd..21b545e 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -395,7 +395,7 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         p = irq_to_pending(v_target, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-            gic_raise_guest_irq(v_target, irq, p->priority);
+            gic_raise_guest_irq(v_target, irq, p->cur_priority);
         spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
         if ( p->desc != NULL )
         {
@@ -550,7 +550,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
     }
 
     priority = vgic_get_virq_priority(v, virq);
-    n->priority = priority;
+    n->cur_priority = priority;
 
     /* the irq is enabled */
     if ( test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
@@ -558,7 +558,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
 
     list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
     {
-        if ( iter->priority > priority )
+        if ( iter->cur_priority > priority )
         {
             list_add_tail(&n->inflight, &iter->inflight);
             goto out;
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 1c38b9a..0df4ac7 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -78,7 +78,7 @@ struct pending_irq
     unsigned int irq;
 #define GIC_INVALID_LR         (uint8_t)~0
     uint8_t lr;
-    uint8_t priority;
+    uint8_t cur_priority;       /* Holds the priority of an injected IRQ. */
     uint8_t lpi_priority;       /* Caches the priority if this is an LPI. */
     uint8_t lpi_vcpu_id;        /* The VCPU for an LPI. */
     /* inflight is used to append instances of pending_irq to
-- 
2.9.0



* [RFC PATCH v2 05/22] ARM: vITS: rename pending_irq->lpi_priority to priority
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (3 preceding siblings ...)
  2017-07-21 19:59 ` [RFC PATCH v2 04/22] ARM: vGIC: rename pending_irq->priority to cur_priority Andre Przywara
@ 2017-07-21 19:59 ` Andre Przywara
  2017-07-21 19:59 ` [RFC PATCH v2 06/22] ARM: vGIC: introduce locking routines for multiple IRQs Andre Przywara
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Since we will soon store a virtual IRQ's priority in struct pending_irq,
generalise the existing storage for an LPI's priority to cover all IRQs.
This just renames "lpi_priority" to "priority", but doesn't change
anything else yet.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c | 4 ++--
 xen/arch/arm/vgic-v3.c     | 2 +-
 xen/include/asm-arm/vgic.h | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 9ef792f..66095d4 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -419,7 +419,7 @@ static int update_lpi_property(struct domain *d, struct pending_irq *p)
     if ( ret )
         return ret;
 
-    write_atomic(&p->lpi_priority, property & LPI_PROP_PRIO_MASK);
+    write_atomic(&p->priority, property & LPI_PROP_PRIO_MASK);
 
     if ( property & LPI_PROP_ENABLED )
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
@@ -445,7 +445,7 @@ static void update_lpi_vgic_status(struct vcpu *v, struct pending_irq *p)
     {
         if ( !list_empty(&p->inflight) &&
              !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-            gic_raise_guest_irq(v, p->irq, p->lpi_priority);
+            gic_raise_guest_irq(v, p->irq, p->priority);
     }
     else
         gic_remove_from_lr_pending(v, p);
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 48c7682..ad9019e 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1784,7 +1784,7 @@ static int vgic_v3_lpi_get_priority(struct domain *d, uint32_t vlpi)
 
     ASSERT(p);
 
-    return p->lpi_priority;
+    return p->priority;
 }
 
 static const struct vgic_ops v3_ops = {
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 0df4ac7..27b5e37 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -79,7 +79,7 @@ struct pending_irq
 #define GIC_INVALID_LR         (uint8_t)~0
     uint8_t lr;
     uint8_t cur_priority;       /* Holds the priority of an injected IRQ. */
-    uint8_t lpi_priority;       /* Caches the priority if this is an LPI. */
+    uint8_t priority;           /* Holds the priority for any new IRQ. */
     uint8_t lpi_vcpu_id;        /* The VCPU for an LPI. */
     /* inflight is used to append instances of pending_irq to
      * vgic.inflight_irqs */
-- 
2.9.0



* [RFC PATCH v2 06/22] ARM: vGIC: introduce locking routines for multiple IRQs
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (4 preceding siblings ...)
  2017-07-21 19:59 ` [RFC PATCH v2 05/22] ARM: vITS: rename pending_irq->lpi_priority to priority Andre Przywara
@ 2017-07-21 19:59 ` Andre Przywara
  2017-07-21 19:59 ` [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter Andre Przywara
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

When we soon replace the rank lock with individual per-IRQ locks, we will
still need the ability to lock multiple IRQs.
Provide two helper routines which lock and unlock a number of consecutive
IRQs in the right order.
Looking ahead, the locking function fills an array of pending_irq
pointers, so the lookup only has to be done once.
These routines expect that local_irq_save() has been called before the
lock routine and the respective local_irq_restore() after the unlock
function.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic.c        | 20 ++++++++++++++++++++
 xen/include/asm-arm/vgic.h |  4 ++++
 2 files changed, 24 insertions(+)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 21b545e..434b7e2 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -375,6 +375,26 @@ static inline unsigned int vgic_get_virq_type(struct vcpu *v, int n, int index)
         return IRQ_TYPE_LEVEL_HIGH;
 }
 
+void vgic_lock_irqs(struct vcpu *v, unsigned int nrirqs,
+                    unsigned int first_irq, struct pending_irq **pirqs)
+{
+    unsigned int i;
+
+    for ( i = 0; i < nrirqs; i++ )
+    {
+        pirqs[i] = irq_to_pending(v, first_irq + i);
+        spin_lock(&pirqs[i]->lock);
+    }
+}
+
+void vgic_unlock_irqs(struct pending_irq **pirqs, unsigned int nrirqs)
+{
+    int i;
+
+    for ( i = nrirqs - 1; i >= 0; i-- )
+        spin_unlock(&pirqs[i]->lock);
+}
+
 void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
 {
     const unsigned long mask = r;
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 27b5e37..ecf4969 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -194,6 +194,10 @@ static inline int REG_RANK_NR(int b, uint32_t n)
     }
 }
 
+void vgic_lock_irqs(struct vcpu *v, unsigned int nrirqs, unsigned int first_irq,
+                    struct pending_irq **pirqs);
+void vgic_unlock_irqs(struct pending_irq **pirqs, unsigned int nrirqs);
+
 enum gic_sgi_mode;
 
 /*
-- 
2.9.0



* [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (5 preceding siblings ...)
  2017-07-21 19:59 ` [RFC PATCH v2 06/22] ARM: vGIC: introduce locking routines for multiple IRQs Andre Przywara
@ 2017-07-21 19:59 ` Andre Przywara
  2017-08-11 14:10   ` Julien Grall
  2017-07-21 19:59 ` [RFC PATCH v2 08/22] ARM: vGIC: move virtual IRQ priority from rank to pending_irq Andre Przywara
                   ` (14 subsequent siblings)
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Since the GIC's MMIO accesses always cover a number of IRQs at once,
introduce wrapper functions which loop over those IRQs, take their
locks and read or update the priority values.
This will be used in a later patch.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic.c        | 37 +++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/vgic.h |  5 +++++
 2 files changed, 42 insertions(+)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 434b7e2..b2c9632 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -243,6 +243,43 @@ static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
     return ACCESS_ONCE(rank->priority[virq & INTERRUPT_RANK_MASK]);
 }
 
+#define MAX_IRQS_PER_IPRIORITYR 4
+uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
+                                 unsigned int first_irq)
+{
+    struct pending_irq *pirqs[MAX_IRQS_PER_IPRIORITYR];
+    unsigned long flags;
+    uint32_t ret = 0, i;
+
+    local_irq_save(flags);
+    vgic_lock_irqs(v, nrirqs, first_irq, pirqs);
+
+    for ( i = 0; i < nrirqs; i++ )
+        ret |= pirqs[i]->priority << (i * 8);
+
+    vgic_unlock_irqs(pirqs, nrirqs);
+    local_irq_restore(flags);
+
+    return ret;
+}
+
+void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
+                             unsigned int first_irq, uint32_t value)
+{
+    struct pending_irq *pirqs[MAX_IRQS_PER_IPRIORITYR];
+    unsigned long flags;
+    unsigned int i;
+
+    local_irq_save(flags);
+    vgic_lock_irqs(v, nrirqs, first_irq, pirqs);
+
+    for ( i = 0; i < nrirqs; i++, value >>= 8 )
+        pirqs[i]->priority = value & 0xff;
+
+    vgic_unlock_irqs(pirqs, nrirqs);
+    local_irq_restore(flags);
+}
+
 bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
 {
     unsigned long flags;
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index ecf4969..f3791c8 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -198,6 +198,11 @@ void vgic_lock_irqs(struct vcpu *v, unsigned int nrirqs, unsigned int first_irq,
                     struct pending_irq **pirqs);
 void vgic_unlock_irqs(struct pending_irq **pirqs, unsigned int nrirqs);
 
+uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
+                                 unsigned int first_irq);
+void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
+                             unsigned int first_irq, uint32_t reg);
+
 enum gic_sgi_mode;
 
 /*
-- 
2.9.0



* [RFC PATCH v2 08/22] ARM: vGIC: move virtual IRQ priority from rank to pending_irq
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (6 preceding siblings ...)
  2017-07-21 19:59 ` [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter Andre Przywara
@ 2017-07-21 19:59 ` Andre Przywara
  2017-08-11 14:39   ` Julien Grall
  2017-07-21 19:59 ` [RFC PATCH v2 09/22] ARM: vITS: protect LPI priority update with pending_irq lock Andre Przywara
                   ` (13 subsequent siblings)
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

So far a virtual interrupt's priority has been stored in the irq_rank
structure, which covers multiple IRQs and has a single lock for the
whole group.
Generalize the already existing priority variable in struct pending_irq
to cover not only LPIs, but every IRQ. Access to this value is protected
by the per-IRQ lock.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v2.c     | 34 ++++++----------------------------
 xen/arch/arm/vgic-v3.c     | 36 ++++++++----------------------------
 xen/arch/arm/vgic.c        | 41 +++++++++++++++++------------------------
 xen/include/asm-arm/vgic.h | 10 ----------
 4 files changed, 31 insertions(+), 90 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index cf4ab89..ed7ff3b 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -171,6 +171,7 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
     struct vgic_irq_rank *rank;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
     unsigned long flags;
+    unsigned int irq;
 
     perfc_incr(vgicd_reads);
 
@@ -250,22 +251,10 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
         goto read_as_zero;
 
     case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
-    {
-        uint32_t ipriorityr;
-        uint8_t rank_index;
-
         if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 8, gicd_reg - GICD_IPRIORITYR, DABT_WORD);
-        if ( rank == NULL ) goto read_as_zero;
-        rank_index = REG_RANK_INDEX(8, gicd_reg - GICD_IPRIORITYR, DABT_WORD);
-
-        vgic_lock_rank(v, rank, flags);
-        ipriorityr = ACCESS_ONCE(rank->ipriorityr[rank_index]);
-        vgic_unlock_rank(v, rank, flags);
-        *r = vreg_reg32_extract(ipriorityr, info);
-
+        irq = gicd_reg - GICD_IPRIORITYR; /* 8 bits per IRQ, so IRQ == offset */
+        *r = vgic_fetch_irq_priority(v, (dabt.size == DABT_BYTE) ? 1 : 4, irq);
         return 1;
-    }
 
     case VREG32(0x7FC):
         goto read_reserved;
@@ -415,6 +404,7 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
     uint32_t tr;
     unsigned long flags;
+    unsigned int irq;
 
     perfc_incr(vgicd_writes);
 
@@ -498,23 +488,11 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
         goto write_ignore_32;
 
     case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
-    {
-        uint32_t *ipriorityr, priority;
-
         if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 8, gicd_reg - GICD_IPRIORITYR, DABT_WORD);
-        if ( rank == NULL) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        ipriorityr = &rank->ipriorityr[REG_RANK_INDEX(8,
-                                                      gicd_reg - GICD_IPRIORITYR,
-                                                      DABT_WORD)];
-        priority = ACCESS_ONCE(*ipriorityr);
-        vreg_reg32_update(&priority, r, info);
-        ACCESS_ONCE(*ipriorityr) = priority;
 
-        vgic_unlock_rank(v, rank, flags);
+        irq = gicd_reg - GICD_IPRIORITYR; /* 8 bits per IRQ, so IRQ == offset */
+        vgic_store_irq_priority(v, (dabt.size == DABT_BYTE) ? 1 : 4, irq, r);
         return 1;
-    }
 
     case VREG32(0x7FC):
         goto write_reserved;
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index ad9019e..e58e77e 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -677,6 +677,7 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
     struct hsr_dabt dabt = info->dabt;
     struct vgic_irq_rank *rank;
     unsigned long flags;
+    unsigned int irq;
 
     switch ( reg )
     {
@@ -714,23 +715,11 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
         goto read_as_zero;
 
     case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
-    {
-        uint32_t ipriorityr;
-        uint8_t rank_index;
-
         if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 8, reg - GICD_IPRIORITYR, DABT_WORD);
-        if ( rank == NULL ) goto read_as_zero;
-        rank_index = REG_RANK_INDEX(8, reg - GICD_IPRIORITYR, DABT_WORD);
-
-        vgic_lock_rank(v, rank, flags);
-        ipriorityr = ACCESS_ONCE(rank->ipriorityr[rank_index]);
-        vgic_unlock_rank(v, rank, flags);
-
-        *r = vreg_reg32_extract(ipriorityr, info);
-
+        irq = reg - GICD_IPRIORITYR; /* 8 bits per IRQ, so IRQ == offset */
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
+        *r = vgic_fetch_irq_priority(v, (dabt.size == DABT_BYTE) ? 1 : 4, irq);
         return 1;
-    }
 
     case VRANGE32(GICD_ICFGR, GICD_ICFGRN):
     {
@@ -774,6 +763,7 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
     struct vgic_irq_rank *rank;
     uint32_t tr;
     unsigned long flags;
+    unsigned int irq;
 
     switch ( reg )
     {
@@ -831,21 +821,11 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
         goto write_ignore_32;
 
     case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
-    {
-        uint32_t *ipriorityr, priority;
-
         if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 8, reg - GICD_IPRIORITYR, DABT_WORD);
-        if ( rank == NULL ) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        ipriorityr = &rank->ipriorityr[REG_RANK_INDEX(8, reg - GICD_IPRIORITYR,
-                                                      DABT_WORD)];
-        priority = ACCESS_ONCE(*ipriorityr);
-        vreg_reg32_update(&priority, r, info);
-        ACCESS_ONCE(*ipriorityr) = priority;
-        vgic_unlock_rank(v, rank, flags);
+        irq = reg - GICD_IPRIORITYR; /* 8 bits per IRQ, so IRQ == offset */
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
+        vgic_store_irq_priority(v, (dabt.size == DABT_BYTE) ? 1 : 4, irq, r);
         return 1;
-    }
 
     case VREG32(GICD_ICFGR): /* Restricted to configure SGIs */
         goto write_ignore_32;
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index b2c9632..ddcd99b 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -231,18 +231,6 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
     return v->domain->vcpu[target];
 }
 
-static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
-{
-    struct vgic_irq_rank *rank;
-
-    /* LPIs don't have a rank, also store their priority separately. */
-    if ( is_lpi(virq) )
-        return v->domain->arch.vgic.handler->lpi_get_priority(v->domain, virq);
-
-    rank = vgic_rank_irq(v, virq);
-    return ACCESS_ONCE(rank->priority[virq & INTERRUPT_RANK_MASK]);
-}
-
 #define MAX_IRQS_PER_IPRIORITYR 4
 uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
                                  unsigned int first_irq)
@@ -567,37 +555,40 @@ void vgic_clear_pending_irqs(struct vcpu *v)
 
 void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
 {
-    uint8_t priority;
     struct pending_irq *iter, *n;
-    unsigned long flags;
+    unsigned long flags, vcpu_flags;
     bool running;
 
-    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+    spin_lock_irqsave(&v->arch.vgic.lock, vcpu_flags);
 
     n = irq_to_pending(v, virq);
     /* If an LPI has been removed, there is nothing to inject here. */
     if ( unlikely(!n) )
     {
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, vcpu_flags);
         return;
     }
 
     /* vcpu offline */
     if ( test_bit(_VPF_down, &v->pause_flags) )
     {
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, vcpu_flags);
         return;
     }
 
+    vgic_irq_lock(n, flags);
+
     set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
 
     if ( !list_empty(&n->inflight) )
     {
         bool update = test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) &&
                       list_empty(&n->lr_queue) && (v == current);
+        int lr = ACCESS_ONCE(n->lr);
 
+        vgic_irq_unlock(n, flags);
         if ( update )
-            gic_update_one_lr(v, n->lr);
+            gic_update_one_lr(v, lr);
 #ifdef GIC_DEBUG
         else
             gdprintk(XENLOG_DEBUG, "trying to inject irq=%u into d%dv%d, when it is still lr_pending\n",
@@ -606,24 +597,26 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
         goto out;
     }
 
-    priority = vgic_get_virq_priority(v, virq);
-    n->cur_priority = priority;
+    n->cur_priority = n->priority;
 
     /* the irq is enabled */
     if ( test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
-        gic_raise_guest_irq(v, virq, priority);
+        gic_raise_guest_irq(v, virq, n->cur_priority);
 
     list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
     {
-        if ( iter->cur_priority > priority )
+        if ( iter->cur_priority > n->cur_priority )
         {
             list_add_tail(&n->inflight, &iter->inflight);
-            goto out;
+            goto out_unlock_irq;
         }
     }
     list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
+
+out_unlock_irq:
+    vgic_irq_unlock(n, flags);
 out:
-    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+    spin_unlock_irqrestore(&v->arch.vgic.lock, vcpu_flags);
     /* we have a new higher priority irq, inject it into the guest */
     running = v->is_running;
     vcpu_unblock(v);
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index f3791c8..59d52c6 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -113,16 +113,6 @@ struct vgic_irq_rank {
     uint32_t icfg[2];
 
     /*
-     * Provide efficient access to the priority of an vIRQ while keeping
-     * the emulation simple.
-     * Note, this is working fine as long as Xen is using little endian.
-     */
-    union {
-        uint8_t priority[32];
-        uint32_t ipriorityr[8];
-    };
-
-    /*
      * It's more convenient to store a target VCPU per vIRQ
      * than the register ITARGETSR/IROUTER itself.
      * Use atomic operations to read/write the vcpu fields to avoid
-- 
2.9.0



* [RFC PATCH v2 09/22] ARM: vITS: protect LPI priority update with pending_irq lock
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (7 preceding siblings ...)
  2017-07-21 19:59 ` [RFC PATCH v2 08/22] ARM: vGIC: move virtual IRQ priority from rank to pending_irq Andre Przywara
@ 2017-07-21 19:59 ` Andre Przywara
  2017-08-11 14:43   ` Julien Grall
  2017-07-21 19:59 ` [RFC PATCH v2 10/22] ARM: vGIC: protect gic_set_lr() " Andre Przywara
                   ` (12 subsequent siblings)
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

As the priority value is now officially a member of struct pending_irq,
we need to take its lock when manipulating it via ITS commands.
Make sure we take the IRQ lock after the VCPU lock when we need both.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 66095d4..705708a 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -402,6 +402,7 @@ static int update_lpi_property(struct domain *d, struct pending_irq *p)
     uint8_t property;
     int ret;
 
+    ASSERT(spin_is_locked(&p->lock));
     /*
      * If no redistributor has its LPIs enabled yet, we can't access the
      * property table. In this case we just can't update the properties,
@@ -419,7 +420,7 @@ static int update_lpi_property(struct domain *d, struct pending_irq *p)
     if ( ret )
         return ret;
 
-    write_atomic(&p->priority, property & LPI_PROP_PRIO_MASK);
+    p->priority = property & LPI_PROP_PRIO_MASK;
 
     if ( property & LPI_PROP_ENABLED )
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
@@ -457,7 +458,7 @@ static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
     uint32_t devid = its_cmd_get_deviceid(cmdptr);
     uint32_t eventid = its_cmd_get_id(cmdptr);
     struct pending_irq *p;
-    unsigned long flags;
+    unsigned long flags, vcpu_flags;
     struct vcpu *vcpu;
     uint32_t vlpi;
     int ret = -1;
@@ -485,7 +486,8 @@ static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
     if ( unlikely(!p) )
         goto out_unlock_its;
 
-    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+    spin_lock_irqsave(&vcpu->arch.vgic.lock, vcpu_flags);
+    vgic_irq_lock(p, flags);
 
     /* Read the property table and update our cached status. */
     if ( update_lpi_property(d, p) )
@@ -497,7 +499,8 @@ static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
     ret = 0;
 
 out_unlock:
-    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+    vgic_irq_unlock(p, flags);
+    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, vcpu_flags);
 
 out_unlock_its:
     spin_unlock(&its->its_lock);
@@ -517,7 +520,7 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
     struct pending_irq *pirqs[16];
     uint64_t vlpi = 0;          /* 64-bit to catch overflows */
     unsigned int nr_lpis, i;
-    unsigned long flags;
+    unsigned long flags, vcpu_flags;
     int ret = 0;
 
     /*
@@ -542,7 +545,7 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
     vcpu = get_vcpu_from_collection(its, collid);
     spin_unlock(&its->its_lock);
 
-    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+    spin_lock_irqsave(&vcpu->arch.vgic.lock, vcpu_flags);
     read_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
 
     do
@@ -555,9 +558,13 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
 
         for ( i = 0; i < nr_lpis; i++ )
         {
+            vgic_irq_lock(pirqs[i], flags);
             /* We only care about LPIs on our VCPU. */
             if ( pirqs[i]->lpi_vcpu_id != vcpu->vcpu_id )
+            {
+                vgic_irq_unlock(pirqs[i], flags);
                 continue;
+            }
 
             vlpi = pirqs[i]->irq;
             /* If that fails for a single LPI, carry on to handle the rest. */
@@ -566,6 +573,8 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
                 update_lpi_vgic_status(vcpu, pirqs[i]);
             else
                 ret = err;
+
+            vgic_irq_unlock(pirqs[i], flags);
         }
     /*
      * Loop over the next gang of pending_irqs until we reached the end of
@@ -576,7 +585,7 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
               (nr_lpis == ARRAY_SIZE(pirqs)) );
 
     read_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
-    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, vcpu_flags);
 
     return ret;
 }
@@ -712,6 +721,7 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
     uint32_t intid = its_cmd_get_physical_id(cmdptr), _intid;
     uint16_t collid = its_cmd_get_collection(cmdptr);
     struct pending_irq *pirq;
+    unsigned long flags;
     struct vcpu *vcpu = NULL;
     int ret = -1;
 
@@ -765,7 +775,9 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
      * We don't need the VGIC VCPU lock here, because the pending_irq isn't
      * in the radix tree yet.
      */
+    vgic_irq_lock(pirq, flags);
     ret = update_lpi_property(its->d, pirq);
+    vgic_irq_unlock(pirq, flags);
     if ( ret )
         goto out_remove_host_entry;
 
-- 
2.9.0



* [RFC PATCH v2 10/22] ARM: vGIC: protect gic_set_lr() with pending_irq lock
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (8 preceding siblings ...)
  2017-07-21 19:59 ` [RFC PATCH v2 09/22] ARM: vITS: protect LPI priority update with pending_irq lock Andre Przywara
@ 2017-07-21 19:59 ` Andre Przywara
  2017-08-15 10:59   ` Julien Grall
  2017-07-21 19:59 ` [RFC PATCH v2 11/22] ARM: vGIC: protect gic_events_need_delivery() " Andre Przywara
                   ` (11 subsequent siblings)
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

When putting a (pending) IRQ into an LR, we had better make sure that
no one changes it behind our back, so take the pending_irq lock.
This bubbles up to all users of gic_add_to_lr_pending() and
gic_raise_guest_irq().

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 8dec736..df89530 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -383,6 +383,7 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
     struct pending_irq *iter;
 
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
+    ASSERT(spin_is_locked(&n->lock));
 
     if ( !list_empty(&n->lr_queue) )
         return;
@@ -480,6 +481,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
     struct pending_irq *p;
     int irq;
     struct gic_lr lr_val;
+    unsigned long flags;
 
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
     ASSERT(!local_irq_is_enabled());
@@ -534,6 +536,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
         gic_hw_ops->clear_lr(i);
         clear_bit(i, &this_cpu(lr_mask));
 
+        vgic_irq_lock(p, flags);
         if ( p->desc != NULL )
             clear_bit(_IRQ_INPROGRESS, &p->desc->status);
         clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
@@ -559,6 +562,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
                 clear_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
             }
         }
+        vgic_irq_unlock(p, flags);
     }
 }
 
@@ -592,11 +596,11 @@ static void gic_restore_pending_irqs(struct vcpu *v)
     int lr = 0;
     struct pending_irq *p, *t, *p_r;
     struct list_head *inflight_r;
-    unsigned long flags;
+    unsigned long flags, vcpu_flags;
     unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
     int lrs = nr_lrs;
 
-    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+    spin_lock_irqsave(&v->arch.vgic.lock, vcpu_flags);
 
     if ( list_empty(&v->arch.vgic.lr_pending) )
         goto out;
@@ -621,16 +625,20 @@ static void gic_restore_pending_irqs(struct vcpu *v)
             goto out;
 
 found:
+            vgic_irq_lock(p_r, flags);
             lr = p_r->lr;
             p_r->lr = GIC_INVALID_LR;
             set_bit(GIC_IRQ_GUEST_QUEUED, &p_r->status);
             clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
             gic_add_to_lr_pending(v, p_r);
             inflight_r = &p_r->inflight;
+            vgic_irq_unlock(p_r, flags);
         }
 
+        vgic_irq_lock(p, flags);
         gic_set_lr(lr, p, GICH_LR_PENDING);
         list_del_init(&p->lr_queue);
+        vgic_irq_unlock(p, flags);
         set_bit(lr, &this_cpu(lr_mask));
 
         /* We can only evict nr_lrs entries */
@@ -640,7 +648,7 @@ found:
     }
 
 out:
-    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+    spin_unlock_irqrestore(&v->arch.vgic.lock, vcpu_flags);
 }
 
 void gic_clear_pending_irqs(struct vcpu *v)
-- 
2.9.0



* [RFC PATCH v2 11/22] ARM: vGIC: protect gic_events_need_delivery() with pending_irq lock
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (9 preceding siblings ...)
  2017-07-21 19:59 ` [RFC PATCH v2 10/22] ARM: vGIC: protect gic_set_lr() " Andre Przywara
@ 2017-07-21 19:59 ` Andre Przywara
  2017-08-15 11:11   ` Julien Grall
  2017-07-21 20:00 ` [RFC PATCH v2 12/22] ARM: vGIC: protect gic_update_one_lr() " Andre Przywara
                   ` (10 subsequent siblings)
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 19:59 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

gic_events_need_delivery() reads the cur_priority field twice and also
relies on the consistency of the status bits, so it should take the
pending_irq lock.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index df89530..9637682 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -666,7 +666,7 @@ int gic_events_need_delivery(void)
 {
     struct vcpu *v = current;
     struct pending_irq *p;
-    unsigned long flags;
+    unsigned long flags, vcpu_flags;
     const unsigned long apr = gic_hw_ops->read_apr(0);
     int mask_priority;
     int active_priority;
@@ -675,7 +675,7 @@ int gic_events_need_delivery(void)
     mask_priority = gic_hw_ops->read_vmcr_priority();
     active_priority = find_next_bit(&apr, 32, 0);
 
-    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+    spin_lock_irqsave(&v->arch.vgic.lock, vcpu_flags);
 
     /* TODO: We order the guest irqs by priority, but we don't change
      * the priority of host irqs. */
@@ -684,19 +684,21 @@ int gic_events_need_delivery(void)
      * ordered by priority */
     list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight )
     {
-        if ( GIC_PRI_TO_GUEST(p->cur_priority) >= mask_priority )
-            goto out;
-        if ( GIC_PRI_TO_GUEST(p->cur_priority) >= active_priority )
-            goto out;
-        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
+        vgic_irq_lock(p, flags);
+        if ( GIC_PRI_TO_GUEST(p->cur_priority) < mask_priority &&
+             GIC_PRI_TO_GUEST(p->cur_priority) < active_priority &&
+             !test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
         {
-            rc = 1;
-            goto out;
+            vgic_irq_unlock(p, flags);
+            continue;
         }
+
+        rc = GIC_PRI_TO_GUEST(p->cur_priority) < min(mask_priority, active_priority);
+        vgic_irq_unlock(p, flags);
+        break;
     }
 
-out:
-    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+    spin_unlock_irqrestore(&v->arch.vgic.lock, vcpu_flags);
     return rc;
 }
 
-- 
2.9.0



* [RFC PATCH v2 12/22] ARM: vGIC: protect gic_update_one_lr() with pending_irq lock
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (10 preceding siblings ...)
  2017-07-21 19:59 ` [RFC PATCH v2 11/22] ARM: vGIC: protect gic_events_need_delivery() " Andre Przywara
@ 2017-07-21 20:00 ` Andre Przywara
  2017-08-15 11:17   ` Julien Grall
  2017-07-21 20:00 ` [RFC PATCH v2 13/22] ARM: vITS: remove no longer needed lpi_priority wrapper Andre Przywara
                   ` (9 subsequent siblings)
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 20:00 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

When we return from a domain with the active bit set in an LR,
we update our pending_irq accordingly. This touches multiple status
bits, so requires the pending_irq lock.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 9637682..84b282b 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -508,6 +508,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
 
     if ( lr_val.state & GICH_LR_ACTIVE )
     {
+        vgic_irq_lock(p, flags);
         set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
         if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
              test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
@@ -521,6 +522,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
                 gdprintk(XENLOG_WARNING, "unable to inject hw irq=%d into d%dv%d: already active in LR%d\n",
                          irq, v->domain->domain_id, v->vcpu_id, i);
         }
+        vgic_irq_unlock(p, flags);
     }
     else if ( lr_val.state & GICH_LR_PENDING )
     {
-- 
2.9.0



* [RFC PATCH v2 13/22] ARM: vITS: remove no longer needed lpi_priority wrapper
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (11 preceding siblings ...)
  2017-07-21 20:00 ` [RFC PATCH v2 12/22] ARM: vGIC: protect gic_update_one_lr() " Andre Przywara
@ 2017-07-21 20:00 ` Andre Przywara
  2017-08-15 12:31   ` Julien Grall
  2017-07-21 20:00 ` [RFC PATCH v2 14/22] ARM: vGIC: move virtual IRQ configuration from rank to pending_irq Andre Przywara
                   ` (8 subsequent siblings)
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 20:00 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

For LPIs we stored the priority value in struct pending_irq, but all
other types of IRQs were using the irq_rank structure for that.
Now that every IRQ uses pending_irq, we can remove the special handling
we had in place for LPIs and just use the now unified access wrappers.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v2.c     |  7 -------
 xen/arch/arm/vgic-v3.c     | 11 -----------
 xen/include/asm-arm/vgic.h |  1 -
 3 files changed, 19 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index ed7ff3b..a3fd500 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -690,18 +690,11 @@ static struct pending_irq *vgic_v2_lpi_to_pending(struct domain *d,
     BUG();
 }
 
-static int vgic_v2_lpi_get_priority(struct domain *d, unsigned int vlpi)
-{
-    /* Dummy function, no LPIs on a VGICv2. */
-    BUG();
-}
-
 static const struct vgic_ops vgic_v2_ops = {
     .vcpu_init   = vgic_v2_vcpu_init,
     .domain_init = vgic_v2_domain_init,
     .domain_free = vgic_v2_domain_free,
     .lpi_to_pending = vgic_v2_lpi_to_pending,
-    .lpi_get_priority = vgic_v2_lpi_get_priority,
     .max_vcpus = 8,
 };
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index e58e77e..d3356ae 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1757,23 +1757,12 @@ static struct pending_irq *vgic_v3_lpi_to_pending(struct domain *d,
     return pirq;
 }
 
-/* Retrieve the priority of an LPI from its struct pending_irq. */
-static int vgic_v3_lpi_get_priority(struct domain *d, uint32_t vlpi)
-{
-    struct pending_irq *p = vgic_v3_lpi_to_pending(d, vlpi);
-
-    ASSERT(p);
-
-    return p->priority;
-}
-
 static const struct vgic_ops v3_ops = {
     .vcpu_init   = vgic_v3_vcpu_init,
     .domain_init = vgic_v3_domain_init,
     .domain_free = vgic_v3_domain_free,
     .emulate_reg  = vgic_v3_emulate_reg,
     .lpi_to_pending = vgic_v3_lpi_to_pending,
-    .lpi_get_priority = vgic_v3_lpi_get_priority,
     /*
      * We use both AFF1 and AFF0 in (v)MPIDR. Thus, the max number of CPU
      * that can be supported is up to 4096(==256*16) in theory.
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 59d52c6..6343c95 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -143,7 +143,6 @@ struct vgic_ops {
     bool (*emulate_reg)(struct cpu_user_regs *regs, union hsr hsr);
     /* lookup the struct pending_irq for a given LPI interrupt */
     struct pending_irq *(*lpi_to_pending)(struct domain *d, unsigned int vlpi);
-    int (*lpi_get_priority)(struct domain *d, uint32_t vlpi);
     /* Maximum number of vCPU supported */
     const unsigned int max_vcpus;
 };
-- 
2.9.0



* [RFC PATCH v2 14/22] ARM: vGIC: move virtual IRQ configuration from rank to pending_irq
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (12 preceding siblings ...)
  2017-07-21 20:00 ` [RFC PATCH v2 13/22] ARM: vITS: remove no longer needed lpi_priority wrapper Andre Przywara
@ 2017-07-21 20:00 ` Andre Przywara
  2017-08-16 11:13   ` Julien Grall
  2017-07-21 20:00 ` [RFC PATCH v2 15/22] ARM: vGIC: rework vgic_get_target_vcpu to take a pending_irq Andre Przywara
                   ` (7 subsequent siblings)
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 20:00 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

The IRQ configuration (level or edge triggered) for a group of IRQs
is still stored in the irq_rank structure.
Introduce a new bit called GIC_IRQ_GUEST_LEVEL in the "status" field,
which holds that information.
Remove the storage from the irq_rank and use the existing wrappers to
store and retrieve the configuration bit for multiple IRQs.
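The helpers introduced below pack the trigger type as two bits per IRQ into an ICFGR-style word. A minimal standalone sketch of that encoding, mirroring what vgic_fetch_irq_config/vgic_store_irq_config do in this patch (the names and the bool-array representation here are illustrative, not the Xen code):

```c
#include <stdbool.h>
#include <stdint.h>

#define IRQS_PER_CFGR 16

/* Pack 16 per-IRQ trigger flags (true = level, false = edge) into one
 * ICFGR word, 2 bits per IRQ. Mirrors vgic_fetch_irq_config in this
 * patch: level -> 0b01, edge -> 0b11 in the corresponding field. */
static uint32_t pack_icfgr(const bool level[IRQS_PER_CFGR])
{
    uint32_t ret = 0;

    for (unsigned int i = 0; i < IRQS_PER_CFGR; i++)
        ret |= (level[i] ? 1u : 3u) << (i * 2);

    return ret;
}

/* Decode one 2-bit field back to a trigger type, the same way
 * vgic_store_irq_config does: field values above 1 mean edge. */
static bool icfgr_field_is_level(uint32_t icfgr, unsigned int i)
{
    return ((icfgr >> (i * 2)) & 0x3) <= 1;
}
```

This also makes the register arithmetic in the MMIO handlers visible: with 2 bits per IRQ, one 32-bit ICFGR covers 16 IRQs, hence `irq = (gicd_reg - GICD_ICFGR) * 4` for a byte offset.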

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v2.c     | 21 +++---------
 xen/arch/arm/vgic-v3.c     | 25 ++++----------
 xen/arch/arm/vgic.c        | 81 +++++++++++++++++++++++++++++++++-------------
 xen/include/asm-arm/vgic.h |  5 ++-
 4 files changed, 73 insertions(+), 59 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index a3fd500..0c8a598 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -278,20 +278,12 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
         goto read_reserved;
 
     case VRANGE32(GICD_ICFGR, GICD_ICFGRN):
-    {
-        uint32_t icfgr;
-
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 2, gicd_reg - GICD_ICFGR, DABT_WORD);
-        if ( rank == NULL) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        icfgr = rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR, DABT_WORD)];
-        vgic_unlock_rank(v, rank, flags);
 
-        *r = vreg_reg32_extract(icfgr, info);
+        irq = (gicd_reg - GICD_ICFGR) * 4;
+        *r = vgic_fetch_irq_config(v, irq);
 
         return 1;
-    }
 
     case VRANGE32(0xD00, 0xDFC):
         goto read_impl_defined;
@@ -529,13 +521,8 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
 
     case VRANGE32(GICD_ICFGR2, GICD_ICFGRN): /* SPIs */
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 2, gicd_reg - GICD_ICFGR, DABT_WORD);
-        if ( rank == NULL) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        vreg_reg32_update(&rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR,
-                                                     DABT_WORD)],
-                          r, info);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (gicd_reg - GICD_ICFGR) * 4; /* 2 bit per IRQ */
+        vgic_store_irq_config(v, irq, r);
         return 1;
 
     case VRANGE32(0xD00, 0xDFC):
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index d3356ae..e9e36eb 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -722,20 +722,11 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
         return 1;
 
     case VRANGE32(GICD_ICFGR, GICD_ICFGRN):
-    {
-        uint32_t icfgr;
-
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 2, reg - GICD_ICFGR, DABT_WORD);
-        if ( rank == NULL ) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        icfgr = rank->icfg[REG_RANK_INDEX(2, reg - GICD_ICFGR, DABT_WORD)];
-        vgic_unlock_rank(v, rank, flags);
-
-        *r = vreg_reg32_extract(icfgr, info);
-
+        irq = (reg - GICD_ICFGR) * 4;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
+        *r = vgic_fetch_irq_config(v, irq);
         return 1;
-    }
 
     default:
         printk(XENLOG_G_ERR
@@ -834,13 +825,9 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
         /* ICFGR1 for PPI's, which is implementation defined
            if ICFGR1 is programmable or not. We chose to program */
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 2, reg - GICD_ICFGR, DABT_WORD);
-        if ( rank == NULL ) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        vreg_reg32_update(&rank->icfg[REG_RANK_INDEX(2, reg - GICD_ICFGR,
-                                                     DABT_WORD)],
-                          r, info);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (reg - GICD_ICFGR) * 4;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
+        vgic_store_irq_config(v, irq, r);
         return 1;
 
     default:
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index ddcd99b..e5a4765 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -268,6 +268,55 @@ void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
     local_irq_restore(flags);
 }
 
+#define IRQS_PER_CFGR   16
+/**
+ * vgic_fetch_irq_config: assemble the configuration bits for a group of 16 IRQs
+ * @v: the VCPU for private IRQs, any VCPU of a domain for SPIs
+ * @first_irq: the first IRQ to be queried, must be aligned to 16
+ */
+uint32_t vgic_fetch_irq_config(struct vcpu *v, unsigned int first_irq)
+{
+    struct pending_irq *pirqs[IRQS_PER_CFGR];
+    unsigned long flags;
+    uint32_t ret = 0, i;
+
+    local_irq_save(flags);
+    vgic_lock_irqs(v, IRQS_PER_CFGR, first_irq, pirqs);
+
+    for ( i = 0; i < IRQS_PER_CFGR; i++ )
+        if ( test_bit(GIC_IRQ_GUEST_LEVEL, &pirqs[i]->status) )
+            ret |= 1 << (i * 2);
+        else
+            ret |= 3 << (i * 2);
+
+    vgic_unlock_irqs(pirqs, IRQS_PER_CFGR);
+    local_irq_restore(flags);
+
+    return ret;
+}
+
+void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
+                           uint32_t value)
+{
+    struct pending_irq *pirqs[IRQS_PER_CFGR];
+    unsigned long flags;
+    unsigned int i;
+
+    local_irq_save(flags);
+    vgic_lock_irqs(v, IRQS_PER_CFGR, first_irq, pirqs);
+
+    for ( i = 0; i < IRQS_PER_CFGR; i++, value >>= 2 )
+    {
+        if ( (value & 0x3) > 1 )
+            clear_bit(GIC_IRQ_GUEST_LEVEL, &pirqs[i]->status);
+        else
+            set_bit(GIC_IRQ_GUEST_LEVEL, &pirqs[i]->status);
+    }
+
+    vgic_unlock_irqs(pirqs, IRQS_PER_CFGR);
+    local_irq_restore(flags);
+}
+
 bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
 {
     unsigned long flags;
@@ -384,22 +433,6 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
     }
 }
 
-#define VGIC_ICFG_MASK(intr) (1 << ((2 * ((intr) % 16)) + 1))
-
-/* The function should be called with the rank lock taken */
-static inline unsigned int vgic_get_virq_type(struct vcpu *v, int n, int index)
-{
-    struct vgic_irq_rank *r = vgic_get_rank(v, n);
-    uint32_t tr = r->icfg[index >> 4];
-
-    ASSERT(spin_is_locked(&r->lock));
-
-    if ( tr & VGIC_ICFG_MASK(index) )
-        return IRQ_TYPE_EDGE_RISING;
-    else
-        return IRQ_TYPE_LEVEL_HIGH;
-}
-
 void vgic_lock_irqs(struct vcpu *v, unsigned int nrirqs,
                     unsigned int first_irq, struct pending_irq **pirqs)
 {
@@ -424,8 +457,8 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
 {
     const unsigned long mask = r;
     struct pending_irq *p;
-    unsigned int irq;
-    unsigned long flags;
+    unsigned int irq, int_type;
+    unsigned long flags, vcpu_flags;
     int i = 0;
     struct vcpu *v_target;
     struct domain *d = v->domain;
@@ -436,23 +469,27 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         v_target = vgic_get_target_vcpu(v, irq);
-        spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
+        spin_lock_irqsave(&v_target->arch.vgic.lock, vcpu_flags);
         p = irq_to_pending(v_target, irq);
+        vgic_irq_lock(p, flags);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+        int_type = test_bit(GIC_IRQ_GUEST_LEVEL, &p->status) ?
+                            IRQ_TYPE_LEVEL_HIGH : IRQ_TYPE_EDGE_RISING;
         if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
             gic_raise_guest_irq(v_target, irq, p->cur_priority);
-        spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
+        vgic_irq_unlock(p, flags);
+        spin_unlock_irqrestore(&v_target->arch.vgic.lock, vcpu_flags);
         if ( p->desc != NULL )
         {
-            irq_set_affinity(p->desc, cpumask_of(v_target->processor));
             spin_lock_irqsave(&p->desc->lock, flags);
+            irq_set_affinity(p->desc, cpumask_of(v_target->processor));
             /*
              * The irq cannot be a PPI, we only support delivery of SPIs
              * to guests.
              */
             ASSERT(irq >= 32);
             if ( irq_type_set_by_domain(d) )
-                gic_set_irq_type(p->desc, vgic_get_virq_type(v, n, i));
+                gic_set_irq_type(p->desc, int_type);
             p->desc->handler->enable(p->desc);
             spin_unlock_irqrestore(&p->desc->lock, flags);
         }
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 6343c95..14c22b2 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -73,6 +73,7 @@ struct pending_irq
 #define GIC_IRQ_GUEST_ENABLED  3
 #define GIC_IRQ_GUEST_MIGRATING   4
 #define GIC_IRQ_GUEST_PRISTINE_LPI  5
+#define GIC_IRQ_GUEST_LEVEL    6
     unsigned long status;
     struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
     unsigned int irq;
@@ -110,7 +111,6 @@ struct vgic_irq_rank {
     uint8_t index;
 
     uint32_t ienable;
-    uint32_t icfg[2];
 
     /*
      * It's more convenient to store a target VCPU per vIRQ
@@ -191,6 +191,9 @@ uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
                                  unsigned int first_irq);
 void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
                              unsigned int first_irq, uint32_t reg);
+uint32_t vgic_fetch_irq_config(struct vcpu *v, unsigned int first_irq);
+void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
+                           uint32_t reg);
 
 enum gic_sgi_mode;
 
-- 
2.9.0




* [RFC PATCH v2 15/22] ARM: vGIC: rework vgic_get_target_vcpu to take a pending_irq
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (13 preceding siblings ...)
  2017-07-21 20:00 ` [RFC PATCH v2 14/22] ARM: vGIC: move virtual IRQ configuration from rank to pending_irq Andre Przywara
@ 2017-07-21 20:00 ` Andre Przywara
  2017-07-21 20:00 ` [RFC PATCH v2 16/22] ARM: vITS: rename lpi_vcpu_id to vcpu_id Andre Przywara
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 20:00 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Currently vgic_get_target_vcpu takes a VCPU and an IRQ number, because
that is what is needed to find the proper rank and the target VCPU stored
in it. In the future the target VCPU will be looked up in the struct
pending_irq instead.
To avoid locking issues, let's pass the pointer to the pending_irq
directly. We can read the IRQ number from there, and all but one caller
already have that pointer anyway.
This simplifies future code changes.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c         |  2 +-
 xen/arch/arm/vgic.c        | 22 ++++++++++++----------
 xen/include/asm-arm/vgic.h |  2 +-
 3 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 84b282b..38e998a 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -559,7 +559,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
             smp_wmb();
             if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
             {
-                struct vcpu *v_target = vgic_get_target_vcpu(v, irq);
+                struct vcpu *v_target = vgic_get_target_vcpu(v, p);
                 irq_set_affinity(p->desc, cpumask_of(v_target->processor));
                 clear_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
             }
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index e5a4765..6722924 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -224,10 +224,11 @@ int vcpu_vgic_free(struct vcpu *v)
     return 0;
 }
 
-struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
+struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p)
 {
-    struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
-    int target = read_atomic(&rank->vcpu[virq & INTERRUPT_RANK_MASK]);
+    struct vgic_irq_rank *rank = vgic_rank_irq(v, p->irq);
+    int target = read_atomic(&rank->vcpu[p->irq & INTERRUPT_RANK_MASK]);
+
     return v->domain->vcpu[target];
 }
 
@@ -391,8 +392,8 @@ void arch_move_irqs(struct vcpu *v)
 
     for ( i = 32; i < vgic_num_irqs(d); i++ )
     {
-        v_target = vgic_get_target_vcpu(v, i);
-        p = irq_to_pending(v_target, i);
+        p = irq_to_pending(v, i);
+        v_target = vgic_get_target_vcpu(v, p);
 
         if ( v_target == v && !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
             irq_set_affinity(p->desc, cpu_mask);
@@ -414,10 +415,10 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
-        v_target = vgic_get_target_vcpu(v, irq);
+        p = irq_to_pending(v, irq);
+        v_target = vgic_get_target_vcpu(v, p);
 
         spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
-        p = irq_to_pending(v_target, irq);
         clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         gic_remove_from_lr_pending(v_target, p);
         desc = p->desc;
@@ -468,9 +469,9 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
-        v_target = vgic_get_target_vcpu(v, irq);
+        p = irq_to_pending(v, irq);
+        v_target = vgic_get_target_vcpu(v, p);
         spin_lock_irqsave(&v_target->arch.vgic.lock, vcpu_flags);
-        p = irq_to_pending(v_target, irq);
         vgic_irq_lock(p, flags);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         int_type = test_bit(GIC_IRQ_GUEST_LEVEL, &p->status) ?
@@ -666,12 +667,13 @@ out:
 
 void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq)
 {
+    struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
     struct vcpu *v;
 
     /* the IRQ needs to be an SPI */
     ASSERT(virq >= 32 && virq <= vgic_num_irqs(d));
 
-    v = vgic_get_target_vcpu(d->vcpu[0], virq);
+    v = vgic_get_target_vcpu(d->vcpu[0], p);
     vgic_vcpu_inject_irq(v, virq);
 }
 
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 14c22b2..7c6067d 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -213,7 +213,7 @@ enum gic_sgi_mode;
 extern int domain_vgic_init(struct domain *d, unsigned int nr_spis);
 extern void domain_vgic_free(struct domain *d);
 extern int vcpu_vgic_init(struct vcpu *v);
-extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq);
+extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p);
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
-- 
2.9.0




* [RFC PATCH v2 16/22] ARM: vITS: rename lpi_vcpu_id to vcpu_id
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (14 preceding siblings ...)
  2017-07-21 20:00 ` [RFC PATCH v2 15/22] ARM: vGIC: rework vgic_get_target_vcpu to take a pending_irq Andre Przywara
@ 2017-07-21 20:00 ` Andre Przywara
  2017-07-21 20:00 ` [RFC PATCH v2 17/22] ARM: vGIC: introduce vgic_lock_vcpu_irq() Andre Przywara
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 20:00 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Since we will soon store a virtual IRQ's target VCPU in struct pending_irq,
generalise the existing storage for an LPI's target to cover all IRQs.
This just renames "lpi_vcpu_id" to "vcpu_id", but doesn't change anything
else yet.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic-v3-lpi.c  | 2 +-
 xen/arch/arm/vgic-v3-its.c | 7 +++----
 xen/arch/arm/vgic.c        | 6 +++---
 xen/include/asm-arm/vgic.h | 2 +-
 4 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index c3474f5..2306b58 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -149,7 +149,7 @@ void vgic_vcpu_inject_lpi(struct domain *d, unsigned int virq)
     if ( !p )
         return;
 
-    vcpu_id = ACCESS_ONCE(p->lpi_vcpu_id);
+    vcpu_id = ACCESS_ONCE(p->vcpu_id);
     if ( vcpu_id >= d->max_vcpus )
           return;
 
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 705708a..682ce10 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -560,7 +560,7 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
         {
             vgic_irq_lock(pirqs[i], flags);
             /* We only care about LPIs on our VCPU. */
-            if ( pirqs[i]->lpi_vcpu_id != vcpu->vcpu_id )
+            if ( pirqs[i]->vcpu_id != vcpu->vcpu_id )
             {
                 vgic_irq_unlock(pirqs[i], flags);
                 continue;
@@ -781,7 +781,7 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
     if ( ret )
         goto out_remove_host_entry;
 
-    pirq->lpi_vcpu_id = vcpu->vcpu_id;
+    pirq->vcpu_id = vcpu->vcpu_id;
     /*
      * Mark this LPI as new, so any older (now unmapped) LPI in any LR
      * can be easily recognised as such.
@@ -852,8 +852,7 @@ static int its_handle_movi(struct virt_its *its, uint64_t *cmdptr)
      */
     spin_lock_irqsave(&ovcpu->arch.vgic.lock, flags);
 
-    /* Update our cached vcpu_id in the pending_irq. */
-    p->lpi_vcpu_id = nvcpu->vcpu_id;
+    p->vcpu_id = nvcpu->vcpu_id;
 
     spin_unlock_irqrestore(&ovcpu->arch.vgic.lock, flags);
 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 6722924..1ba0010 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -63,15 +63,15 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
 
 void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
 {
-    /* The lpi_vcpu_id field must be big enough to hold a VCPU ID. */
-    BUILD_BUG_ON(BIT(sizeof(p->lpi_vcpu_id) * 8) < MAX_VIRT_CPUS);
+    /* The vcpu_id field must be big enough to hold a VCPU ID. */
+    BUILD_BUG_ON(BIT(sizeof(p->vcpu_id) * 8) < MAX_VIRT_CPUS);
 
     memset(p, 0, sizeof(*p));
     INIT_LIST_HEAD(&p->inflight);
     INIT_LIST_HEAD(&p->lr_queue);
     spin_lock_init(&p->lock);
     p->irq = virq;
-    p->lpi_vcpu_id = INVALID_VCPU_ID;
+    p->vcpu_id = INVALID_VCPU_ID;
 }
 
 static void vgic_rank_init(struct vgic_irq_rank *rank, uint8_t index,
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 7c6067d..ffd9a95 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -81,7 +81,7 @@ struct pending_irq
     uint8_t lr;
     uint8_t cur_priority;       /* Holds the priority of an injected IRQ. */
     uint8_t priority;           /* Holds the priority for any new IRQ. */
-    uint8_t lpi_vcpu_id;        /* The VCPU for an LPI. */
+    uint8_t vcpu_id;            /* The VCPU target for any new IRQ. */
     /* inflight is used to append instances of pending_irq to
      * vgic.inflight_irqs */
     struct list_head inflight;
-- 
2.9.0




* [RFC PATCH v2 17/22] ARM: vGIC: introduce vgic_lock_vcpu_irq()
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (15 preceding siblings ...)
  2017-07-21 20:00 ` [RFC PATCH v2 16/22] ARM: vITS: rename lpi_vcpu_id to vcpu_id Andre Przywara
@ 2017-07-21 20:00 ` Andre Przywara
  2017-08-16 11:23   ` Julien Grall
  2017-07-21 20:00 ` [RFC PATCH v2 18/22] ARM: vGIC: move virtual IRQ target VCPU from rank to pending_irq Andre Przywara
                   ` (4 subsequent siblings)
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 20:00 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Since a VCPU can own multiple IRQs, the natural locking order is to take
a VCPU lock first, then the individual per-IRQ locks.
However there are situations where the target VCPU is not known without
looking into the struct pending_irq first, which usually means we need to
take the IRQ lock first.
To solve this problem, we provide a function called vgic_lock_vcpu_irq(),
which takes a locked struct pending_irq and returns with *both* the
VCPU and the IRQ lock held.
This is done by looking up the target VCPU, briefly dropping the
IRQ lock, taking the VCPU lock, then grabbing the per-IRQ lock again.
Before returning, we check whether the target VCPU changed during the
brief period in which we didn't hold the IRQ lock, and retry in this
(very rare) case.
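The retry dance described above can be sketched in plain C with pthread mutexes. The `owner`/`obj` types below are hypothetical stand-ins for the VCPU and pending_irq structures, not Xen code; the point is the lock-order inversion (caller holds the fine-grained lock, but the ordering demands the coarse lock first) and the revalidation after reacquiring:

```c
#include <pthread.h>
#include <stddef.h>

/* "owner" stands in for a VCPU, "obj" for a pending_irq whose
 * current owner is recorded inside it. Lock order: owner first,
 * then obj. */
struct owner { pthread_mutex_t lock; };
struct obj   { pthread_mutex_t lock; struct owner *owner; };

/* Called with o->lock held; returns with both o->lock and the
 * owner's lock held, like vgic_lock_vcpu_irq() in this patch. */
static struct owner *lock_owner_and_obj(struct obj *o)
{
    struct owner *target = o->owner;   /* read under o->lock */

    pthread_mutex_unlock(&o->lock);

    for (;;)
    {
        struct owner *cur;

        pthread_mutex_lock(&target->lock);  /* coarse lock first */
        pthread_mutex_lock(&o->lock);       /* then fine-grained */

        cur = o->owner;                 /* re-read under o->lock */
        if (cur == target)
            return target;              /* nothing changed: done */

        /* The object migrated while we held neither lock: retry
         * against the new owner. */
        pthread_mutex_unlock(&o->lock);
        pthread_mutex_unlock(&target->lock);
        target = cur;
    }
}
```

The loop terminates quickly in practice because migrations in the unlocked window are rare, as the commit message notes.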

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 1ba0010..0e6dfe5 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -224,6 +224,48 @@ int vcpu_vgic_free(struct vcpu *v)
     return 0;
 }
 
+/**
+ * vgic_lock_vcpu_irq(): lock both the pending_irq and the corresponding VCPU
+ *
+ * @v: the VCPU (for private IRQs)
+ * @p: pointer to the locked struct pending_irq
+ * @flags: pointer to the IRQ flags used when locking the VCPU
+ *
+ * The function takes a locked IRQ and returns with both the IRQ and the
+ * corresponding VCPU locked. This is non-trivial due to the locking order
+ * being actually the other way round (VCPU first, then IRQ).
+ *
+ * Returns: pointer to the VCPU this IRQ is targeting.
+ */
+struct vcpu *vgic_lock_vcpu_irq(struct vcpu *v, struct pending_irq *p,
+                                unsigned long *flags)
+{
+    struct vcpu *target_vcpu;
+
+    ASSERT(spin_is_locked(&p->lock));
+
+    target_vcpu = vgic_get_target_vcpu(v, p);
+    spin_unlock(&p->lock);
+
+    do
+    {
+        struct vcpu *current_vcpu;
+
+        spin_lock_irqsave(&target_vcpu->arch.vgic.lock, *flags);
+        spin_lock(&p->lock);
+
+        current_vcpu = vgic_get_target_vcpu(v, p);
+
+        if ( target_vcpu->vcpu_id == current_vcpu->vcpu_id )
+            return target_vcpu;
+
+        spin_unlock(&p->lock);
+        spin_unlock_irqrestore(&target_vcpu->arch.vgic.lock, *flags);
+
+        target_vcpu = current_vcpu;
+    } while (1);
+}
+
 struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p)
 {
     struct vgic_irq_rank *rank = vgic_rank_irq(v, p->irq);
-- 
2.9.0




* [RFC PATCH v2 18/22] ARM: vGIC: move virtual IRQ target VCPU from rank to pending_irq
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (16 preceding siblings ...)
  2017-07-21 20:00 ` [RFC PATCH v2 17/22] ARM: vGIC: introduce vgic_lock_vcpu_irq() Andre Przywara
@ 2017-07-21 20:00 ` Andre Przywara
  2017-08-16 13:40   ` Julien Grall
  2017-07-21 20:00 ` [RFC PATCH v2 19/22] ARM: vGIC: rework vgic_get_target_vcpu to take a domain instead of vcpu Andre Przywara
                   ` (3 subsequent siblings)
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 20:00 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

The VCPU a shared virtual IRQ is targeting is currently stored in the
irq_rank structure.
For LPIs we already store the target VCPU in struct pending_irq, so
move SPIs over as well.
The ITS code, which was already using this field, so far relied on the
VCPU lock to protect the pending_irq; switch it over to the new per-IRQ lock.
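With the target stored per IRQ, building an ITARGETSR value becomes a walk over four pending_irqs, one byte per IRQ, each byte a CPU mask with one bit per target CPU. A minimal sketch of just that packing step, as done by the reworked vgic_fetch_itargetsr (the array-of-IDs input is illustrative, standing in for the per-pending_irq vcpu_id reads):

```c
#include <stdint.h>

#define NR_TARGETS_PER_ITARGETSR 4
#define NR_BITS_PER_TARGET       8

/* Assemble one 32-bit ITARGETSR from the vcpu_id of the four IRQs it
 * covers: byte i holds a one-hot CPU mask (1 << vcpu_id) for IRQ i. */
static uint32_t pack_itargetsr(const uint8_t vcpu_id[NR_TARGETS_PER_ITARGETSR])
{
    uint32_t reg = 0;

    for (unsigned int i = 0; i < NR_TARGETS_PER_ITARGETSR; i++)
        reg |= (uint32_t)(1u << vcpu_id[i]) << (i * NR_BITS_PER_TARGET);

    return reg;
}
```

In the patch each byte is read under the corresponding per-IRQ lock rather than under one rank lock, which is the behavioural change this commit is about.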

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v2.c     | 56 +++++++++++++++--------------------
 xen/arch/arm/vgic-v3-its.c |  9 +++---
 xen/arch/arm/vgic-v3.c     | 69 ++++++++++++++++++++-----------------------
 xen/arch/arm/vgic.c        | 73 +++++++++++++++++++++-------------------------
 xen/include/asm-arm/vgic.h | 13 +++------
 5 files changed, 96 insertions(+), 124 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 0c8a598..c7ed3ce 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -66,19 +66,22 @@ void vgic_v2_setup_hw(paddr_t dbase, paddr_t cbase, paddr_t csize,
  *
  * Note the byte offset will be aligned to an ITARGETSR<n> boundary.
  */
-static uint32_t vgic_fetch_itargetsr(struct vgic_irq_rank *rank,
-                                     unsigned int offset)
+static uint32_t vgic_fetch_itargetsr(struct vcpu *v, unsigned int offset)
 {
     uint32_t reg = 0;
     unsigned int i;
+    unsigned long flags;
 
-    ASSERT(spin_is_locked(&rank->lock));
-
-    offset &= INTERRUPT_RANK_MASK;
     offset &= ~(NR_TARGETS_PER_ITARGETSR - 1);
 
     for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++ )
-        reg |= (1 << read_atomic(&rank->vcpu[offset])) << (i * NR_BITS_PER_TARGET);
+    {
+        struct pending_irq *p = irq_to_pending(v, offset);
+
+        vgic_irq_lock(p, flags);
+        reg |= (1 << p->vcpu_id) << (i * NR_BITS_PER_TARGET);
+        vgic_irq_unlock(p, flags);
+    }
 
     return reg;
 }
@@ -89,32 +92,29 @@ static uint32_t vgic_fetch_itargetsr(struct vgic_irq_rank *rank,
  *
  * Note the byte offset will be aligned to an ITARGETSR<n> boundary.
  */
-static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
+static void vgic_store_itargetsr(struct domain *d,
                                  unsigned int offset, uint32_t itargetsr)
 {
     unsigned int i;
     unsigned int virq;
 
-    ASSERT(spin_is_locked(&rank->lock));
-
     /*
      * The ITARGETSR0-7, used for SGIs/PPIs, are implemented RO in the
      * emulation and should never call this function.
      *
-     * They all live in the first rank.
+     * They all live in the first four bytes of ITARGETSR.
      */
-    BUILD_BUG_ON(NR_INTERRUPT_PER_RANK != 32);
-    ASSERT(rank->index >= 1);
+    ASSERT(offset >= 4);
 
-    offset &= INTERRUPT_RANK_MASK;
+    virq = offset;
     offset &= ~(NR_TARGETS_PER_ITARGETSR - 1);
 
-    virq = rank->index * NR_INTERRUPT_PER_RANK + offset;
-
     for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++, virq++ )
     {
         unsigned int new_target, old_target;
+        unsigned long flags;
         uint8_t new_mask;
+        struct pending_irq *p = spi_to_pending(d, virq);
 
         /*
          * Don't need to mask as we rely on new_mask to fit for only one
@@ -151,16 +151,14 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
         /* The vCPU ID always starts from 0 */
         new_target--;
 
-        old_target = read_atomic(&rank->vcpu[offset]);
+        vgic_irq_lock(p, flags);
+        old_target = p->vcpu_id;
 
         /* Only migrate the vIRQ if the target vCPU has changed */
         if ( new_target != old_target )
-        {
-            if ( vgic_migrate_irq(d->vcpu[old_target],
-                             d->vcpu[new_target],
-                             virq) )
-                write_atomic(&rank->vcpu[offset], new_target);
-        }
+            vgic_migrate_irq(p, &flags, d->vcpu[new_target]);
+        else
+            vgic_irq_unlock(p, flags);
     }
 }
 
@@ -264,11 +262,7 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
         uint32_t itargetsr;
 
         if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 8, gicd_reg - GICD_ITARGETSR, DABT_WORD);
-        if ( rank == NULL) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        itargetsr = vgic_fetch_itargetsr(rank, gicd_reg - GICD_ITARGETSR);
-        vgic_unlock_rank(v, rank, flags);
+        itargetsr = vgic_fetch_itargetsr(v, gicd_reg - GICD_ITARGETSR);
         *r = vreg_reg32_extract(itargetsr, info);
 
         return 1;
@@ -498,14 +492,10 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
         uint32_t itargetsr;
 
         if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 8, gicd_reg - GICD_ITARGETSR, DABT_WORD);
-        if ( rank == NULL) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        itargetsr = vgic_fetch_itargetsr(rank, gicd_reg - GICD_ITARGETSR);
+        itargetsr = vgic_fetch_itargetsr(v, gicd_reg - GICD_ITARGETSR);
         vreg_reg32_update(&itargetsr, r, info);
-        vgic_store_itargetsr(v->domain, rank, gicd_reg - GICD_ITARGETSR,
+        vgic_store_itargetsr(v->domain, gicd_reg - GICD_ITARGETSR,
                              itargetsr);
-        vgic_unlock_rank(v, rank, flags);
         return 1;
     }
 
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 682ce10..1020ebe 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -628,7 +628,7 @@ static int its_discard_event(struct virt_its *its,
 
     /* Cleanup the pending_irq and disconnect it from the LPI. */
     gic_remove_irq_from_queues(vcpu, p);
-    vgic_init_pending_irq(p, INVALID_LPI);
+    vgic_init_pending_irq(p, INVALID_LPI, INVALID_VCPU_ID);
 
     spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
 
@@ -768,7 +768,7 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
     if ( !pirq )
         goto out_remove_mapping;
 
-    vgic_init_pending_irq(pirq, intid);
+    vgic_init_pending_irq(pirq, intid, vcpu->vcpu_id);
 
     /*
      * Now read the guest's property table to initialize our cached state.
@@ -781,7 +781,6 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
     if ( ret )
         goto out_remove_host_entry;
 
-    pirq->vcpu_id = vcpu->vcpu_id;
     /*
      * Mark this LPI as new, so any older (now unmapped) LPI in any LR
      * can be easily recognised as such.
@@ -852,9 +851,9 @@ static int its_handle_movi(struct virt_its *its, uint64_t *cmdptr)
      */
     spin_lock_irqsave(&ovcpu->arch.vgic.lock, flags);
 
+    vgic_irq_lock(p, flags);
     p->vcpu_id = nvcpu->vcpu_id;
-
-    spin_unlock_irqrestore(&ovcpu->arch.vgic.lock, flags);
+    vgic_irq_unlock(p, flags);
 
     /*
      * TODO: Investigate if and how to migrate an already pending LPI. This
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index e9e36eb..e9d46af 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -100,18 +100,21 @@ static struct vcpu *vgic_v3_irouter_to_vcpu(struct domain *d, uint64_t irouter)
  *
  * Note the byte offset will be aligned to an IROUTER<n> boundary.
  */
-static uint64_t vgic_fetch_irouter(struct vgic_irq_rank *rank,
-                                   unsigned int offset)
+static uint64_t vgic_fetch_irouter(struct vcpu *v, unsigned int offset)
 {
-    ASSERT(spin_is_locked(&rank->lock));
+    struct pending_irq *p;
+    unsigned long flags;
+    uint64_t aff;
 
     /* There is exactly 1 vIRQ per IROUTER */
     offset /= NR_BYTES_PER_IROUTER;
 
-    /* Get the index in the rank */
-    offset &= INTERRUPT_RANK_MASK;
+    p = irq_to_pending(v, offset);
+    vgic_irq_lock(p, flags);
+    aff = vcpuid_to_vaffinity(p->vcpu_id);
+    vgic_irq_unlock(p, flags);
 
-    return vcpuid_to_vaffinity(read_atomic(&rank->vcpu[offset]));
+    return aff;
 }
 
 /*
@@ -120,10 +123,12 @@ static uint64_t vgic_fetch_irouter(struct vgic_irq_rank *rank,
  *
  * Note the offset will be aligned to the appropriate boundary.
  */
-static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
+static void vgic_store_irouter(struct domain *d,
                                unsigned int offset, uint64_t irouter)
 {
-    struct vcpu *new_vcpu, *old_vcpu;
+    struct vcpu *new_vcpu;
+    struct pending_irq *p;
+    unsigned long flags;
     unsigned int virq;
 
     /* There is 1 vIRQ per IROUTER */
@@ -135,11 +140,10 @@ static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
      */
     ASSERT(virq >= 32);
 
-    /* Get the index in the rank */
-    offset &= virq & INTERRUPT_RANK_MASK;
+    p = spi_to_pending(d, virq);
+    vgic_irq_lock(p, flags);
 
     new_vcpu = vgic_v3_irouter_to_vcpu(d, irouter);
-    old_vcpu = d->vcpu[read_atomic(&rank->vcpu[offset])];
 
     /*
      * From the spec (see 8.9.13 in IHI 0069A), any write with an
@@ -149,16 +153,13 @@ static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
      * invalid vCPU. So for now, just ignore the write.
      *
      * TODO: Respect the spec
+     *
+     * Only migrate the IRQ if the target vCPU has changed
      */
-    if ( !new_vcpu )
-        return;
-
-    /* Only migrate the IRQ if the target vCPU has changed */
-    if ( new_vcpu != old_vcpu )
-    {
-        if ( vgic_migrate_irq(old_vcpu, new_vcpu, virq) )
-            write_atomic(&rank->vcpu[offset], new_vcpu->vcpu_id);
-    }
+    if ( new_vcpu && new_vcpu->vcpu_id != p->vcpu_id )
+        vgic_migrate_irq(p, &flags, new_vcpu);
+    else
+        vgic_irq_unlock(p, flags);
 }
 
 static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
@@ -1061,8 +1062,6 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
                                    register_t *r, void *priv)
 {
     struct hsr_dabt dabt = info->dabt;
-    struct vgic_irq_rank *rank;
-    unsigned long flags;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
 
     perfc_incr(vgicd_reads);
@@ -1190,15 +1189,12 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
     case VRANGE64(GICD_IROUTER32, GICD_IROUTER1019):
     {
         uint64_t irouter;
+        unsigned int irq;
 
         if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
-        rank = vgic_rank_offset(v, 64, gicd_reg - GICD_IROUTER,
-                                DABT_DOUBLE_WORD);
-        if ( rank == NULL ) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        irouter = vgic_fetch_irouter(rank, gicd_reg - GICD_IROUTER);
-        vgic_unlock_rank(v, rank, flags);
-
+        irq = (gicd_reg - GICD_IROUTER) / 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
+        irouter = vgic_fetch_irouter(v, gicd_reg - GICD_IROUTER);
         *r = vreg_reg64_extract(irouter, info);
 
         return 1;
@@ -1264,8 +1260,6 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
                                     register_t r, void *priv)
 {
     struct hsr_dabt dabt = info->dabt;
-    struct vgic_irq_rank *rank;
-    unsigned long flags;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
 
     perfc_incr(vgicd_writes);
@@ -1379,16 +1373,15 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
     case VRANGE64(GICD_IROUTER32, GICD_IROUTER1019):
     {
         uint64_t irouter;
+        unsigned int irq;
 
         if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
-        rank = vgic_rank_offset(v, 64, gicd_reg - GICD_IROUTER,
-                                DABT_DOUBLE_WORD);
-        if ( rank == NULL ) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        irouter = vgic_fetch_irouter(rank, gicd_reg - GICD_IROUTER);
+        irq = (gicd_reg - GICD_IROUTER) / 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
+
+        irouter = vgic_fetch_irouter(v, gicd_reg - GICD_IROUTER);
         vreg_reg64_update(&irouter, r, info);
-        vgic_store_irouter(v->domain, rank, gicd_reg - GICD_IROUTER, irouter);
-        vgic_unlock_rank(v, rank, flags);
+        vgic_store_irouter(v->domain, gicd_reg - GICD_IROUTER, irouter);
         return 1;
     }
 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 0e6dfe5..f6532ee 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -61,7 +61,8 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
     return vgic_get_rank(v, rank);
 }
 
-void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
+void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq,
+                           unsigned int vcpu_id)
 {
     /* The vcpu_id field must be big enough to hold a VCPU ID. */
     BUILD_BUG_ON(BIT(sizeof(p->vcpu_id) * 8) < MAX_VIRT_CPUS);
@@ -71,27 +72,15 @@ void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
     INIT_LIST_HEAD(&p->lr_queue);
     spin_lock_init(&p->lock);
     p->irq = virq;
-    p->vcpu_id = INVALID_VCPU_ID;
+    p->vcpu_id = vcpu_id;
 }
 
 static void vgic_rank_init(struct vgic_irq_rank *rank, uint8_t index,
                            unsigned int vcpu)
 {
-    unsigned int i;
-
-    /*
-     * Make sure that the type chosen to store the target is able to
-     * store an VCPU ID between 0 and the maximum of virtual CPUs
-     * supported.
-     */
-    BUILD_BUG_ON((1 << (sizeof(rank->vcpu[0]) * 8)) < MAX_VIRT_CPUS);
-
     spin_lock_init(&rank->lock);
 
     rank->index = index;
-
-    for ( i = 0; i < NR_INTERRUPT_PER_RANK; i++ )
-        write_atomic(&rank->vcpu[i], vcpu);
 }
 
 int domain_vgic_register(struct domain *d, int *mmio_count)
@@ -142,9 +131,9 @@ int domain_vgic_init(struct domain *d, unsigned int nr_spis)
     if ( d->arch.vgic.pending_irqs == NULL )
         return -ENOMEM;
 
+    /* SPIs are routed to VCPU0 by default */
     for (i=0; i<d->arch.vgic.nr_spis; i++)
-        vgic_init_pending_irq(&d->arch.vgic.pending_irqs[i], i + 32);
-
+        vgic_init_pending_irq(&d->arch.vgic.pending_irqs[i], i + 32, 0);
     /* SPIs are routed to VCPU0 by default */
     for ( i = 0; i < DOMAIN_NR_RANKS(d); i++ )
         vgic_rank_init(&d->arch.vgic.shared_irqs[i], i + 1, 0);
@@ -208,8 +197,9 @@ int vcpu_vgic_init(struct vcpu *v)
     v->domain->arch.vgic.handler->vcpu_init(v);
 
     memset(&v->arch.vgic.pending_irqs, 0, sizeof(v->arch.vgic.pending_irqs));
+    /* SGIs/PPIs are always routed to this VCPU */
     for (i = 0; i < 32; i++)
-        vgic_init_pending_irq(&v->arch.vgic.pending_irqs[i], i);
+        vgic_init_pending_irq(&v->arch.vgic.pending_irqs[i], i, v->vcpu_id);
 
     INIT_LIST_HEAD(&v->arch.vgic.inflight_irqs);
     INIT_LIST_HEAD(&v->arch.vgic.lr_pending);
@@ -268,10 +258,7 @@ struct vcpu *vgic_lock_vcpu_irq(struct vcpu *v, struct pending_irq *p,
 
 struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p)
 {
-    struct vgic_irq_rank *rank = vgic_rank_irq(v, p->irq);
-    int target = read_atomic(&rank->vcpu[p->irq & INTERRUPT_RANK_MASK]);
-
-    return v->domain->vcpu[target];
+    return v->domain->vcpu[p->vcpu_id];
 }
 
 #define MAX_IRQS_PER_IPRIORITYR 4
@@ -360,57 +347,65 @@ void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
     local_irq_restore(flags);
 }
 
-bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
+bool vgic_migrate_irq(struct pending_irq *p, unsigned long *flags,
+                      struct vcpu *new)
 {
-    unsigned long flags;
-    struct pending_irq *p;
+    unsigned long vcpu_flags;
+    struct vcpu *old;
+    bool ret = false;
 
     /* This will never be called for an LPI, as we don't migrate them. */
-    ASSERT(!is_lpi(irq));
+    ASSERT(!is_lpi(p->irq));
 
-    spin_lock_irqsave(&old->arch.vgic.lock, flags);
-
-    p = irq_to_pending(old, irq);
+    ASSERT(spin_is_locked(&p->lock));
 
     /* nothing to do for virtual interrupts */
     if ( p->desc == NULL )
     {
-        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
-        return true;
+        ret = true;
+        goto out_unlock;
     }
 
     /* migration already in progress, no need to do anything */
     if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
     {
-        gprintk(XENLOG_WARNING, "irq %u migration failed: requested while in progress\n", irq);
-        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
-        return false;
+        gprintk(XENLOG_WARNING, "irq %u migration failed: requested while in progress\n", p->irq);
+        goto out_unlock;
     }
 
+    p->vcpu_id = new->vcpu_id;
+
     perfc_incr(vgic_irq_migrates);
 
     if ( list_empty(&p->inflight) )
     {
         irq_set_affinity(p->desc, cpumask_of(new->processor));
-        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
-        return true;
+        goto out_unlock;
     }
+
     /* If the IRQ is still lr_pending, re-inject it to the new vcpu */
     if ( !list_empty(&p->lr_queue) )
     {
+        old = vgic_lock_vcpu_irq(new, p, &vcpu_flags);
         gic_remove_irq_from_queues(old, p);
         irq_set_affinity(p->desc, cpumask_of(new->processor));
-        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
-        vgic_vcpu_inject_irq(new, irq);
+
+        vgic_irq_unlock(p, *flags);
+        spin_unlock_irqrestore(&old->arch.vgic.lock, vcpu_flags);
+
+        vgic_vcpu_inject_irq(new, p->irq);
         return true;
     }
+
     /* if the IRQ is in a GICH_LR register, set GIC_IRQ_GUEST_MIGRATING
      * and wait for the EOI */
     if ( !list_empty(&p->inflight) )
         set_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
 
-    spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
-    return true;
+out_unlock:
+    vgic_irq_unlock(p, *flags);
+
+    return ret;
 }
 
 void arch_move_irqs(struct vcpu *v)
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index ffd9a95..4b47a9b 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -112,13 +112,6 @@ struct vgic_irq_rank {
 
     uint32_t ienable;
 
-    /*
-     * It's more convenient to store a target VCPU per vIRQ
-     * than the register ITARGETSR/IROUTER itself.
-     * Use atomic operations to read/write the vcpu fields to avoid
-     * taking the rank lock.
-     */
-    uint8_t vcpu[32];
 };
 
 struct sgi_target {
@@ -217,7 +210,8 @@ extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p);
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
-extern void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq);
+extern void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq,
+                                  unsigned int vcpu_id);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 extern struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq);
 extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
@@ -237,7 +231,8 @@ extern int vcpu_vgic_free(struct vcpu *v);
 extern bool vgic_to_sgi(struct vcpu *v, register_t sgir,
                         enum gic_sgi_mode irqmode, int virq,
                         const struct sgi_target *target);
-extern bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq);
+extern bool vgic_migrate_irq(struct pending_irq *p,
+                             unsigned long *flags, struct vcpu *new);
 
 /* Reserve a specific guest vIRQ */
 extern bool vgic_reserve_virq(struct domain *d, unsigned int virq);
-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [RFC PATCH v2 19/22] ARM: vGIC: rework vgic_get_target_vcpu to take a domain instead of vcpu
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (17 preceding siblings ...)
  2017-07-21 20:00 ` [RFC PATCH v2 18/22] ARM: vGIC: move virtual IRQ target VCPU from rank to pending_irq Andre Przywara
@ 2017-07-21 20:00 ` Andre Przywara
  2017-07-21 20:00 ` [RFC PATCH v2 20/22] ARM: vGIC: move virtual IRQ enable bit from rank to pending_irq Andre Przywara
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 20:00 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

For "historical" reasons we used to pass a vCPU pointer to
vgic_get_target_vcpu(), which was only needed to distinguish private
IRQs. Now that we have the unique pending_irq pointer already, we don't
need the vCPU anymore, just the domain.
So change this function to avoid a rather hackish "d->vcpu[0]" parameter
when looking up SPIs. This also allows our new vgic_lock_vcpu_irq()
function to eventually take a domain parameter (which makes more sense).

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c         |  2 +-
 xen/arch/arm/vgic.c        | 22 +++++++++++-----------
 xen/include/asm-arm/vgic.h |  3 ++-
 3 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 38e998a..300ce6c 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -559,7 +559,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
             smp_wmb();
             if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
             {
-                struct vcpu *v_target = vgic_get_target_vcpu(v, p);
+                struct vcpu *v_target = vgic_get_target_vcpu(v->domain, p);
                 irq_set_affinity(p->desc, cpumask_of(v_target->processor));
                 clear_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
             }
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index f6532ee..a49fcde 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -217,7 +217,7 @@ int vcpu_vgic_free(struct vcpu *v)
 /**
  * vgic_lock_vcpu_irq(): lock both the pending_irq and the corresponding VCPU
  *
- * @v: the VCPU (for private IRQs)
+ * @d: the domain the IRQ belongs to
  * @p: pointer to the locked struct pending_irq
  * @flags: pointer to the IRQ flags used when locking the VCPU
  *
@@ -227,14 +227,14 @@ int vcpu_vgic_free(struct vcpu *v)
  *
  * Returns: pointer to the VCPU this IRQ is targeting.
  */
-struct vcpu *vgic_lock_vcpu_irq(struct vcpu *v, struct pending_irq *p,
+struct vcpu *vgic_lock_vcpu_irq(struct domain *d, struct pending_irq *p,
                                 unsigned long *flags)
 {
     struct vcpu *target_vcpu;
 
     ASSERT(spin_is_locked(&p->lock));
 
-    target_vcpu = vgic_get_target_vcpu(v, p);
+    target_vcpu = vgic_get_target_vcpu(d, p);
     spin_unlock(&p->lock);
 
     do
@@ -244,7 +244,7 @@ struct vcpu *vgic_lock_vcpu_irq(struct vcpu *v, struct pending_irq *p,
         spin_lock_irqsave(&target_vcpu->arch.vgic.lock, *flags);
         spin_lock(&p->lock);
 
-        current_vcpu = vgic_get_target_vcpu(v, p);
+        current_vcpu = vgic_get_target_vcpu(d, p);
 
         if ( target_vcpu->vcpu_id == current_vcpu->vcpu_id )
             return target_vcpu;
@@ -256,9 +256,9 @@ struct vcpu *vgic_lock_vcpu_irq(struct vcpu *v, struct pending_irq *p,
     } while (1);
 }
 
-struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p)
+struct vcpu *vgic_get_target_vcpu(struct domain *d, struct pending_irq *p)
 {
-    return v->domain->vcpu[p->vcpu_id];
+    return d->vcpu[p->vcpu_id];
 }
 
 #define MAX_IRQS_PER_IPRIORITYR 4
@@ -386,7 +386,7 @@ bool vgic_migrate_irq(struct pending_irq *p, unsigned long *flags,
     /* If the IRQ is still lr_pending, re-inject it to the new vcpu */
     if ( !list_empty(&p->lr_queue) )
     {
-        old = vgic_lock_vcpu_irq(new, p, &vcpu_flags);
+        old = vgic_lock_vcpu_irq(new->domain, p, &vcpu_flags);
         gic_remove_irq_from_queues(old, p);
         irq_set_affinity(p->desc, cpumask_of(new->processor));
 
@@ -430,7 +430,7 @@ void arch_move_irqs(struct vcpu *v)
     for ( i = 32; i < vgic_num_irqs(d); i++ )
     {
         p = irq_to_pending(v, i);
-        v_target = vgic_get_target_vcpu(v, p);
+        v_target = vgic_get_target_vcpu(d, p);
 
         if ( v_target == v && !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
             irq_set_affinity(p->desc, cpu_mask);
@@ -453,7 +453,7 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
-        v_target = vgic_get_target_vcpu(v, p);
+        v_target = vgic_get_target_vcpu(v->domain, p);
 
         spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
         clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
@@ -507,7 +507,7 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
-        v_target = vgic_get_target_vcpu(v, p);
+        v_target = vgic_get_target_vcpu(v->domain, p);
         spin_lock_irqsave(&v_target->arch.vgic.lock, vcpu_flags);
         vgic_irq_lock(p, flags);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
@@ -710,7 +710,7 @@ void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq)
     /* the IRQ needs to be an SPI */
     ASSERT(virq >= 32 && virq <= vgic_num_irqs(d));
 
-    v = vgic_get_target_vcpu(d->vcpu[0], p);
+    v = vgic_get_target_vcpu(d, p);
     vgic_vcpu_inject_irq(v, virq);
 }
 
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 4b47a9b..fe4d53d 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -206,7 +206,8 @@ enum gic_sgi_mode;
 extern int domain_vgic_init(struct domain *d, unsigned int nr_spis);
 extern void domain_vgic_free(struct domain *d);
 extern int vcpu_vgic_init(struct vcpu *v);
-extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p);
+extern struct vcpu *vgic_get_target_vcpu(struct domain *d,
+                                         struct pending_irq *p);
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
-- 
2.9.0



* [RFC PATCH v2 20/22] ARM: vGIC: move virtual IRQ enable bit from rank to pending_irq
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (18 preceding siblings ...)
  2017-07-21 20:00 ` [RFC PATCH v2 19/22] ARM: vGIC: rework vgic_get_target_vcpu to take a domain instead of vcpu Andre Przywara
@ 2017-07-21 20:00 ` Andre Przywara
  2017-08-16 14:32   ` Julien Grall
  2017-07-21 20:00 ` [RFC PATCH v2 21/22] ARM: vITS: injecting LPIs: use pending_irq lock Andre Przywara
  2017-07-21 20:00 ` [RFC PATCH v2 22/22] ARM: vGIC: remove remaining irq_rank code Andre Przywara
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 20:00 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

The enabled bits for a group of IRQs are still stored in the irq_rank
structure, although we already have the same information in pending_irq,
in the GIC_IRQ_GUEST_ENABLED bit of the "status" field.
Remove the storage from the irq_rank and add wrappers around the status
bit to cover reading and changing the enabled state of multiple IRQs.
This also marks the removal of the last member of struct vgic_irq_rank.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v2.c     |  41 +++------
 xen/arch/arm/vgic-v3.c     |  41 +++------
 xen/arch/arm/vgic.c        | 201 +++++++++++++++++++++++++++------------------
 xen/include/asm-arm/vgic.h |  10 +--
 4 files changed, 152 insertions(+), 141 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index c7ed3ce..3320642 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -166,9 +166,7 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
                                    register_t *r, void *priv)
 {
     struct hsr_dabt dabt = info->dabt;
-    struct vgic_irq_rank *rank;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
-    unsigned long flags;
     unsigned int irq;
 
     perfc_incr(vgicd_reads);
@@ -222,20 +220,16 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
 
     case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ISENABLER, DABT_WORD);
-        if ( rank == NULL) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        *r = vreg_reg32_extract(rank->ienable, info);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (gicd_reg - GICD_ISENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
+        *r = vgic_fetch_irq_enabled(v, irq);
         return 1;
 
     case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ICENABLER, DABT_WORD);
-        if ( rank == NULL) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        *r = vreg_reg32_extract(rank->ienable, info);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (gicd_reg - GICD_ICENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
+        *r = vgic_fetch_irq_enabled(v, irq);
         return 1;
 
     /* Read the pending status of an IRQ via GICD is not supported */
@@ -386,10 +380,7 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
                                     register_t r, void *priv)
 {
     struct hsr_dabt dabt = info->dabt;
-    struct vgic_irq_rank *rank;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
-    uint32_t tr;
-    unsigned long flags;
     unsigned int irq;
 
     perfc_incr(vgicd_writes);
@@ -426,24 +417,16 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
 
     case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ISENABLER, DABT_WORD);
-        if ( rank == NULL) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        tr = rank->ienable;
-        vreg_reg32_setbits(&rank->ienable, r, info);
-        vgic_enable_irqs(v, (rank->ienable) & (~tr), rank->index);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (gicd_reg - GICD_ISENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
+        vgic_store_irq_enable(v, irq, r);
         return 1;
 
     case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ICENABLER, DABT_WORD);
-        if ( rank == NULL) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        tr = rank->ienable;
-        vreg_reg32_clearbits(&rank->ienable, r, info);
-        vgic_disable_irqs(v, (~rank->ienable) & tr, rank->index);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (gicd_reg - GICD_ICENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
+        vgic_store_irq_disable(v, irq, r);
         return 1;
 
     case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index e9d46af..00cc1e5 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -676,8 +676,6 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
                                             register_t *r)
 {
     struct hsr_dabt dabt = info->dabt;
-    struct vgic_irq_rank *rank;
-    unsigned long flags;
     unsigned int irq;
 
     switch ( reg )
@@ -689,20 +687,16 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
 
     case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, reg - GICD_ISENABLER, DABT_WORD);
-        if ( rank == NULL ) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        *r = vreg_reg32_extract(rank->ienable, info);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (reg - GICD_ISENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
+        *r = vgic_fetch_irq_enabled(v, irq);
         return 1;
 
     case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, reg - GICD_ICENABLER, DABT_WORD);
-        if ( rank == NULL ) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        *r = vreg_reg32_extract(rank->ienable, info);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (reg - GICD_ICENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
+        *r = vgic_fetch_irq_enabled(v, irq);
         return 1;
 
     /* Read the pending status of an IRQ via GICD/GICR is not supported */
@@ -752,9 +746,6 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
                                              register_t r)
 {
     struct hsr_dabt dabt = info->dabt;
-    struct vgic_irq_rank *rank;
-    uint32_t tr;
-    unsigned long flags;
     unsigned int irq;
 
     switch ( reg )
@@ -765,24 +756,16 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
 
     case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, reg - GICD_ISENABLER, DABT_WORD);
-        if ( rank == NULL ) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        tr = rank->ienable;
-        vreg_reg32_setbits(&rank->ienable, r, info);
-        vgic_enable_irqs(v, (rank->ienable) & (~tr), rank->index);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (reg - GICD_ISENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
+        vgic_store_irq_enable(v, irq, r);
         return 1;
 
     case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, reg - GICD_ICENABLER, DABT_WORD);
-        if ( rank == NULL ) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        tr = rank->ienable;
-        vreg_reg32_clearbits(&rank->ienable, r, info);
-        vgic_disable_irqs(v, (~rank->ienable) & tr, rank->index);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (reg - GICD_ICENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
+        vgic_store_irq_disable(v, irq, r);
         return 1;
 
     case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index a49fcde..dd969e2 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -261,6 +261,60 @@ struct vcpu *vgic_get_target_vcpu(struct domain *d, struct pending_irq *p)
     return d->vcpu[p->vcpu_id];
 }
 
+/* Takes a locked pending_irq and disables the interrupt, also unlocking it. */
+static void vgic_disable_irq_unlock(struct domain *d, struct pending_irq *p)
+{
+    struct vcpu *v_target;
+    unsigned long flags;
+    struct irq_desc *desc;
+
+    v_target = vgic_lock_vcpu_irq(d, p, &flags);
+
+    clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+    gic_remove_from_lr_pending(v_target, p);
+    desc = p->desc;
+    spin_unlock(&p->lock);
+    spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
+
+    if ( desc != NULL )
+    {
+        spin_lock_irqsave(&desc->lock, flags);
+        desc->handler->disable(desc);
+        spin_unlock_irqrestore(&desc->lock, flags);
+    }
+}
+
+/* Takes a locked pending_irq and enables the interrupt, also unlocking it. */
+static void vgic_enable_irq_unlock(struct domain *d, struct pending_irq *p)
+{
+    struct vcpu *v_target;
+    unsigned long flags;
+    struct irq_desc *desc;
+    int int_type;
+
+    v_target = vgic_lock_vcpu_irq(d, p, &flags);
+
+    set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+    int_type = test_bit(GIC_IRQ_GUEST_LEVEL, &p->status) ? IRQ_TYPE_LEVEL_HIGH :
+                                                           IRQ_TYPE_EDGE_RISING;
+    if ( !list_empty(&p->inflight) &&
+         !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+        gic_raise_guest_irq(v_target, p->irq, p->cur_priority);
+    desc = p->desc;
+    spin_unlock(&p->lock);
+    spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
+
+    if ( desc != NULL )
+    {
+        spin_lock_irqsave(&desc->lock, flags);
+        irq_set_affinity(desc, cpumask_of(v_target->processor));
+        if ( irq_type_set_by_domain(d) )
+            gic_set_irq_type(desc, int_type);
+        desc->handler->enable(desc);
+        spin_unlock_irqrestore(&desc->lock, flags);
+    }
+}
+
 #define MAX_IRQS_PER_IPRIORITYR 4
 uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
                                  unsigned int first_irq)
@@ -347,6 +401,75 @@ void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
     local_irq_restore(flags);
 }
 
+#define IRQS_PER_ENABLER        32
+/**
+ * vgic_fetch_irq_enabled: assemble the enabled bits for a group of 32 IRQs
+ * @v: the VCPU for private IRQs, any VCPU of a domain for SPIs
+ * @first_irq: the first IRQ to be queried, must be aligned to 32
+ */
+uint32_t vgic_fetch_irq_enabled(struct vcpu *v, unsigned int first_irq)
+{
+    struct pending_irq *pirqs[IRQS_PER_ENABLER];
+    unsigned long flags;
+    uint32_t reg = 0;
+    unsigned int i;
+
+    local_irq_save(flags);
+    vgic_lock_irqs(v, IRQS_PER_ENABLER, first_irq, pirqs);
+
+    for ( i = 0; i < 32; i++ )
+        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &pirqs[i]->status) )
+            reg |= BIT(i);
+
+    vgic_unlock_irqs(pirqs, IRQS_PER_ENABLER);
+    local_irq_restore(flags);
+
+    return reg;
+}
+
+void vgic_store_irq_enable(struct vcpu *v, unsigned int first_irq,
+                           uint32_t value)
+{
+    struct pending_irq *pirqs[IRQS_PER_ENABLER];
+    unsigned long flags;
+    int i;
+
+    local_irq_save(flags);
+    vgic_lock_irqs(v, IRQS_PER_ENABLER, first_irq, pirqs);
+
+    /* This goes backwards, as it unlocks the IRQs during the process */
+    for ( i = IRQS_PER_ENABLER - 1; i >= 0; i-- )
+    {
+        if ( !test_bit(GIC_IRQ_GUEST_ENABLED, &pirqs[i]->status) &&
+             (value & BIT(i)) )
+            vgic_enable_irq_unlock(v->domain, pirqs[i]);
+        else
+            spin_unlock(&pirqs[i]->lock);
+    }
+    local_irq_restore(flags);
+}
+
+void vgic_store_irq_disable(struct vcpu *v, unsigned int first_irq,
+                            uint32_t value)
+{
+    struct pending_irq *pirqs[IRQS_PER_ENABLER];
+    unsigned long flags;
+    int i;
+
+    local_irq_save(flags);
+    vgic_lock_irqs(v, IRQS_PER_ENABLER, first_irq, pirqs);
+
+    /* This goes backwards, as it unlocks the IRQs during the process */
+    for ( i = IRQS_PER_ENABLER - 1; i >= 0; i-- )
+    {
+        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &pirqs[i]->status) &&
+             (value & BIT(i)) )
+            vgic_disable_irq_unlock(v->domain, pirqs[i]);
+        else
+            spin_unlock(&pirqs[i]->lock);
+    }
+    local_irq_restore(flags);
+}
+
 bool vgic_migrate_irq(struct pending_irq *p, unsigned long *flags,
                       struct vcpu *new)
 {
@@ -437,40 +560,6 @@ void arch_move_irqs(struct vcpu *v)
     }
 }
 
-void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
-{
-    const unsigned long mask = r;
-    struct pending_irq *p;
-    struct irq_desc *desc;
-    unsigned int irq;
-    unsigned long flags;
-    int i = 0;
-    struct vcpu *v_target;
-
-    /* LPIs will never be disabled via this function. */
-    ASSERT(!is_lpi(32 * n + 31));
-
-    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
-        irq = i + (32 * n);
-        p = irq_to_pending(v, irq);
-        v_target = vgic_get_target_vcpu(v->domain, p);
-
-        spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
-        clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        gic_remove_from_lr_pending(v_target, p);
-        desc = p->desc;
-        spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
-
-        if ( desc != NULL )
-        {
-            spin_lock_irqsave(&desc->lock, flags);
-            desc->handler->disable(desc);
-            spin_unlock_irqrestore(&desc->lock, flags);
-        }
-        i++;
-    }
-}
-
 void vgic_lock_irqs(struct vcpu *v, unsigned int nrirqs,
                     unsigned int first_irq, struct pending_irq **pirqs)
 {
@@ -491,50 +580,6 @@ void vgic_unlock_irqs(struct pending_irq **pirqs, unsigned int nrirqs)
         spin_unlock(&pirqs[i]->lock);
 }
 
-void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
-{
-    const unsigned long mask = r;
-    struct pending_irq *p;
-    unsigned int irq, int_type;
-    unsigned long flags, vcpu_flags;
-    int i = 0;
-    struct vcpu *v_target;
-    struct domain *d = v->domain;
-
-    /* LPIs will never be enabled via this function. */
-    ASSERT(!is_lpi(32 * n + 31));
-
-    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
-        irq = i + (32 * n);
-        p = irq_to_pending(v, irq);
-        v_target = vgic_get_target_vcpu(v->domain, p);
-        spin_lock_irqsave(&v_target->arch.vgic.lock, vcpu_flags);
-        vgic_irq_lock(p, flags);
-        set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        int_type = test_bit(GIC_IRQ_GUEST_LEVEL, &p->status) ?
-                            IRQ_TYPE_LEVEL_HIGH : IRQ_TYPE_EDGE_RISING;
-        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-            gic_raise_guest_irq(v_target, irq, p->cur_priority);
-        vgic_irq_unlock(p, flags);
-        spin_unlock_irqrestore(&v_target->arch.vgic.lock, vcpu_flags);
-        if ( p->desc != NULL )
-        {
-            spin_lock_irqsave(&p->desc->lock, flags);
-            irq_set_affinity(p->desc, cpumask_of(v_target->processor));
-            /*
-             * The irq cannot be a PPI, we only support delivery of SPIs
-             * to guests.
-             */
-            ASSERT(irq >= 32);
-            if ( irq_type_set_by_domain(d) )
-                gic_set_irq_type(p->desc, int_type);
-            p->desc->handler->enable(p->desc);
-            spin_unlock_irqrestore(&p->desc->lock, flags);
-        }
-        i++;
-    }
-}
-
 bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
                  int virq, const struct sgi_target *target)
 {
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index fe4d53d..233ff1f 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -109,9 +109,6 @@ struct vgic_irq_rank {
     spinlock_t lock; /* Covers access to all other members of this struct */
 
     uint8_t index;
-
-    uint32_t ienable;
-
 };
 
 struct sgi_target {
@@ -187,6 +184,11 @@ void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
 uint32_t vgic_fetch_irq_config(struct vcpu *v, unsigned int first_irq);
 void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
                            uint32_t reg);
+uint32_t vgic_fetch_irq_enabled(struct vcpu *v, unsigned int first_irq);
+void vgic_store_irq_enable(struct vcpu *v, unsigned int first_irq,
+                           uint32_t value);
+void vgic_store_irq_disable(struct vcpu *v, unsigned int first_irq,
+                            uint32_t value);
 
 enum gic_sgi_mode;
 
@@ -218,8 +220,6 @@ extern struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq);
 extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
 extern struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq);
 extern bool vgic_emulate(struct cpu_user_regs *regs, union hsr hsr);
-extern void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n);
-extern void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n);
 extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
 int vgic_v2_init(struct domain *d, int *mmio_count);
 int vgic_v3_init(struct domain *d, int *mmio_count);
-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [RFC PATCH v2 21/22] ARM: vITS: injecting LPIs: use pending_irq lock
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (19 preceding siblings ...)
  2017-07-21 20:00 ` [RFC PATCH v2 20/22] ARM: vGIC: move virtual IRQ enable bit from rank to pending_irq Andre Przywara
@ 2017-07-21 20:00 ` Andre Przywara
  2017-08-16 14:38   ` Julien Grall
  2017-07-21 20:00 ` [RFC PATCH v2 22/22] ARM: vGIC: remove remaining irq_rank code Andre Przywara
  21 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 20:00 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Instead of using an atomic access and hoping for the best, use the
new pending_irq lock to make sure we read a consistent value for the
target VCPU ID.
That still doesn't solve the problem mentioned in the comment, but
paves the way for future improvements.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic-v3-lpi.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index 2306b58..9db26ed 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -140,20 +140,22 @@ void vgic_vcpu_inject_lpi(struct domain *d, unsigned int virq)
 {
     /*
      * TODO: this assumes that the struct pending_irq stays valid all of
-     * the time. We cannot properly protect this with the current locking
-     * scheme, but the future per-IRQ lock will solve this problem.
+     * the time. We cannot properly protect this with the current code,
+     * but a future refcounting will solve this problem.
      */
     struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
+    unsigned long flags;
     unsigned int vcpu_id;
 
     if ( !p )
         return;
 
-    vcpu_id = ACCESS_ONCE(p->vcpu_id);
-    if ( vcpu_id >= d->max_vcpus )
-          return;
+    vgic_irq_lock(p, flags);
+    vcpu_id = p->vcpu_id;
+    vgic_irq_unlock(p, flags);
 
-    vgic_vcpu_inject_irq(d->vcpu[vcpu_id], virq);
+    if ( vcpu_id < d->max_vcpus )
+        vgic_vcpu_inject_irq(d->vcpu[vcpu_id], virq);
 }
 
 /*
-- 
2.9.0




* [RFC PATCH v2 22/22] ARM: vGIC: remove remaining irq_rank code
  2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
                   ` (20 preceding siblings ...)
  2017-07-21 20:00 ` [RFC PATCH v2 21/22] ARM: vITS: injecting LPIs: use pending_irq lock Andre Przywara
@ 2017-07-21 20:00 ` Andre Przywara
  21 siblings, 0 replies; 45+ messages in thread
From: Andre Przywara @ 2017-07-21 20:00 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Now that we no longer need the struct vgic_irq_rank, we can remove the
definition and all the helper functions.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic.c          | 54 --------------------------------------------
 xen/include/asm-arm/domain.h |  6 +----
 xen/include/asm-arm/vgic.h   | 48 ---------------------------------------
 3 files changed, 1 insertion(+), 107 deletions(-)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index dd969e2..8ce3ce5 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -32,35 +32,6 @@
 #include <asm/gic.h>
 #include <asm/vgic.h>
 
-static inline struct vgic_irq_rank *vgic_get_rank(struct vcpu *v, int rank)
-{
-    if ( rank == 0 )
-        return v->arch.vgic.private_irqs;
-    else if ( rank <= DOMAIN_NR_RANKS(v->domain) )
-        return &v->domain->arch.vgic.shared_irqs[rank - 1];
-    else
-        return NULL;
-}
-
-/*
- * Returns rank corresponding to a GICD_<FOO><n> register for
- * GICD_<FOO> with <b>-bits-per-interrupt.
- */
-struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n,
-                                              int s)
-{
-    int rank = REG_RANK_NR(b, (n >> s));
-
-    return vgic_get_rank(v, rank);
-}
-
-struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
-{
-    int rank = irq/32;
-
-    return vgic_get_rank(v, rank);
-}
-
 void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq,
                            unsigned int vcpu_id)
 {
@@ -75,14 +46,6 @@ void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq,
     p->vcpu_id = vcpu_id;
 }
 
-static void vgic_rank_init(struct vgic_irq_rank *rank, uint8_t index,
-                           unsigned int vcpu)
-{
-    spin_lock_init(&rank->lock);
-
-    rank->index = index;
-}
-
 int domain_vgic_register(struct domain *d, int *mmio_count)
 {
     switch ( d->arch.vgic.version )
@@ -121,11 +84,6 @@ int domain_vgic_init(struct domain *d, unsigned int nr_spis)
 
     spin_lock_init(&d->arch.vgic.lock);
 
-    d->arch.vgic.shared_irqs =
-        xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
-    if ( d->arch.vgic.shared_irqs == NULL )
-        return -ENOMEM;
-
     d->arch.vgic.pending_irqs =
         xzalloc_array(struct pending_irq, d->arch.vgic.nr_spis);
     if ( d->arch.vgic.pending_irqs == NULL )
@@ -134,9 +92,6 @@ int domain_vgic_init(struct domain *d, unsigned int nr_spis)
     /* SPIs are routed to VCPU0 by default */
     for (i=0; i<d->arch.vgic.nr_spis; i++)
         vgic_init_pending_irq(&d->arch.vgic.pending_irqs[i], i + 32, 0);
-    /* SPIs are routed to VCPU0 by default */
-    for ( i = 0; i < DOMAIN_NR_RANKS(d); i++ )
-        vgic_rank_init(&d->arch.vgic.shared_irqs[i], i + 1, 0);
 
     ret = d->arch.vgic.handler->domain_init(d);
     if ( ret )
@@ -178,7 +133,6 @@ void domain_vgic_free(struct domain *d)
     }
 
     d->arch.vgic.handler->domain_free(d);
-    xfree(d->arch.vgic.shared_irqs);
     xfree(d->arch.vgic.pending_irqs);
     xfree(d->arch.vgic.allocated_irqs);
 }
@@ -187,13 +141,6 @@ int vcpu_vgic_init(struct vcpu *v)
 {
     int i;
 
-    v->arch.vgic.private_irqs = xzalloc(struct vgic_irq_rank);
-    if ( v->arch.vgic.private_irqs == NULL )
-      return -ENOMEM;
-
-    /* SGIs/PPIs are always routed to this VCPU */
-    vgic_rank_init(v->arch.vgic.private_irqs, 0, v->vcpu_id);
-
     v->domain->arch.vgic.handler->vcpu_init(v);
 
     memset(&v->arch.vgic.pending_irqs, 0, sizeof(v->arch.vgic.pending_irqs));
@@ -210,7 +157,6 @@ int vcpu_vgic_init(struct vcpu *v)
 
 int vcpu_vgic_free(struct vcpu *v)
 {
-    xfree(v->arch.vgic.private_irqs);
     return 0;
 }
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 8dfc1d1..418400f 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -83,15 +83,12 @@ struct arch_domain
          * shared_irqs where each member contains its own locking.
          *
          * If both class of lock is required then this lock must be
-         * taken first. If multiple rank locks are required (including
-         * the per-vcpu private_irqs rank) then they must be taken in
-         * rank order.
+         * taken first.
          */
         spinlock_t lock;
         uint32_t ctlr;
         int nr_spis; /* Number of SPIs */
         unsigned long *allocated_irqs; /* bitmap of IRQs allocated */
-        struct vgic_irq_rank *shared_irqs;
         /*
          * SPIs are domain global, SGIs and PPIs are per-VCPU and stored in
          * struct arch_vcpu.
@@ -248,7 +245,6 @@ struct arch_vcpu
          * struct arch_domain.
          */
         struct pending_irq pending_irqs[32];
-        struct vgic_irq_rank *private_irqs;
 
         /* This list is ordered by IRQ priority and it is used to keep
          * track of the IRQs that the VGIC injected into the guest.
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 233ff1f..9c79c5e 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -101,16 +101,6 @@ struct pending_irq
     spinlock_t lock;
 };
 
-#define NR_INTERRUPT_PER_RANK   32
-#define INTERRUPT_RANK_MASK (NR_INTERRUPT_PER_RANK - 1)
-
-/* Represents state corresponding to a block of 32 interrupts */
-struct vgic_irq_rank {
-    spinlock_t lock; /* Covers access to all other members of this struct */
-
-    uint8_t index;
-};
-
 struct sgi_target {
     uint8_t aff1;
     uint16_t list;
@@ -137,42 +127,12 @@ struct vgic_ops {
     const unsigned int max_vcpus;
 };
 
-/* Number of ranks of interrupt registers for a domain */
-#define DOMAIN_NR_RANKS(d) (((d)->arch.vgic.nr_spis+31)/32)
-
 #define vgic_lock(v)   spin_lock_irq(&(v)->domain->arch.vgic.lock)
 #define vgic_unlock(v) spin_unlock_irq(&(v)->domain->arch.vgic.lock)
 
 #define vgic_irq_lock(p, flags) spin_lock_irqsave(&(p)->lock, flags)
 #define vgic_irq_unlock(p, flags) spin_unlock_irqrestore(&(p)->lock, flags)
 
-#define vgic_lock_rank(v, r, flags)   spin_lock_irqsave(&(r)->lock, flags)
-#define vgic_unlock_rank(v, r, flags) spin_unlock_irqrestore(&(r)->lock, flags)
-
-/*
- * Rank containing GICD_<FOO><n> for GICD_<FOO> with
- * <b>-bits-per-interrupt
- */
-static inline int REG_RANK_NR(int b, uint32_t n)
-{
-    switch ( b )
-    {
-    /*
-     * IRQ ranks are of size 32. So n cannot be shifted beyond 5 for 32
-     * and above. For 64-bit n is already shifted DBAT_DOUBLE_WORD
-     * by the caller
-     */
-    case 64:
-    case 32: return n >> 5;
-    case 16: return n >> 4;
-    case 8: return n >> 3;
-    case 4: return n >> 2;
-    case 2: return n >> 1;
-    case 1: return n;
-    default: BUG();
-    }
-}
-
 void vgic_lock_irqs(struct vcpu *v, unsigned int nrirqs, unsigned int first_irq,
                     struct pending_irq **pirqs);
 void vgic_unlock_irqs(struct pending_irq **pirqs, unsigned int nrirqs);
@@ -193,12 +153,6 @@ void vgic_store_irq_disable(struct vcpu *v, unsigned int first_irq,
 enum gic_sgi_mode;
 
 /*
- * Offset of GICD_<FOO><n> with its rank, for GICD_<FOO> size <s> with
- * <b>-bits-per-interrupt.
- */
-#define REG_RANK_INDEX(b, n, s) ((((n) >> s) & ((b)-1)) % 32)
-
-/*
  * In the moment vgic_num_irqs() just covers SPIs and the private IRQs,
  * as it's mostly used for allocating the pending_irq and irq_desc array,
  * in which LPIs don't participate.
@@ -217,8 +171,6 @@ extern void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq,
                                   unsigned int vcpu_id);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 extern struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq);
-extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
-extern struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq);
 extern bool vgic_emulate(struct cpu_user_regs *regs, union hsr hsr);
 extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
 int vgic_v2_init(struct domain *d, int *mmio_count);
-- 
2.9.0




* Re: [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock
  2017-07-21 19:59 ` [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock Andre Przywara
@ 2017-08-10 15:19   ` Julien Grall
  2017-08-10 15:35   ` Julien Grall
  1 sibling, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-10 15:19 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 20:59, Andre Przywara wrote:
> Currently we protect the pending_irq structure with the corresponding
> VGIC VCPU lock. There are problems in certain corner cases (for
> instance if an IRQ is migrating), so let's introduce a per-IRQ lock,
> which will protect the consistency of this structure independent from
> any VCPU.
> For now this just introduces and initializes the lock, also adds
> wrapper macros to simplify its usage (and help debugging).
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic.c        |  1 +
>  xen/include/asm-arm/vgic.h | 11 +++++++++++
>  2 files changed, 12 insertions(+)
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 1e5107b..38dacd3 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -69,6 +69,7 @@ void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
>      memset(p, 0, sizeof(*p));
>      INIT_LIST_HEAD(&p->inflight);
>      INIT_LIST_HEAD(&p->lr_queue);
> +    spin_lock_init(&p->lock);
>      p->irq = virq;
>      p->lpi_vcpu_id = INVALID_VCPU_ID;
>  }
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index d4ed23d..1c38b9a 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -90,6 +90,14 @@ struct pending_irq
>       * TODO: when implementing irq migration, taking only the current
>       * vgic lock is not going to be enough. */
>      struct list_head lr_queue;
> +    /* The lock protects the consistency of this structure. A single status bit
> +     * can be read and/or set without holding the lock using the atomic
> +     * set_bit/clear_bit/test_bit functions, however accessing multiple bits or
> +     * relating to other members in this struct requires the lock.
> +     * The list_head members are protected by their corresponding VCPU lock,
> +     * it is not sufficient to hold this pending_irq lock here to query or
> +     * change list order or affiliation. */

Coding style:

/*
  * Foo
  * Bar
  */

> +    spinlock_t lock;
>  };
>
>  #define NR_INTERRUPT_PER_RANK   32
> @@ -156,6 +164,9 @@ struct vgic_ops {
>  #define vgic_lock(v)   spin_lock_irq(&(v)->domain->arch.vgic.lock)
>  #define vgic_unlock(v) spin_unlock_irq(&(v)->domain->arch.vgic.lock)
>
> +#define vgic_irq_lock(p, flags) spin_lock_irqsave(&(p)->lock, flags)
> +#define vgic_irq_unlock(p, flags) spin_unlock_irqrestore(&(p)->lock, flags)
> +
>  #define vgic_lock_rank(v, r, flags)   spin_lock_irqsave(&(r)->lock, flags)
>  #define vgic_unlock_rank(v, r, flags) spin_unlock_irqrestore(&(r)->lock, flags)
>
>

Cheers,

-- 
Julien Grall



* Re: [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock
  2017-07-21 19:59 ` [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock Andre Przywara
  2017-08-10 15:19   ` Julien Grall
@ 2017-08-10 15:35   ` Julien Grall
  2017-08-16 16:27     ` Andre Przywara
  1 sibling, 1 reply; 45+ messages in thread
From: Julien Grall @ 2017-08-10 15:35 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi,

On 21/07/17 20:59, Andre Przywara wrote:
> Currently we protect the pending_irq structure with the corresponding
> VGIC VCPU lock. There are problems in certain corner cases (for
> instance if an IRQ is migrating), so let's introduce a per-IRQ lock,
> which will protect the consistency of this structure independent from
> any VCPU.
> For now this just introduces and initializes the lock, also adds
> wrapper macros to simplify its usage (and help debugging).
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic.c        |  1 +
>  xen/include/asm-arm/vgic.h | 11 +++++++++++
>  2 files changed, 12 insertions(+)
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 1e5107b..38dacd3 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -69,6 +69,7 @@ void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
>      memset(p, 0, sizeof(*p));
>      INIT_LIST_HEAD(&p->inflight);
>      INIT_LIST_HEAD(&p->lr_queue);
> +    spin_lock_init(&p->lock);
>      p->irq = virq;
>      p->lpi_vcpu_id = INVALID_VCPU_ID;
>  }
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index d4ed23d..1c38b9a 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -90,6 +90,14 @@ struct pending_irq
>       * TODO: when implementing irq migration, taking only the current
>       * vgic lock is not going to be enough. */
>      struct list_head lr_queue;
> +    /* The lock protects the consistency of this structure. A single status bit
> +     * can be read and/or set without holding the lock using the atomic
> +     * set_bit/clear_bit/test_bit functions, however accessing multiple bits or
> +     * relating to other members in this struct requires the lock.
> +     * The list_head members are protected by their corresponding VCPU lock,
> +     * it is not sufficient to hold this pending_irq lock here to query or
> +     * change list order or affiliation. */

Actually, I have one question here. Is the vCPU lock sufficient to 
protect the list_head members, or do you also mandate that the 
pending_irq be locked as well?

Also, it would be good to have the locking order documented, maybe in 
docs/misc?
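
For reference, the nesting used elsewhere in this series (e.g. in 
vgic_enable_irqs()) takes the VCPU's vgic lock first and the per-IRQ 
pending_irq lock second, releasing in reverse order. A minimal 
stand-alone sketch of that discipline follows; pthread mutexes stand in 
for Xen's spinlocks and all names are illustrative, not the real Xen 
types or API:

```c
#include <assert.h>
#include <pthread.h>

/* Illustrative stand-ins for Xen's structures -- not the real types. */
struct vcpu_vgic {
    pthread_mutex_t lock;          /* stands in for v->arch.vgic.lock */
};

struct pending_irq {
    pthread_mutex_t lock;          /* stands in for the per-IRQ spinlock */
    unsigned long status;
};

/*
 * Read an IRQ's status while holding both locks, honouring the order
 * "VCPU lock first, then pending_irq lock", unlocking in reverse.
 */
unsigned long read_status_ordered(struct vcpu_vgic *v, struct pending_irq *p)
{
    unsigned long s;

    pthread_mutex_lock(&v->lock);      /* outer lock */
    pthread_mutex_lock(&p->lock);      /* inner lock */
    s = p->status;
    pthread_mutex_unlock(&p->lock);    /* release in reverse order */
    pthread_mutex_unlock(&v->lock);

    return s;
}
```

Sticking to one global acquisition order like this is what rules out 
ABBA deadlocks once both lock classes exist.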

> +    spinlock_t lock;
>  };
>
>  #define NR_INTERRUPT_PER_RANK   32
> @@ -156,6 +164,9 @@ struct vgic_ops {
>  #define vgic_lock(v)   spin_lock_irq(&(v)->domain->arch.vgic.lock)
>  #define vgic_unlock(v) spin_unlock_irq(&(v)->domain->arch.vgic.lock)
>
> +#define vgic_irq_lock(p, flags) spin_lock_irqsave(&(p)->lock, flags)
> +#define vgic_irq_unlock(p, flags) spin_unlock_irqrestore(&(p)->lock, flags)
> +
>  #define vgic_lock_rank(v, r, flags)   spin_lock_irqsave(&(r)->lock, flags)
>  #define vgic_unlock_rank(v, r, flags) spin_unlock_irqrestore(&(r)->lock, flags)
>
>

Cheers,

-- 
Julien Grall



* Re: [RFC PATCH v2 03/22] ARM: vGIC: move gic_raise_inflight_irq() into vgic_vcpu_inject_irq()
  2017-07-21 19:59 ` [RFC PATCH v2 03/22] ARM: vGIC: move gic_raise_inflight_irq() into vgic_vcpu_inject_irq() Andre Przywara
@ 2017-08-10 16:28   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-10 16:28 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 20:59, Andre Przywara wrote:
> Currently there is a gic_raise_inflight_irq(), which serves the very
> special purpose of handling a newly injected interrupt while an older
> one is still handled. This has only one user, in vgic_vcpu_inject_irq().
>
> Now with the introduction of the pending_irq lock this will later on
> result in a nasty deadlock, which can only be solved properly by
> actually embedding the function into the caller (and dropping the lock
> later in-between).
>
> This has the admittedly hideous consequence of needing to export
> gic_update_one_lr(), but this will go away in a later stage of a rework.
> In this respect this patch is more a temporary kludge.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic.c        | 30 +-----------------------------
>  xen/arch/arm/vgic.c       | 11 ++++++++++-
>  xen/include/asm-arm/gic.h |  2 +-
>  3 files changed, 12 insertions(+), 31 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 2c99d71..5bd66a2 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -44,8 +44,6 @@ static DEFINE_PER_CPU(uint64_t, lr_mask);
>
>  #undef GIC_DEBUG
>
> -static void gic_update_one_lr(struct vcpu *v, int i);
> -
>  static const struct gic_hw_operations *gic_hw_ops;
>
>  void register_gic_ops(const struct gic_hw_operations *ops)
> @@ -416,32 +414,6 @@ void gic_remove_irq_from_queues(struct vcpu *v, struct pending_irq *p)
>      gic_remove_from_lr_pending(v, p);
>  }
>
> -void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
> -{
> -    struct pending_irq *n = irq_to_pending(v, virtual_irq);
> -
> -    /* If an LPI has been removed meanwhile, there is nothing left to raise. */
> -    if ( unlikely(!n) )
> -        return;
> -
> -    ASSERT(spin_is_locked(&v->arch.vgic.lock));
> -
> -    /* Don't try to update the LR if the interrupt is disabled */
> -    if ( !test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
> -        return;
> -
> -    if ( list_empty(&n->lr_queue) )
> -    {
> -        if ( v == current )
> -            gic_update_one_lr(v, n->lr);
> -    }
> -#ifdef GIC_DEBUG
> -    else
> -        gdprintk(XENLOG_DEBUG, "trying to inject irq=%u into d%dv%d, when it is still lr_pending\n",
> -                 virtual_irq, v->domain->domain_id, v->vcpu_id);
> -#endif
> -}
> -
>  /*
>   * Find an unused LR to insert an IRQ into, starting with the LR given
>   * by @lr. If this new interrupt is a PRISTINE LPI, scan the other LRs to
> @@ -503,7 +475,7 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>      gic_add_to_lr_pending(v, p);
>  }
>
> -static void gic_update_one_lr(struct vcpu *v, int i)
> +void gic_update_one_lr(struct vcpu *v, int i)
>  {
>      struct pending_irq *p;
>      int irq;
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 38dacd3..7b122cd 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -536,7 +536,16 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
>
>      if ( !list_empty(&n->inflight) )
>      {
> -        gic_raise_inflight_irq(v, virq);
> +        bool update = test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) &&
> +                      list_empty(&n->lr_queue) && (v == current);
> +
> +        if ( update )
> +            gic_update_one_lr(v, n->lr);
> +#ifdef GIC_DEBUG
> +        else
> +            gdprintk(XENLOG_DEBUG, "trying to inject irq=%u into d%dv%d, when it is still lr_pending\n",
> +                     n->irq, v->domain->domain_id, v->vcpu_id);

The previous code printed this message only when the pending_irq was 
still queued. Now it will be printed any time the LR is not updated.
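
The difference between the two conditions can be made explicit with a 
small stand-alone sketch; the predicates below are purely illustrative 
helpers, not Xen code:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative predicate: the original code printed the debug message
 * only when the IRQ was still lr_pending, i.e. its lr_queue was
 * non-empty.
 */
bool old_behaviour_prints(bool lr_queue_empty)
{
    return !lr_queue_empty;
}

/* The reworked code prints whenever the LR ends up not being updated. */
bool new_behaviour_prints(bool enabled, bool lr_queue_empty, bool is_current)
{
    bool update = enabled && lr_queue_empty && is_current;

    return !update;
}
```

For an enabled IRQ with an empty lr_queue on a non-current VCPU, the 
reworked condition prints where the original one did not.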

> +#endif
>          goto out;
>      }
>
> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> index 6203dc5..cf8b8fb 100644
> --- a/xen/include/asm-arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -237,12 +237,12 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
>
>  extern void gic_inject(void);
>  extern void gic_clear_pending_irqs(struct vcpu *v);
> +extern void gic_update_one_lr(struct vcpu *v, int lr);
>  extern int gic_events_need_delivery(void);
>
>  extern void init_maintenance_interrupt(void);
>  extern void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
>          unsigned int priority);
> -extern void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq);
>  extern void gic_remove_from_lr_pending(struct vcpu *v, struct pending_irq *p);
>  extern void gic_remove_irq_from_queues(struct vcpu *v, struct pending_irq *p);
>
>

Cheers,

-- 
Julien Grall



* Re: [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter
  2017-07-21 19:59 ` [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter Andre Przywara
@ 2017-08-11 14:10   ` Julien Grall
  2017-08-16 16:48     ` Andre Przywara
  2017-08-17 17:06     ` Andre Przywara
  0 siblings, 2 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-11 14:10 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 20:59, Andre Przywara wrote:
> Since the GICs MMIO access always covers a number of IRQs at once,
> introduce wrapper functions which loop over those IRQs, take their
> locks and read or update the priority values.
> This will be used in a later patch.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic.c        | 37 +++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/vgic.h |  5 +++++
>  2 files changed, 42 insertions(+)
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 434b7e2..b2c9632 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -243,6 +243,43 @@ static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
>      return ACCESS_ONCE(rank->priority[virq & INTERRUPT_RANK_MASK]);
>  }
>
> +#define MAX_IRQS_PER_IPRIORITYR 4

The name gives the impression that an IPRIORITYR may cover only one 
IRQ, but this is not true: the register always covers four IRQs. 
However, it can be accessed with byte or word granularity.

> +uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,

I am well aware that the vgic code mixes virq and irq. Moving forward, 
we should use virq to avoid confusion.

> +                                 unsigned int first_irq)

Please stay consistent with the naming: either nr_irqs/first_irq or 
nrirqs/firstirq, but not a mix.

Also, it makes more sense to describe the start first, then the number.

> +{
> +    struct pending_irq *pirqs[MAX_IRQS_PER_IPRIORITYR];
> +    unsigned long flags;
> +    uint32_t ret = 0, i;
> +
> +    local_irq_save(flags);
> +    vgic_lock_irqs(v, nrirqs, first_irq, pirqs);

I am not convinced of the usefulness of taking all the locks in one go. 
At any point in time, you only need to hold the lock of a single pending_irq.
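
The alternative would look roughly like the sketch below: take each 
pending_irq lock only around the single byte access instead of holding 
all four at once. This is a stand-alone illustration, with pthread 
mutexes in place of Xen's spinlocks and illustrative names throughout:

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Illustrative stand-in for Xen's type -- not the real definition. */
struct pending_irq {
    pthread_mutex_t lock;   /* stands in for the per-IRQ spinlock */
    uint8_t priority;
};

#define MAX_IRQS_PER_IPRIORITYR 4

/*
 * Assemble one IPRIORITYR value, taking each per-IRQ lock only around
 * the single byte read instead of locking all four IRQs up front.
 */
uint32_t fetch_irq_priority(struct pending_irq *pirqs, unsigned int nr_irqs)
{
    uint32_t reg = 0;
    unsigned int i;

    for ( i = 0; i < nr_irqs; i++ )
    {
        pthread_mutex_lock(&pirqs[i].lock);
        reg |= (uint32_t)pirqs[i].priority << (i * 8);
        pthread_mutex_unlock(&pirqs[i].lock);
    }

    return reg;
}
```

The trade-off is that the assembled register value is no longer a 
single consistent snapshot of all four priorities, which the 
architecture arguably does not require for a read of IPRIORITYR anyway.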

> +
> +    for ( i = 0; i < nrirqs; i++ )
> +        ret |= pirqs[i]->priority << (i * 8);

Please avoid open-coding numbers.

> +
> +    vgic_unlock_irqs(pirqs, nrirqs);
> +    local_irq_restore(flags);
> +
> +    return ret;
> +}
> +
> +void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
> +                             unsigned int first_irq, uint32_t value)
> +{
> +    struct pending_irq *pirqs[MAX_IRQS_PER_IPRIORITYR];
> +    unsigned long flags;
> +    unsigned int i;
> +
> +    local_irq_save(flags);
> +    vgic_lock_irqs(v, nrirqs, first_irq, pirqs);
> +
> +    for ( i = 0; i < nrirqs; i++, value >>= 8 )

Same here.

> +        pirqs[i]->priority = value & 0xff;
> +
> +    vgic_unlock_irqs(pirqs, nrirqs);
> +    local_irq_restore(flags);
> +}
> +
>  bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
>  {
>      unsigned long flags;
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index ecf4969..f3791c8 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -198,6 +198,11 @@ void vgic_lock_irqs(struct vcpu *v, unsigned int nrirqs, unsigned int first_irq,
>                      struct pending_irq **pirqs);
>  void vgic_unlock_irqs(struct pending_irq **pirqs, unsigned int nrirqs);
>
> +uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
> +                                 unsigned int first_irq);
> +void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
> +                             unsigned int first_irq, uint32_t reg);
> +
>  enum gic_sgi_mode;
>
>  /*
>

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [RFC PATCH v2 08/22] ARM: vGIC: move virtual IRQ priority from rank to pending_irq
  2017-07-21 19:59 ` [RFC PATCH v2 08/22] ARM: vGIC: move virtual IRQ priority from rank to pending_irq Andre Przywara
@ 2017-08-11 14:39   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-11 14:39 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 20:59, Andre Przywara wrote:
> So far a virtual interrupt's priority is stored in the irq_rank
> structure, which covers multiple IRQs and has a single lock for this
> group.
> Generalize the already existing priority variable in struct pending_irq
> to not only cover LPIs, but every IRQ. Access to this value is protected
> by the per-IRQ lock.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v2.c     | 34 ++++++----------------------------
>  xen/arch/arm/vgic-v3.c     | 36 ++++++++----------------------------
>  xen/arch/arm/vgic.c        | 41 +++++++++++++++++------------------------
>  xen/include/asm-arm/vgic.h | 10 ----------
>  4 files changed, 31 insertions(+), 90 deletions(-)
>
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index cf4ab89..ed7ff3b 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -171,6 +171,7 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
>      struct vgic_irq_rank *rank;
>      int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
>      unsigned long flags;
> +    unsigned int irq;

I am going to repeat the comment I made on v1, s/irq/virq.

>
>      perfc_incr(vgicd_reads);
>
> @@ -250,22 +251,10 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
>          goto read_as_zero;
>
>      case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
> -    {
> -        uint32_t ipriorityr;
> -        uint8_t rank_index;
> -
>          if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 8, gicd_reg - GICD_IPRIORITYR, DABT_WORD);
> -        if ( rank == NULL ) goto read_as_zero;
> -        rank_index = REG_RANK_INDEX(8, gicd_reg - GICD_IPRIORITYR, DABT_WORD);
> -
> -        vgic_lock_rank(v, rank, flags);
> -        ipriorityr = ACCESS_ONCE(rank->ipriorityr[rank_index]);
> -        vgic_unlock_rank(v, rank, flags);
> -        *r = vreg_reg32_extract(ipriorityr, info);
> -
> +        irq = gicd_reg - GICD_IPRIORITYR; /* 8 bit per IRQ, so IRQ = offset */
> +        *r = vgic_fetch_irq_priority(v, irq, (dabt.size == DABT_BYTE) ? 1 : 4);

vgic_fetch_irq_priority assumes that there is always a pending_irq 
associated with a given vIRQ. However, this is not true for any vIRQ 
above the maximum supported by the vGIC (which depends on the 
configuration).

This was protected by the rank == NULL check, which now disappears.

>          return 1;
> -    }
>
>      case VREG32(0x7FC):
>          goto read_reserved;
> @@ -415,6 +404,7 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
>      int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
>      uint32_t tr;
>      unsigned long flags;
> +    unsigned int irq;

Same here for the name.

>
>      perfc_incr(vgicd_writes);
>
> @@ -498,23 +488,11 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
>          goto write_ignore_32;
>
>      case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
> -    {
> -        uint32_t *ipriorityr, priority;
> -
>          if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 8, gicd_reg - GICD_IPRIORITYR, DABT_WORD);
> -        if ( rank == NULL) goto write_ignore;
> -        vgic_lock_rank(v, rank, flags);
> -        ipriorityr = &rank->ipriorityr[REG_RANK_INDEX(8,
> -                                                      gicd_reg - GICD_IPRIORITYR,
> -                                                      DABT_WORD)];
> -        priority = ACCESS_ONCE(*ipriorityr);
> -        vreg_reg32_update(&priority, r, info);
> -        ACCESS_ONCE(*ipriorityr) = priority;
>
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = gicd_reg - GICD_IPRIORITYR; /* 8 bit per IRQ, so IRQ = offset */
> +        vgic_store_irq_priority(v, (dabt.size == DABT_BYTE) ? 1 : 4, irq, r);

Same here for the check.

>          return 1;
> -    }
>
>      case VREG32(0x7FC):
>          goto write_reserved;
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index ad9019e..e58e77e 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -677,6 +677,7 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
>      struct hsr_dabt dabt = info->dabt;
>      struct vgic_irq_rank *rank;
>      unsigned long flags;
> +    unsigned int irq;

Same here for the name.

>
>      switch ( reg )
>      {
> @@ -714,23 +715,11 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
>          goto read_as_zero;
>
>      case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
> -    {
> -        uint32_t ipriorityr;
> -        uint8_t rank_index;
> -
>          if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 8, reg - GICD_IPRIORITYR, DABT_WORD);
> -        if ( rank == NULL ) goto read_as_zero;
> -        rank_index = REG_RANK_INDEX(8, reg - GICD_IPRIORITYR, DABT_WORD);
> -
> -        vgic_lock_rank(v, rank, flags);
> -        ipriorityr = ACCESS_ONCE(rank->ipriorityr[rank_index]);
> -        vgic_unlock_rank(v, rank, flags);
> -
> -        *r = vreg_reg32_extract(ipriorityr, info);
> -
> +        irq = reg - GICD_IPRIORITYR; /* 8 bit per IRQ, so IRQ = offset */
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;

You want to use vgic_num_irqs(v->domain) here. It might be nice to have 
a helper checking the validity of an interrupt, as I suspect you will 
need this in quite a few places now.

> +        *r = vgic_fetch_irq_priority(v, irq, (dabt.size == DABT_BYTE) ? 1 : 4);
>          return 1;
> -    }
>
>      case VRANGE32(GICD_ICFGR, GICD_ICFGRN):
>      {
> @@ -774,6 +763,7 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
>      struct vgic_irq_rank *rank;
>      uint32_t tr;
>      unsigned long flags;
> +    unsigned int irq;

Same for the name.

>
>      switch ( reg )
>      {
> @@ -831,21 +821,11 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
>          goto write_ignore_32;
>
>      case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
> -    {
> -        uint32_t *ipriorityr, priority;
> -
>          if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 8, reg - GICD_IPRIORITYR, DABT_WORD);
> -        if ( rank == NULL ) goto write_ignore;
> -        vgic_lock_rank(v, rank, flags);
> -        ipriorityr = &rank->ipriorityr[REG_RANK_INDEX(8, reg - GICD_IPRIORITYR,
> -                                                      DABT_WORD)];
> -        priority = ACCESS_ONCE(*ipriorityr);
> -        vreg_reg32_update(&priority, r, info);
> -        ACCESS_ONCE(*ipriorityr) = priority;
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = reg - GICD_IPRIORITYR; /* 8 bit per IRQ, so IRQ = offset */
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;

Ditto.

> +        vgic_store_irq_priority(v, (dabt.size == DABT_BYTE) ? 1 : 4, irq, r);
>          return 1;
> -    }
>
>      case VREG32(GICD_ICFGR): /* Restricted to configure SGIs */
>          goto write_ignore_32;
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index b2c9632..ddcd99b 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -231,18 +231,6 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
>      return v->domain->vcpu[target];
>  }
>
> -static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
> -{
> -    struct vgic_irq_rank *rank;
> -
> -    /* LPIs don't have a rank, also store their priority separately. */
> -    if ( is_lpi(virq) )
> -        return v->domain->arch.vgic.handler->lpi_get_priority(v->domain, virq);
> -
> -    rank = vgic_rank_irq(v, virq);
> -    return ACCESS_ONCE(rank->priority[virq & INTERRUPT_RANK_MASK]);
> -}
> -
>  #define MAX_IRQS_PER_IPRIORITYR 4
>  uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
>                                   unsigned int first_irq)
> @@ -567,37 +555,40 @@ void vgic_clear_pending_irqs(struct vcpu *v)
>
>  void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
>  {
> -    uint8_t priority;
>      struct pending_irq *iter, *n;
> -    unsigned long flags;
> +    unsigned long flags, vcpu_flags;

This renaming of flags -> vcpu_flags seems unwarranted to me. But it 
looks to me like you need two sets of flags, because vgic_irq_lock 
requires taking flags.

Technically we don't care about the flags for the second lock, as we 
know IRQs are disabled. So I would introduce a new helper that simply 
locks, plus maybe an ASSERT to check that IRQs were previously 
disabled. Something like:

ASSERT(!local_irq_is_enabled());
spin_lock(....);

You would also need the counter-part to unlock it.

>      bool running;
>
> -    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +    spin_lock_irqsave(&v->arch.vgic.lock, vcpu_flags);
>
>      n = irq_to_pending(v, virq);
>      /* If an LPI has been removed, there is nothing to inject here. */
>      if ( unlikely(!n) )
>      {
> -        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +        spin_unlock_irqrestore(&v->arch.vgic.lock, vcpu_flags);
>          return;
>      }
>
>      /* vcpu offline */
>      if ( test_bit(_VPF_down, &v->pause_flags) )
>      {
> -        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +        spin_unlock_irqrestore(&v->arch.vgic.lock, vcpu_flags);
>          return;
>      }
>
> +    vgic_irq_lock(n, flags);

It looks to me like this locking should have been introduced in a 
separate patch with an associated description, because it is not 
really related to this patch (you protect more than the priority). And 
I think both the rank and the pending_irq locking could cope. None of 
the patches before would make it worse.

> +
>      set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
>
>      if ( !list_empty(&n->inflight) )
>      {
>          bool update = test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) &&
>                        list_empty(&n->lr_queue) && (v == current);
> +        int lr = ACCESS_ONCE(n->lr);

Why do you need the ACCESS_ONCE here? This does not seem related to this 
patch.

>
> +        vgic_irq_unlock(n, flags);
>          if ( update )
> -            gic_update_one_lr(v, n->lr);
> +            gic_update_one_lr(v, lr);
>  #ifdef GIC_DEBUG
>          else
>              gdprintk(XENLOG_DEBUG, "trying to inject irq=%u into d%dv%d, when it is still lr_pending\n",
> @@ -606,24 +597,26 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
>          goto out;
>      }
>
> -    priority = vgic_get_virq_priority(v, virq);
> -    n->cur_priority = priority;
> +    n->cur_priority = n->priority;
>
>      /* the irq is enabled */
>      if ( test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
> -        gic_raise_guest_irq(v, virq, priority);
> +        gic_raise_guest_irq(v, virq, n->cur_priority);
>
>      list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
>      {
> -        if ( iter->cur_priority > priority )
> +        if ( iter->cur_priority > n->cur_priority )

If I am not mistaken, cur_priority is protected by the vCPU lock and 
not the pending_irq lock. If so, the comment in patch #1 should be updated.

>          {
>              list_add_tail(&n->inflight, &iter->inflight);
> -            goto out;
> +            goto out_unlock_irq;
>          }
>      }
>      list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
> +
> +out_unlock_irq:
> +    vgic_irq_unlock(n, flags);
>  out:
> -    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +    spin_unlock_irqrestore(&v->arch.vgic.lock, vcpu_flags);
>      /* we have a new higher priority irq, inject it into the guest */
>      running = v->is_running;
>      vcpu_unblock(v);
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index f3791c8..59d52c6 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -113,16 +113,6 @@ struct vgic_irq_rank {
>      uint32_t icfg[2];
>
>      /*
> -     * Provide efficient access to the priority of an vIRQ while keeping
> -     * the emulation simple.
> -     * Note, this is working fine as long as Xen is using little endian.
> -     */
> -    union {
> -        uint8_t priority[32];
> -        uint32_t ipriorityr[8];
> -    };
> -
> -    /*
>       * It's more convenient to store a target VCPU per vIRQ
>       * than the register ITARGETSR/IROUTER itself.
>       * Use atomic operations to read/write the vcpu fields to avoid
>

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH v2 09/22] ARM: vITS: protect LPI priority update with pending_irq lock
  2017-07-21 19:59 ` [RFC PATCH v2 09/22] ARM: vITS: protect LPI priority update with pending_irq lock Andre Przywara
@ 2017-08-11 14:43   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-11 14:43 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 20:59, Andre Przywara wrote:
> As the priority value is now officially a member of struct pending_irq,
> we need to take its lock when manipulating it via ITS commands.
> Make sure we take the IRQ lock after the VCPU lock when we need both.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v3-its.c | 26 +++++++++++++++++++-------
>  1 file changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 66095d4..705708a 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -402,6 +402,7 @@ static int update_lpi_property(struct domain *d, struct pending_irq *p)
>      uint8_t property;
>      int ret;
>
> +    ASSERT(spin_is_locked(&p->lock));
>      /*
>       * If no redistributor has its LPIs enabled yet, we can't access the
>       * property table. In this case we just can't update the properties,
> @@ -419,7 +420,7 @@ static int update_lpi_property(struct domain *d, struct pending_irq *p)
>      if ( ret )
>          return ret;
>
> -    write_atomic(&p->priority, property & LPI_PROP_PRIO_MASK);
> +    p->priority = property & LPI_PROP_PRIO_MASK;
>
>      if ( property & LPI_PROP_ENABLED )
>          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> @@ -457,7 +458,7 @@ static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
>      uint32_t devid = its_cmd_get_deviceid(cmdptr);
>      uint32_t eventid = its_cmd_get_id(cmdptr);
>      struct pending_irq *p;
> -    unsigned long flags;
> +    unsigned long flags, vcpu_flags;

Same remark as on patch #8 for the vcpu_flags and the locking.

>      struct vcpu *vcpu;
>      uint32_t vlpi;
>      int ret = -1;
> @@ -485,7 +486,8 @@ static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
>      if ( unlikely(!p) )
>          goto out_unlock_its;
>
> -    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, vcpu_flags);
> +    vgic_irq_lock(p, flags);
>
>      /* Read the property table and update our cached status. */
>      if ( update_lpi_property(d, p) )
> @@ -497,7 +499,8 @@ static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
>      ret = 0;
>
>  out_unlock:
> -    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
> +    vgic_irq_unlock(p, flags);
> +    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, vcpu_flags);
>
>  out_unlock_its:
>      spin_unlock(&its->its_lock);
> @@ -517,7 +520,7 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
>      struct pending_irq *pirqs[16];
>      uint64_t vlpi = 0;          /* 64-bit to catch overflows */
>      unsigned int nr_lpis, i;
> -    unsigned long flags;
> +    unsigned long flags, vcpu_flags;
>      int ret = 0;
>
>      /*
> @@ -542,7 +545,7 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
>      vcpu = get_vcpu_from_collection(its, collid);
>      spin_unlock(&its->its_lock);
>
> -    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, vcpu_flags);
>      read_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
>
>      do
> @@ -555,9 +558,13 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
>
>          for ( i = 0; i < nr_lpis; i++ )
>          {
> +            vgic_irq_lock(pirqs[i], flags);
>              /* We only care about LPIs on our VCPU. */
>              if ( pirqs[i]->lpi_vcpu_id != vcpu->vcpu_id )
> +            {
> +                vgic_irq_unlock(pirqs[i], flags);

This locking does not seem to be related to the priority.

>                  continue;
> +            }
>
>              vlpi = pirqs[i]->irq;
>              /* If that fails for a single LPI, carry on to handle the rest. */
> @@ -566,6 +573,8 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
>                  update_lpi_vgic_status(vcpu, pirqs[i]);
>              else
>                  ret = err;
> +
> +            vgic_irq_unlock(pirqs[i], flags);
>          }
>      /*
>       * Loop over the next gang of pending_irqs until we reached the end of
> @@ -576,7 +585,7 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
>                (nr_lpis == ARRAY_SIZE(pirqs)) );
>
>      read_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
> -    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
> +    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, vcpu_flags);
>
>      return ret;
>  }
> @@ -712,6 +721,7 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
>      uint32_t intid = its_cmd_get_physical_id(cmdptr), _intid;
>      uint16_t collid = its_cmd_get_collection(cmdptr);
>      struct pending_irq *pirq;
> +    unsigned long flags;
>      struct vcpu *vcpu = NULL;
>      int ret = -1;
>
> @@ -765,7 +775,9 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
>       * We don't need the VGIC VCPU lock here, because the pending_irq isn't
>       * in the radix tree yet.
>       */
> +    vgic_irq_lock(pirq, flags);
>      ret = update_lpi_property(its->d, pirq);
> +    vgic_irq_unlock(pirq, flags);
>      if ( ret )
>          goto out_remove_host_entry;
>
>

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH v2 10/22] ARM: vGIC: protect gic_set_lr() with pending_irq lock
  2017-07-21 19:59 ` [RFC PATCH v2 10/22] ARM: vGIC: protect gic_set_lr() " Andre Przywara
@ 2017-08-15 10:59   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-15 10:59 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 20:59, Andre Przywara wrote:
> When putting a (pending) IRQ into an LR, we should better make sure that
> no-one changes it behind our back. So make sure we take the pending_irq
> lock. This bubbles up to all users of gic_add_to_lr_pending() and
> gic_raise_guest_irq().
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 8dec736..df89530 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -383,6 +383,7 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
>      struct pending_irq *iter;
>
>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
> +    ASSERT(spin_is_locked(&n->lock));

I think we need a similar assert in gic_raise_guest_irq and gic_set_lr.

>
>      if ( !list_empty(&n->lr_queue) )
>          return;
> @@ -480,6 +481,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
>      struct pending_irq *p;
>      int irq;
>      struct gic_lr lr_val;
> +    unsigned long flags;
>
>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>      ASSERT(!local_irq_is_enabled());
> @@ -534,6 +536,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
>          gic_hw_ops->clear_lr(i);
>          clear_bit(i, &this_cpu(lr_mask));
>
> +        vgic_irq_lock(p, flags);
>          if ( p->desc != NULL )
>              clear_bit(_IRQ_INPROGRESS, &p->desc->status);
>          clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> @@ -559,6 +562,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
>                  clear_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
>              }
>          }
> +        vgic_irq_unlock(p, flags);
>      }
>  }
>
> @@ -592,11 +596,11 @@ static void gic_restore_pending_irqs(struct vcpu *v)
>      int lr = 0;
>      struct pending_irq *p, *t, *p_r;
>      struct list_head *inflight_r;
> -    unsigned long flags;
> +    unsigned long flags, vcpu_flags;
>      unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
>      int lrs = nr_lrs;
>
> -    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +    spin_lock_irqsave(&v->arch.vgic.lock, vcpu_flags);

See my comment on previous patches about the renaming.

>
>      if ( list_empty(&v->arch.vgic.lr_pending) )
>          goto out;
> @@ -621,16 +625,20 @@ static void gic_restore_pending_irqs(struct vcpu *v)
>              goto out;
>
>  found:
> +            vgic_irq_lock(p_r, flags);
>              lr = p_r->lr;
>              p_r->lr = GIC_INVALID_LR;
>              set_bit(GIC_IRQ_GUEST_QUEUED, &p_r->status);
>              clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
>              gic_add_to_lr_pending(v, p_r);
>              inflight_r = &p_r->inflight;
> +            vgic_irq_unlock(p_r, flags);

Some description in the commit message is necessary to explain why the 
lock protects more than what the patch is meant to do (i.e., just 
protecting gic_set_lr).

>          }
>
> +        vgic_irq_lock(p, flags);
>          gic_set_lr(lr, p, GICH_LR_PENDING);
>          list_del_init(&p->lr_queue);
> +        vgic_irq_unlock(p, flags);

Ditto. In this case, I thought the lists were protected by the vCPU 
lock, so technically list_del_init(...) could be outside the lock.

>          set_bit(lr, &this_cpu(lr_mask));
>
>          /* We can only evict nr_lrs entries */
> @@ -640,7 +648,7 @@ found:
>      }
>
>  out:
> -    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +    spin_unlock_irqrestore(&v->arch.vgic.lock, vcpu_flags);
>  }
>
>  void gic_clear_pending_irqs(struct vcpu *v)
>

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH v2 11/22] ARM: vGIC: protect gic_events_need_delivery() with pending_irq lock
  2017-07-21 19:59 ` [RFC PATCH v2 11/22] ARM: vGIC: protect gic_events_need_delivery() " Andre Przywara
@ 2017-08-15 11:11   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-15 11:11 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 20:59, Andre Przywara wrote:
> gic_events_need_delivery() reads the cur_priority field twice, also

This does not seem in line with what we discussed, i.e. cur_priority 
can be read without the pending_irq lock, assuming proper barriers are 
in place.

If the problem is reading it twice, then an ACCESS_ONCE(...) should fix it.

> relies on the consistency of status bits.

status has been designed to be used without a lock. If this is not the 
case anymore, then we should document it.

In this particular case, I need a bit more context to see why this 
lock is necessary. IMHO, there are other ways to avoid it, such as 
reading both the priority and the enabled bit beforehand.

> So it should take pending_irq lock.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic.c | 24 +++++++++++++-----------
>  1 file changed, 13 insertions(+), 11 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index df89530..9637682 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -666,7 +666,7 @@ int gic_events_need_delivery(void)
>  {
>      struct vcpu *v = current;
>      struct pending_irq *p;
> -    unsigned long flags;
> +    unsigned long flags, vcpu_flags;
>      const unsigned long apr = gic_hw_ops->read_apr(0);
>      int mask_priority;
>      int active_priority;
> @@ -675,7 +675,7 @@ int gic_events_need_delivery(void)
>      mask_priority = gic_hw_ops->read_vmcr_priority();
>      active_priority = find_next_bit(&apr, 32, 0);
>
> -    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +    spin_lock_irqsave(&v->arch.vgic.lock, vcpu_flags);
>
>      /* TODO: We order the guest irqs by priority, but we don't change
>       * the priority of host irqs. */
> @@ -684,19 +684,21 @@ int gic_events_need_delivery(void)
>       * ordered by priority */
>      list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight )
>      {
> -        if ( GIC_PRI_TO_GUEST(p->cur_priority) >= mask_priority )
> -            goto out;
> -        if ( GIC_PRI_TO_GUEST(p->cur_priority) >= active_priority )
> -            goto out;
> -        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
> +        vgic_irq_lock(p, flags);
> +        if ( GIC_PRI_TO_GUEST(p->cur_priority) < mask_priority &&
> +             GIC_PRI_TO_GUEST(p->cur_priority) < active_priority &&
> +             !test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
>          {
> -            rc = 1;
> -            goto out;
> +            vgic_irq_unlock(p, flags);
> +            continue;
>          }
> +
> +        rc = test_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> +        vgic_irq_unlock(p, flags);
> +        break;
>      }
>
> -out:
> -    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +    spin_unlock_irqrestore(&v->arch.vgic.lock, vcpu_flags);
>      return rc;
>  }
>
>

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH v2 12/22] ARM: vGIC: protect gic_update_one_lr() with pending_irq lock
  2017-07-21 20:00 ` [RFC PATCH v2 12/22] ARM: vGIC: protect gic_update_one_lr() " Andre Przywara
@ 2017-08-15 11:17   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-15 11:17 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 21:00, Andre Przywara wrote:
> When we return from a domain with the active bit set in an LR,
> we update our pending_irq accordingly. This touches multiple status
> bits, so requires the pending_irq lock.

The commit title says "protect gic_update_one_lr()", but here you only 
mention one path. Please explain why the others are safe.

>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 9637682..84b282b 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -508,6 +508,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
>
>      if ( lr_val.state & GICH_LR_ACTIVE )
>      {
> +        vgic_irq_lock(p, flags);

The function has an ASSERT to check that IRQs are disabled, so it is 
not necessary to save/restore the flags.

>          set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
>          if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
>               test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
> @@ -521,6 +522,7 @@ void gic_update_one_lr(struct vcpu *v, int i)
>                  gdprintk(XENLOG_WARNING, "unable to inject hw irq=%d into d%dv%d: already active in LR%d\n",
>                           irq, v->domain->domain_id, v->vcpu_id, i);
>          }
> +        vgic_irq_unlock(p, flags);
>      }
>      else if ( lr_val.state & GICH_LR_PENDING )
>      {
>

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH v2 13/22] ARM: vITS: remove no longer needed lpi_priority wrapper
  2017-07-21 20:00 ` [RFC PATCH v2 13/22] ARM: vITS: remove no longer needed lpi_priority wrapper Andre Przywara
@ 2017-08-15 12:31   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-15 12:31 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 21:00, Andre Przywara wrote:
> For LPIs we stored the priority value in struct pending_irq, but all
> other type of IRQs were using the irq_rank structure for that.
> Now that every IRQ using pending_irq, we can remove the special handling
> we had in place for LPIs and just use the now unified access wrappers.

Can we move this closer to the patch (#9, I think) that removes the 
last reference to the wrapper?

Cheers,

>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v2.c     |  7 -------
>  xen/arch/arm/vgic-v3.c     | 11 -----------
>  xen/include/asm-arm/vgic.h |  1 -
>  3 files changed, 19 deletions(-)
>
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index ed7ff3b..a3fd500 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -690,18 +690,11 @@ static struct pending_irq *vgic_v2_lpi_to_pending(struct domain *d,
>      BUG();
>  }
>
> -static int vgic_v2_lpi_get_priority(struct domain *d, unsigned int vlpi)
> -{
> -    /* Dummy function, no LPIs on a VGICv2. */
> -    BUG();
> -}
> -
>  static const struct vgic_ops vgic_v2_ops = {
>      .vcpu_init   = vgic_v2_vcpu_init,
>      .domain_init = vgic_v2_domain_init,
>      .domain_free = vgic_v2_domain_free,
>      .lpi_to_pending = vgic_v2_lpi_to_pending,
> -    .lpi_get_priority = vgic_v2_lpi_get_priority,
>      .max_vcpus = 8,
>  };
>
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index e58e77e..d3356ae 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -1757,23 +1757,12 @@ static struct pending_irq *vgic_v3_lpi_to_pending(struct domain *d,
>      return pirq;
>  }
>
> -/* Retrieve the priority of an LPI from its struct pending_irq. */
> -static int vgic_v3_lpi_get_priority(struct domain *d, uint32_t vlpi)
> -{
> -    struct pending_irq *p = vgic_v3_lpi_to_pending(d, vlpi);
> -
> -    ASSERT(p);
> -
> -    return p->priority;
> -}
> -
>  static const struct vgic_ops v3_ops = {
>      .vcpu_init   = vgic_v3_vcpu_init,
>      .domain_init = vgic_v3_domain_init,
>      .domain_free = vgic_v3_domain_free,
>      .emulate_reg  = vgic_v3_emulate_reg,
>      .lpi_to_pending = vgic_v3_lpi_to_pending,
> -    .lpi_get_priority = vgic_v3_lpi_get_priority,
>      /*
>       * We use both AFF1 and AFF0 in (v)MPIDR. Thus, the max number of CPU
>       * that can be supported is up to 4096(==256*16) in theory.
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index 59d52c6..6343c95 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -143,7 +143,6 @@ struct vgic_ops {
>      bool (*emulate_reg)(struct cpu_user_regs *regs, union hsr hsr);
>      /* lookup the struct pending_irq for a given LPI interrupt */
>      struct pending_irq *(*lpi_to_pending)(struct domain *d, unsigned int vlpi);
> -    int (*lpi_get_priority)(struct domain *d, uint32_t vlpi);
>      /* Maximum number of vCPU supported */
>      const unsigned int max_vcpus;
>  };
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [RFC PATCH v2 14/22] ARM: vGIC: move virtual IRQ configuration from rank to pending_irq
  2017-07-21 20:00 ` [RFC PATCH v2 14/22] ARM: vGIC: move virtual IRQ configuration from rank to pending_irq Andre Przywara
@ 2017-08-16 11:13   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-16 11:13 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 21:00, Andre Przywara wrote:
> The IRQ configuration (level or edge triggered) for a group of IRQs
> is still stored in the irq_rank structure.
> Introduce a new bit called GIC_IRQ_GUEST_LEVEL in the "status" field,
> which holds that information.
> Remove the storage from the irq_rank and use the existing wrappers to
> store and retrieve the configuration bit for multiple IRQs.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v2.c     | 21 +++---------
>  xen/arch/arm/vgic-v3.c     | 25 ++++----------
>  xen/arch/arm/vgic.c        | 81 +++++++++++++++++++++++++++++++++-------------
>  xen/include/asm-arm/vgic.h |  5 ++-
>  4 files changed, 73 insertions(+), 59 deletions(-)
>
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index a3fd500..0c8a598 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -278,20 +278,12 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
>          goto read_reserved;
>
>      case VRANGE32(GICD_ICFGR, GICD_ICFGRN):
> -    {
> -        uint32_t icfgr;
> -
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 2, gicd_reg - GICD_ICFGR, DABT_WORD);
> -        if ( rank == NULL) goto read_as_zero;
> -        vgic_lock_rank(v, rank, flags);
> -        icfgr = rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR, DABT_WORD)];
> -        vgic_unlock_rank(v, rank, flags);
>
> -        *r = vreg_reg32_extract(icfgr, info);
> +        irq = (gicd_reg - GICD_ICFGR) * 4;
> +        *r = vgic_fetch_irq_config(v, irq);
>
>          return 1;
> -    }
>
>      case VRANGE32(0xD00, 0xDFC):
>          goto read_impl_defined;
> @@ -529,13 +521,8 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
>
>      case VRANGE32(GICD_ICFGR2, GICD_ICFGRN): /* SPIs */
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 2, gicd_reg - GICD_ICFGR, DABT_WORD);
> -        if ( rank == NULL) goto write_ignore;
> -        vgic_lock_rank(v, rank, flags);
> -        vreg_reg32_update(&rank->icfg[REG_RANK_INDEX(2, gicd_reg - GICD_ICFGR,
> -                                                     DABT_WORD)],
> -                          r, info);
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = (gicd_reg - GICD_ICFGR) * 4; /* 2 bit per IRQ */

s/bit/bits/

> +        vgic_store_irq_config(v, irq, r);
>          return 1;
>
>      case VRANGE32(0xD00, 0xDFC):
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index d3356ae..e9e36eb 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -722,20 +722,11 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
>          return 1;
>
>      case VRANGE32(GICD_ICFGR, GICD_ICFGRN):
> -    {
> -        uint32_t icfgr;
> -
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 2, reg - GICD_ICFGR, DABT_WORD);
> -        if ( rank == NULL ) goto read_as_zero;
> -        vgic_lock_rank(v, rank, flags);
> -        icfgr = rank->icfg[REG_RANK_INDEX(2, reg - GICD_ICFGR, DABT_WORD)];
> -        vgic_unlock_rank(v, rank, flags);
> -
> -        *r = vreg_reg32_extract(icfgr, info);
> -
> +        irq = (reg - GICD_ICFGR) * 4;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
> +        *r = vgic_fetch_irq_config(v, irq);
>          return 1;
> -    }
>
>      default:
>          printk(XENLOG_G_ERR
> @@ -834,13 +825,9 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
>          /* ICFGR1 for PPI's, which is implementation defined
>             if ICFGR1 is programmable or not. We chose to program */
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 2, reg - GICD_ICFGR, DABT_WORD);
> -        if ( rank == NULL ) goto write_ignore;
> -        vgic_lock_rank(v, rank, flags);
> -        vreg_reg32_update(&rank->icfg[REG_RANK_INDEX(2, reg - GICD_ICFGR,
> -                                                     DABT_WORD)],
> -                          r, info);
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = (reg - GICD_ICFGR) * 4;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
> +        vgic_store_irq_config(v, irq, r);
>          return 1;
>
>      default:
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index ddcd99b..e5a4765 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -268,6 +268,55 @@ void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
>      local_irq_restore(flags);
>  }
>
> +#define IRQS_PER_CFGR   16
> +/**
> + * vgic_fetch_irq_config: assemble the configuration bits for a group of 16 IRQs
> + * @v: the VCPU for private IRQs, any VCPU of a domain for SPIs
> + * @first_irq: the first IRQ to be queried, must be aligned to 16
> + */
> +uint32_t vgic_fetch_irq_config(struct vcpu *v, unsigned int first_irq)
> +{
> +    struct pending_irq *pirqs[IRQS_PER_CFGR];
> +    unsigned long flags;
> +    uint32_t ret = 0, i;
> +
> +    local_irq_save(flags);
> +    vgic_lock_irqs(v, IRQS_PER_CFGR, first_irq, pirqs);
> +
> +    for ( i = 0; i < IRQS_PER_CFGR; i++ )
> +        if ( test_bit(GIC_IRQ_GUEST_LEVEL, &pirqs[i]->status) )
> +            ret |= 1 << (i * 2);
> +        else
> +            ret |= 3 << (i * 2);
> +
> +    vgic_unlock_irqs(pirqs, IRQS_PER_CFGR);
> +    local_irq_restore(flags);
> +
> +    return ret;
> +}
> +
> +void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
> +                           uint32_t value)
> +{
> +    struct pending_irq *pirqs[IRQS_PER_CFGR];
> +    unsigned long flags;
> +    unsigned int i;
> +
> +    local_irq_save(flags);
> +    vgic_lock_irqs(v, IRQS_PER_CFGR, first_irq, pirqs);
> +
> +    for ( i = 0; i < IRQS_PER_CFGR; i++, value >>= 2 )
> +    {
> +        if ( (value & 0x3) > 1 )
> +            clear_bit(GIC_IRQ_GUEST_LEVEL, &pirqs[i]->status);
> +        else
> +            set_bit(GIC_IRQ_GUEST_LEVEL, &pirqs[i]->status);
> +    }
> +
> +    vgic_unlock_irqs(pirqs, IRQS_PER_CFGR);
> +    local_irq_restore(flags);
> +}
> +
>  bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
>  {
>      unsigned long flags;
> @@ -384,22 +433,6 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>      }
>  }
>
> -#define VGIC_ICFG_MASK(intr) (1 << ((2 * ((intr) % 16)) + 1))
> -
> -/* The function should be called with the rank lock taken */
> -static inline unsigned int vgic_get_virq_type(struct vcpu *v, int n, int index)
> -{
> -    struct vgic_irq_rank *r = vgic_get_rank(v, n);
> -    uint32_t tr = r->icfg[index >> 4];
> -
> -    ASSERT(spin_is_locked(&r->lock));
> -
> -    if ( tr & VGIC_ICFG_MASK(index) )
> -        return IRQ_TYPE_EDGE_RISING;
> -    else
> -        return IRQ_TYPE_LEVEL_HIGH;
> -}
> -
>  void vgic_lock_irqs(struct vcpu *v, unsigned int nrirqs,
>                      unsigned int first_irq, struct pending_irq **pirqs)
>  {
> @@ -424,8 +457,8 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>  {
>      const unsigned long mask = r;
>      struct pending_irq *p;
> -    unsigned int irq;
> -    unsigned long flags;
> +    unsigned int irq, int_type;
> +    unsigned long flags, vcpu_flags;
>      int i = 0;
>      struct vcpu *v_target;
>      struct domain *d = v->domain;
> @@ -436,23 +469,27 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          v_target = vgic_get_target_vcpu(v, irq);
> -        spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
> +        spin_lock_irqsave(&v_target->arch.vgic.lock, vcpu_flags);

Same as my comments about the flags in previous patches.

>          p = irq_to_pending(v_target, irq);
> +        vgic_irq_lock(p, flags);

Same as before about the pending_lock. You need to explain why it 
protects much more than the configuration.

>          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> +        int_type = test_bit(GIC_IRQ_GUEST_LEVEL, &p->status) ?
> +                            IRQ_TYPE_LEVEL_HIGH : IRQ_TYPE_EDGE_RISING;
>          if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
>              gic_raise_guest_irq(v_target, irq, p->cur_priority);
> -        spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
> +        vgic_irq_unlock(p, flags);
> +        spin_unlock_irqrestore(&v_target->arch.vgic.lock, vcpu_flags);
>          if ( p->desc != NULL )
>          {
> -            irq_set_affinity(p->desc, cpumask_of(v_target->processor));
>              spin_lock_irqsave(&p->desc->lock, flags);
> +            irq_set_affinity(p->desc, cpumask_of(v_target->processor));

Why is this moved?

>              /*
>               * The irq cannot be a PPI, we only support delivery of SPIs
>               * to guests.
>               */
>              ASSERT(irq >= 32);
>              if ( irq_type_set_by_domain(d) )
> -                gic_set_irq_type(p->desc, vgic_get_virq_type(v, n, i));
> +                gic_set_irq_type(p->desc, int_type);
>              p->desc->handler->enable(p->desc);
>              spin_unlock_irqrestore(&p->desc->lock, flags);
>          }
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index 6343c95..14c22b2 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -73,6 +73,7 @@ struct pending_irq
>  #define GIC_IRQ_GUEST_ENABLED  3
>  #define GIC_IRQ_GUEST_MIGRATING   4
>  #define GIC_IRQ_GUEST_PRISTINE_LPI  5
> +#define GIC_IRQ_GUEST_LEVEL    6
>      unsigned long status;
>      struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
>      unsigned int irq;
> @@ -110,7 +111,6 @@ struct vgic_irq_rank {
>      uint8_t index;
>
>      uint32_t ienable;
> -    uint32_t icfg[2];
>
>      /*
>       * It's more convenient to store a target VCPU per vIRQ
> @@ -191,6 +191,9 @@ uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
>                                   unsigned int first_irq);
>  void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
>                               unsigned int first_irq, uint32_t reg);
> +uint32_t vgic_fetch_irq_config(struct vcpu *v, unsigned int first_irq);
> +void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
> +                           uint32_t reg);
>
>  enum gic_sgi_mode;
>
>

Cheers,

-- 
Julien Grall



* Re: [RFC PATCH v2 17/22] ARM: vGIC: introduce vgic_lock_vcpu_irq()
  2017-07-21 20:00 ` [RFC PATCH v2 17/22] ARM: vGIC: introduce vgic_lock_vcpu_irq() Andre Przywara
@ 2017-08-16 11:23   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-16 11:23 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 21:00, Andre Przywara wrote:
> Since a VCPU can own multiple IRQs, the natural locking order is to take
> a VCPU lock first, then the individual per-IRQ locks.
> However there are situations where the target VCPU is not known without
> looking into the struct pending_irq first, which usually means we need to
> take the IRQ lock first.
> To solve this problem, we provide a function called vgic_lock_vcpu_irq(),
> which takes a locked struct pending_irq and returns with *both* the
> VCPU and the IRQ lock held.
> This is done by looking up the target VCPU, then briefly dropping the
> IRQ lock, taking the VCPU lock, then grabbing the per-IRQ lock again.
> Before returning there is a check whether something has changed in the
> brief period where we didn't hold the IRQ lock, retrying in this (very
> rare) case.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 42 insertions(+)
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 1ba0010..0e6dfe5 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -224,6 +224,48 @@ int vcpu_vgic_free(struct vcpu *v)
>      return 0;
>  }
>
> +/**
> + * vgic_lock_vcpu_irq(): lock both the pending_irq and the corresponding VCPU
> + *
> + * @v: the VCPU (for private IRQs)
> + * @p: pointer to the locked struct pending_irq
> + * @flags: pointer to the IRQ flags used when locking the VCPU
> + *
> + * The function takes a locked IRQ and returns with both the IRQ and the
> + * corresponding VCPU locked. This is non-trivial due to the locking order
> + * being actually the other way round (VCPU first, then IRQ).
> + *
> + * Returns: pointer to the VCPU this IRQ is targeting.
> + */
> +struct vcpu *vgic_lock_vcpu_irq(struct vcpu *v, struct pending_irq *p,
> +                                unsigned long *flags)

The prototype for this function is missing.

> +{
> +    struct vcpu *target_vcpu;
> +
> +    ASSERT(spin_is_locked(&p->lock));
> +
> +    target_vcpu = vgic_get_target_vcpu(v, p);
> +    spin_unlock(&p->lock);
> +
> +    do
> +    {
> +        struct vcpu *current_vcpu;
> +
> +        spin_lock_irqsave(&target_vcpu->arch.vgic.lock, *flags);
> +        spin_lock(&p->lock);
> +
> +        current_vcpu = vgic_get_target_vcpu(v, p);
> +
> +        if ( target_vcpu->vcpu_id == current_vcpu->vcpu_id )
> +            return target_vcpu;
> +
> +        spin_unlock(&p->lock);
> +        spin_unlock_irqrestore(&target_vcpu->arch.vgic.lock, *flags);
> +
> +        target_vcpu = current_vcpu;
> +    } while (1);
> +}
> +
>  struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p)
>  {
>      struct vgic_irq_rank *rank = vgic_rank_irq(v, p->irq);
>

Cheers,

-- 
Julien Grall



* Re: [RFC PATCH v2 18/22] ARM: vGIC: move virtual IRQ target VCPU from rank to pending_irq
  2017-07-21 20:00 ` [RFC PATCH v2 18/22] ARM: vGIC: move virtual IRQ target VCPU from rank to pending_irq Andre Przywara
@ 2017-08-16 13:40   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-16 13:40 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 21:00, Andre Przywara wrote:
> The VCPU a shared virtual IRQ is targeting is currently stored in the
> irq_rank structure.
> For LPIs we already store the target VCPU in struct pending_irq, so
> move SPIs over as well.
> The ITS code, which was using this field already, was so far using the
> VCPU lock to protect the pending_irq, so move this over to the new lock.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v2.c     | 56 +++++++++++++++--------------------
>  xen/arch/arm/vgic-v3-its.c |  9 +++---
>  xen/arch/arm/vgic-v3.c     | 69 ++++++++++++++++++++-----------------------
>  xen/arch/arm/vgic.c        | 73 +++++++++++++++++++++-------------------------
>  xen/include/asm-arm/vgic.h | 13 +++------
>  5 files changed, 96 insertions(+), 124 deletions(-)
>
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index 0c8a598..c7ed3ce 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -66,19 +66,22 @@ void vgic_v2_setup_hw(paddr_t dbase, paddr_t cbase, paddr_t csize,
>   *
>   * Note the byte offset will be aligned to an ITARGETSR<n> boundary.
>   */
> -static uint32_t vgic_fetch_itargetsr(struct vgic_irq_rank *rank,
> -                                     unsigned int offset)
> +static uint32_t vgic_fetch_itargetsr(struct vcpu *v, unsigned int offset)
>  {
>      uint32_t reg = 0;
>      unsigned int i;
> +    unsigned long flags;
>
> -    ASSERT(spin_is_locked(&rank->lock));
> -
> -    offset &= INTERRUPT_RANK_MASK;
>      offset &= ~(NR_TARGETS_PER_ITARGETSR - 1);
>
>      for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++ )
> -        reg |= (1 << read_atomic(&rank->vcpu[offset])) << (i * NR_BITS_PER_TARGET);
> +    {
> +        struct pending_irq *p = irq_to_pending(v, offset);
> +
> +        vgic_irq_lock(p, flags);
> +        reg |= (1 << p->vcpu_id) << (i * NR_BITS_PER_TARGET);
> +        vgic_irq_unlock(p, flags);
> +    }
>
>      return reg;
>  }
> @@ -89,32 +92,29 @@ static uint32_t vgic_fetch_itargetsr(struct vgic_irq_rank *rank,
>   *
>   * Note the byte offset will be aligned to an ITARGETSR<n> boundary.
>   */
> -static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
> +static void vgic_store_itargetsr(struct domain *d,
>                                   unsigned int offset, uint32_t itargetsr)
>  {
>      unsigned int i;
>      unsigned int virq;
>
> -    ASSERT(spin_is_locked(&rank->lock));
> -
>      /*
>       * The ITARGETSR0-7, used for SGIs/PPIs, are implemented RO in the
>       * emulation and should never call this function.
>       *
> -     * They all live in the first rank.
> +     * They all live in the first four bytes of ITARGETSR.
>       */
> -    BUILD_BUG_ON(NR_INTERRUPT_PER_RANK != 32);
> -    ASSERT(rank->index >= 1);
> +    ASSERT(offset >= 4);
>
> -    offset &= INTERRUPT_RANK_MASK;
> +    virq = offset;
>      offset &= ~(NR_TARGETS_PER_ITARGETSR - 1);
>
> -    virq = rank->index * NR_INTERRUPT_PER_RANK + offset;
> -
>      for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++, virq++ )
>      {
>          unsigned int new_target, old_target;
> +        unsigned long flags;
>          uint8_t new_mask;
> +        struct pending_irq *p = spi_to_pending(d, virq);
>
>          /*
>           * Don't need to mask as we rely on new_mask to fit for only one
> @@ -151,16 +151,14 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
>          /* The vCPU ID always starts from 0 */
>          new_target--;
>
> -        old_target = read_atomic(&rank->vcpu[offset]);
> +        vgic_irq_lock(p, flags);
> +        old_target = p->vcpu_id;
>
>          /* Only migrate the vIRQ if the target vCPU has changed */
>          if ( new_target != old_target )
> -        {
> -            if ( vgic_migrate_irq(d->vcpu[old_target],
> -                             d->vcpu[new_target],
> -                             virq) )
> -                write_atomic(&rank->vcpu[offset], new_target);
> -        }
> +            vgic_migrate_irq(p, &flags, d->vcpu[new_target]);

Why do you need to pass a pointer to the flags and not directly the value?

> +        else
> +            vgic_irq_unlock(p, flags);
>      }
>  }
>
> @@ -264,11 +262,7 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
>          uint32_t itargetsr;
>
>          if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 8, gicd_reg - GICD_ITARGETSR, DABT_WORD);
> -        if ( rank == NULL) goto read_as_zero;
> -        vgic_lock_rank(v, rank, flags);
> -        itargetsr = vgic_fetch_itargetsr(rank, gicd_reg - GICD_ITARGETSR);
> -        vgic_unlock_rank(v, rank, flags);
> +        itargetsr = vgic_fetch_itargetsr(v, gicd_reg - GICD_ITARGETSR);

You need a check on the IRQ to avoid calling vgic_fetch_itargetsr with 
an IRQ that is not handled.

>          *r = vreg_reg32_extract(itargetsr, info);
>
>          return 1;
> @@ -498,14 +492,10 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
>          uint32_t itargetsr;
>
>          if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 8, gicd_reg - GICD_ITARGETSR, DABT_WORD);
> -        if ( rank == NULL) goto write_ignore;
> -        vgic_lock_rank(v, rank, flags);
> -        itargetsr = vgic_fetch_itargetsr(rank, gicd_reg - GICD_ITARGETSR);
> +        itargetsr = vgic_fetch_itargetsr(v, gicd_reg - GICD_ITARGETSR);

Ditto.

>          vreg_reg32_update(&itargetsr, r, info);
> -        vgic_store_itargetsr(v->domain, rank, gicd_reg - GICD_ITARGETSR,
> +        vgic_store_itargetsr(v->domain, gicd_reg - GICD_ITARGETSR,
>                               itargetsr);

ITARGETSR is updated using a read-modify-write sequence, which should be 
atomic. This was protected by the rank lock that you have now dropped. 
So what would be the locking here?

> -        vgic_unlock_rank(v, rank, flags);
>          return 1;
>      }
>
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 682ce10..1020ebe 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -628,7 +628,7 @@ static int its_discard_event(struct virt_its *its,
>
>      /* Cleanup the pending_irq and disconnect it from the LPI. */
>      gic_remove_irq_from_queues(vcpu, p);
> -    vgic_init_pending_irq(p, INVALID_LPI);
> +    vgic_init_pending_irq(p, INVALID_LPI, INVALID_VCPU_ID);
>
>      spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
>
> @@ -768,7 +768,7 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
>      if ( !pirq )
>          goto out_remove_mapping;
>
> -    vgic_init_pending_irq(pirq, intid);
> +    vgic_init_pending_irq(pirq, intid, vcpu->vcpu_id);
>
>      /*
>       * Now read the guest's property table to initialize our cached state.
> @@ -781,7 +781,6 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
>      if ( ret )
>          goto out_remove_host_entry;
>
> -    pirq->vcpu_id = vcpu->vcpu_id;
>      /*
>       * Mark this LPI as new, so any older (now unmapped) LPI in any LR
>       * can be easily recognised as such.
> @@ -852,9 +851,9 @@ static int its_handle_movi(struct virt_its *its, uint64_t *cmdptr)
>       */
>      spin_lock_irqsave(&ovcpu->arch.vgic.lock, flags);
>
> +    vgic_irq_lock(p, flags);
>      p->vcpu_id = nvcpu->vcpu_id;
> -
> -    spin_unlock_irqrestore(&ovcpu->arch.vgic.lock, flags);
> +    vgic_irq_unlock(p, flags);
>
>      /*
>       * TODO: Investigate if and how to migrate an already pending LPI. This
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index e9e36eb..e9d46af 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -100,18 +100,21 @@ static struct vcpu *vgic_v3_irouter_to_vcpu(struct domain *d, uint64_t irouter)
>   *
>   * Note the byte offset will be aligned to an IROUTER<n> boundary.
>   */
> -static uint64_t vgic_fetch_irouter(struct vgic_irq_rank *rank,
> -                                   unsigned int offset)
> +static uint64_t vgic_fetch_irouter(struct vcpu *v, unsigned int offset)
>  {
> -    ASSERT(spin_is_locked(&rank->lock));
> +    struct pending_irq *p;
> +    unsigned long flags;
> +    uint64_t aff;
>
>      /* There is exactly 1 vIRQ per IROUTER */
>      offset /= NR_BYTES_PER_IROUTER;
>
> -    /* Get the index in the rank */
> -    offset &= INTERRUPT_RANK_MASK;
> +    p = irq_to_pending(v, offset);
> +    vgic_irq_lock(p, flags);
> +    aff = vcpuid_to_vaffinity(p->vcpu_id);
> +    vgic_irq_unlock(p, flags);
>
> -    return vcpuid_to_vaffinity(read_atomic(&rank->vcpu[offset]));
> +    return aff;
>  }
>
>  /*
> @@ -120,10 +123,12 @@ static uint64_t vgic_fetch_irouter(struct vgic_irq_rank *rank,
>   *
>   * Note the offset will be aligned to the appropriate boundary.
>   */
> -static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
> +static void vgic_store_irouter(struct domain *d,
>                                 unsigned int offset, uint64_t irouter)
>  {
> -    struct vcpu *new_vcpu, *old_vcpu;
> +    struct vcpu *new_vcpu;
> +    struct pending_irq *p;
> +    unsigned long flags;
>      unsigned int virq;
>
>      /* There is 1 vIRQ per IROUTER */
> @@ -135,11 +140,10 @@ static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
>       */
>      ASSERT(virq >= 32);
>
> -    /* Get the index in the rank */
> -    offset &= virq & INTERRUPT_RANK_MASK;
> +    p = spi_to_pending(d, virq);
> +    vgic_irq_lock(p, flags);
>
>      new_vcpu = vgic_v3_irouter_to_vcpu(d, irouter);
> -    old_vcpu = d->vcpu[read_atomic(&rank->vcpu[offset])];
>
>      /*
>       * From the spec (see 8.9.13 in IHI 0069A), any write with an
> @@ -149,16 +153,13 @@ static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
>       * invalid vCPU. So for now, just ignore the write.
>       *
>       * TODO: Respect the spec
> +     *
> +     * Only migrate the IRQ if the target vCPU has changed
>       */
> -    if ( !new_vcpu )
> -        return;
> -
> -    /* Only migrate the IRQ if the target vCPU has changed */
> -    if ( new_vcpu != old_vcpu )
> -    {
> -        if ( vgic_migrate_irq(old_vcpu, new_vcpu, virq) )
> -            write_atomic(&rank->vcpu[offset], new_vcpu->vcpu_id);
> -    }
> +    if ( new_vcpu && new_vcpu->vcpu_id != p->vcpu_id )
> +        vgic_migrate_irq(p, &flags, new_vcpu);
> +    else
> +        vgic_irq_unlock(p, flags);
>  }
>
>  static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
> @@ -1061,8 +1062,6 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
>                                     register_t *r, void *priv)
>  {
>      struct hsr_dabt dabt = info->dabt;
> -    struct vgic_irq_rank *rank;
> -    unsigned long flags;
>      int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
>
>      perfc_incr(vgicd_reads);
> @@ -1190,15 +1189,12 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
>      case VRANGE64(GICD_IROUTER32, GICD_IROUTER1019):
>      {
>          uint64_t irouter;
> +        unsigned int irq;
>
>          if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
> -        rank = vgic_rank_offset(v, 64, gicd_reg - GICD_IROUTER,
> -                                DABT_DOUBLE_WORD);
> -        if ( rank == NULL ) goto read_as_zero;
> -        vgic_lock_rank(v, rank, flags);
> -        irouter = vgic_fetch_irouter(rank, gicd_reg - GICD_IROUTER);
> -        vgic_unlock_rank(v, rank, flags);
> -
> +        irq = (gicd_reg - GICD_IROUTER) / 8;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
> +        irouter = vgic_fetch_irouter(v, gicd_reg - GICD_IROUTER);
>          *r = vreg_reg64_extract(irouter, info);
>
>          return 1;
> @@ -1264,8 +1260,6 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
>                                      register_t r, void *priv)
>  {
>      struct hsr_dabt dabt = info->dabt;
> -    struct vgic_irq_rank *rank;
> -    unsigned long flags;
>      int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
>
>      perfc_incr(vgicd_writes);
> @@ -1379,16 +1373,15 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
>      case VRANGE64(GICD_IROUTER32, GICD_IROUTER1019):
>      {
>          uint64_t irouter;
> +        unsigned int irq;
>
>          if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
> -        rank = vgic_rank_offset(v, 64, gicd_reg - GICD_IROUTER,
> -                                DABT_DOUBLE_WORD);
> -        if ( rank == NULL ) goto write_ignore;
> -        vgic_lock_rank(v, rank, flags);
> -        irouter = vgic_fetch_irouter(rank, gicd_reg - GICD_IROUTER);
> +        irq = (gicd_reg - GICD_IROUTER) / 8;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
> +
> +        irouter = vgic_fetch_irouter(v, gicd_reg - GICD_IROUTER);
>          vreg_reg64_update(&irouter, r, info);
> -        vgic_store_irouter(v->domain, rank, gicd_reg - GICD_IROUTER, irouter);
> -        vgic_unlock_rank(v, rank, flags);
> +        vgic_store_irouter(v->domain, gicd_reg - GICD_IROUTER, irouter);

Same here for the locking issue.

>          return 1;
>      }
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 0e6dfe5..f6532ee 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -61,7 +61,8 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
>      return vgic_get_rank(v, rank);
>  }
>
> -void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
> +void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq,
> +                           unsigned int vcpu_id)
>  {
>      /* The vcpu_id field must be big enough to hold a VCPU ID. */
>      BUILD_BUG_ON(BIT(sizeof(p->vcpu_id) * 8) < MAX_VIRT_CPUS);
> @@ -71,27 +72,15 @@ void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
>      INIT_LIST_HEAD(&p->lr_queue);
>      spin_lock_init(&p->lock);
>      p->irq = virq;
> -    p->vcpu_id = INVALID_VCPU_ID;
> +    p->vcpu_id = vcpu_id;
>  }
>
>  static void vgic_rank_init(struct vgic_irq_rank *rank, uint8_t index,
>                             unsigned int vcpu)
>  {
> -    unsigned int i;
> -
> -    /*
> -     * Make sure that the type chosen to store the target is able to
> -     * store an VCPU ID between 0 and the maximum of virtual CPUs
> -     * supported.
> -     */
> -    BUILD_BUG_ON((1 << (sizeof(rank->vcpu[0]) * 8)) < MAX_VIRT_CPUS);
> -
>      spin_lock_init(&rank->lock);
>
>      rank->index = index;
> -
> -    for ( i = 0; i < NR_INTERRUPT_PER_RANK; i++ )
> -        write_atomic(&rank->vcpu[i], vcpu);
>  }
>
>  int domain_vgic_register(struct domain *d, int *mmio_count)
> @@ -142,9 +131,9 @@ int domain_vgic_init(struct domain *d, unsigned int nr_spis)
>      if ( d->arch.vgic.pending_irqs == NULL )
>          return -ENOMEM;
>
> +    /* SPIs are routed to VCPU0 by default */
>      for (i=0; i<d->arch.vgic.nr_spis; i++)
> -        vgic_init_pending_irq(&d->arch.vgic.pending_irqs[i], i + 32);
> -
> +        vgic_init_pending_irq(&d->arch.vgic.pending_irqs[i], i + 32, 0);
>      /* SPIs are routed to VCPU0 by default */
>      for ( i = 0; i < DOMAIN_NR_RANKS(d); i++ )
>          vgic_rank_init(&d->arch.vgic.shared_irqs[i], i + 1, 0);
> @@ -208,8 +197,9 @@ int vcpu_vgic_init(struct vcpu *v)
>      v->domain->arch.vgic.handler->vcpu_init(v);
>
>      memset(&v->arch.vgic.pending_irqs, 0, sizeof(v->arch.vgic.pending_irqs));
> +    /* SGIs/PPIs are always routed to this VCPU */
>      for (i = 0; i < 32; i++)
> -        vgic_init_pending_irq(&v->arch.vgic.pending_irqs[i], i);
> +        vgic_init_pending_irq(&v->arch.vgic.pending_irqs[i], i, v->vcpu_id);
>
>      INIT_LIST_HEAD(&v->arch.vgic.inflight_irqs);
>      INIT_LIST_HEAD(&v->arch.vgic.lr_pending);
> @@ -268,10 +258,7 @@ struct vcpu *vgic_lock_vcpu_irq(struct vcpu *v, struct pending_irq *p,
>
>  struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p)
>  {
> -    struct vgic_irq_rank *rank = vgic_rank_irq(v, p->irq);
> -    int target = read_atomic(&rank->vcpu[p->irq & INTERRUPT_RANK_MASK]);
> -
> -    return v->domain->vcpu[target];
> +    return v->domain->vcpu[p->vcpu_id];

Do you need p to be locked for reading vcpu_id? If so, then an ASSERT 
should be added. If not, then you probably need ACCESS_ONCE()/read_atomic().
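Just to illustrate the lock-free alternative outside Xen (all names here are invented; Xen would use its own read_atomic()/ACCESS_ONCE() macros rather than C11 atomics), the point is that a single atomic load of the one-byte field can never be torn, though the value may already be stale by the time the caller uses it:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Minimal stand-in for the relevant part of struct pending_irq. */
struct pending_irq_sketch {
    _Atomic uint8_t vcpu_id;
};

/* One atomic load: no torn read even if a writer races with us, but the
 * returned target may be stale when the caller dereferences the vCPU. */
static inline uint8_t sketch_read_target(struct pending_irq_sketch *p)
{
    return atomic_load_explicit(&p->vcpu_id, memory_order_relaxed);
}

static inline void sketch_set_target(struct pending_irq_sketch *p, uint8_t id)
{
    atomic_store_explicit(&p->vcpu_id, id, memory_order_relaxed);
}
```

So the lock-free read is safe in itself; the question is only whether the caller can tolerate staleness.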

>  }
>
>  #define MAX_IRQS_PER_IPRIORITYR 4
> @@ -360,57 +347,65 @@ void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
>      local_irq_restore(flags);
>  }
>
> -bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
> +bool vgic_migrate_irq(struct pending_irq *p, unsigned long *flags,
> +                      struct vcpu *new)
>  {
> -    unsigned long flags;
> -    struct pending_irq *p;
> +    unsigned long vcpu_flags;
> +    struct vcpu *old;
> +    bool ret = false;
>
>      /* This will never be called for an LPI, as we don't migrate them. */
> -    ASSERT(!is_lpi(irq));
> +    ASSERT(!is_lpi(p->irq));
>
> -    spin_lock_irqsave(&old->arch.vgic.lock, flags);
> -
> -    p = irq_to_pending(old, irq);
> +    ASSERT(spin_is_locked(&p->lock));
>
>      /* nothing to do for virtual interrupts */
>      if ( p->desc == NULL )
>      {
> -        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
> -        return true;
> +        ret = true;
> +        goto out_unlock;
>      }
>
>      /* migration already in progress, no need to do anything */
>      if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
>      {
> -        gprintk(XENLOG_WARNING, "irq %u migration failed: requested while in progress\n", irq);
> -        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
> -        return false;
> +        gprintk(XENLOG_WARNING, "irq %u migration failed: requested while in progress\n", p->irq);
> +        goto out_unlock;
>      }
>
> +    p->vcpu_id = new->vcpu_id;

Something is wrong here. You update p->vcpu_id quite early. This means 
if the IRQ fires whilst you are in vgic_migrate_irq, then 
vgic_vcpu_inject_irq will use the new vCPU while the pending_irq is 
potentially still on the old vCPU's list.

> +
>      perfc_incr(vgic_irq_migrates);
>
>      if ( list_empty(&p->inflight) )

I was kind of expecting the old vCPU lock to be taken given that you 
check p->inflight.

>      {
>          irq_set_affinity(p->desc, cpumask_of(new->processor));
> -        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
> -        return true;
> +        goto out_unlock;
>      }
> +
>      /* If the IRQ is still lr_pending, re-inject it to the new vcpu */
>      if ( !list_empty(&p->lr_queue) )
>      {
> +        old = vgic_lock_vcpu_irq(new, p, &vcpu_flags);

I may be missing something here. The vCPU returned should be new, not old, right?

>          gic_remove_irq_from_queues(old, p);
>          irq_set_affinity(p->desc, cpumask_of(new->processor));
> -        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
> -        vgic_vcpu_inject_irq(new, irq);
> +
> +        vgic_irq_unlock(p, *flags);
> +        spin_unlock_irqrestore(&old->arch.vgic.lock, vcpu_flags);
> +
> +        vgic_vcpu_inject_irq(new, p->irq);
>          return true;
>      }
> +
>      /* if the IRQ is in a GICH_LR register, set GIC_IRQ_GUEST_MIGRATING
>       * and wait for the EOI */
>      if ( !list_empty(&p->inflight) )
>          set_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
>
> -    spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
> -    return true;
> +out_unlock:
> +    vgic_irq_unlock(p, *flags);
> +
> +    return false;
>  }
>
>  void arch_move_irqs(struct vcpu *v)
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index ffd9a95..4b47a9b 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -112,13 +112,6 @@ struct vgic_irq_rank {
>
>      uint32_t ienable;
>
> -    /*
> -     * It's more convenient to store a target VCPU per vIRQ
> -     * than the register ITARGETSR/IROUTER itself.
> -     * Use atomic operations to read/write the vcpu fields to avoid
> -     * taking the rank lock.
> -     */
> -    uint8_t vcpu[32];
>  };
>
>  struct sgi_target {
> @@ -217,7 +210,8 @@ extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p);
>  extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq);
>  extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq);
>  extern void vgic_clear_pending_irqs(struct vcpu *v);
> -extern void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq);
> +extern void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq,
> +                                  unsigned int vcpu_id);
>  extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
>  extern struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq);
>  extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
> @@ -237,7 +231,8 @@ extern int vcpu_vgic_free(struct vcpu *v);
>  extern bool vgic_to_sgi(struct vcpu *v, register_t sgir,
>                          enum gic_sgi_mode irqmode, int virq,
>                          const struct sgi_target *target);
> -extern bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq);
> +extern bool vgic_migrate_irq(struct pending_irq *p,
> +                             unsigned long *flags, struct vcpu *new);
>
>  /* Reserve a specific guest vIRQ */
>  extern bool vgic_reserve_virq(struct domain *d, unsigned int virq);
>

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [RFC PATCH v2 20/22] ARM: vGIC: move virtual IRQ enable bit from rank to pending_irq
  2017-07-21 20:00 ` [RFC PATCH v2 20/22] ARM: vGIC: move virtual IRQ enable bit from rank to pending_irq Andre Przywara
@ 2017-08-16 14:32   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-16 14:32 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 21:00, Andre Przywara wrote:
> The enabled bits for a group of IRQs are still stored in the irq_rank
> structure, although we already have the same information in pending_irq,
> in the GIC_IRQ_GUEST_ENABLED bit of the "status" field.
> Remove the storage from the irq_rank and just utilize the existing
> wrappers to cover enabling/disabling of multiple IRQs.
> This also marks the removal of the last member of struct vgic_irq_rank.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v2.c     |  41 +++------
>  xen/arch/arm/vgic-v3.c     |  41 +++------
>  xen/arch/arm/vgic.c        | 201 +++++++++++++++++++++++++++------------------
>  xen/include/asm-arm/vgic.h |  10 +--
>  4 files changed, 152 insertions(+), 141 deletions(-)
>
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index c7ed3ce..3320642 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -166,9 +166,7 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
>                                     register_t *r, void *priv)
>  {
>      struct hsr_dabt dabt = info->dabt;
> -    struct vgic_irq_rank *rank;
>      int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
> -    unsigned long flags;
>      unsigned int irq;
>
>      perfc_incr(vgicd_reads);
> @@ -222,20 +220,16 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
>
>      case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ISENABLER, DABT_WORD);
> -        if ( rank == NULL) goto read_as_zero;
> -        vgic_lock_rank(v, rank, flags);
> -        *r = vreg_reg32_extract(rank->ienable, info);
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = (gicd_reg - GICD_ISENABLER) * 8;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
> +        *r = vgic_fetch_irq_enabled(v, irq);
>          return 1;
>
>      case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ICENABLER, DABT_WORD);
> -        if ( rank == NULL) goto read_as_zero;
> -        vgic_lock_rank(v, rank, flags);
> -        *r = vreg_reg32_extract(rank->ienable, info);
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = (gicd_reg - GICD_ICENABLER) * 8;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
> +        *r = vgic_fetch_irq_enabled(v, irq);
>          return 1;
>
>      /* Read the pending status of an IRQ via GICD is not supported */
> @@ -386,10 +380,7 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
>                                      register_t r, void *priv)
>  {
>      struct hsr_dabt dabt = info->dabt;
> -    struct vgic_irq_rank *rank;
>      int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
> -    uint32_t tr;
> -    unsigned long flags;
>      unsigned int irq;
>
>      perfc_incr(vgicd_writes);
> @@ -426,24 +417,16 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
>
>      case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ISENABLER, DABT_WORD);
> -        if ( rank == NULL) goto write_ignore;
> -        vgic_lock_rank(v, rank, flags);
> -        tr = rank->ienable;
> -        vreg_reg32_setbits(&rank->ienable, r, info);
> -        vgic_enable_irqs(v, (rank->ienable) & (~tr), rank->index);
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = (gicd_reg - GICD_ISENABLER) * 8;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
> +        vgic_store_irq_enable(v, irq, r);
>          return 1;
>
>      case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ICENABLER, DABT_WORD);
> -        if ( rank == NULL) goto write_ignore;
> -        vgic_lock_rank(v, rank, flags);
> -        tr = rank->ienable;
> -        vreg_reg32_clearbits(&rank->ienable, r, info);
> -        vgic_disable_irqs(v, (~rank->ienable) & tr, rank->index);
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = (gicd_reg - GICD_ICENABLER) * 8;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
> +        vgic_store_irq_disable(v, irq, r);
>          return 1;
>
>      case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index e9d46af..00cc1e5 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -676,8 +676,6 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
>                                              register_t *r)
>  {
>      struct hsr_dabt dabt = info->dabt;
> -    struct vgic_irq_rank *rank;
> -    unsigned long flags;
>      unsigned int irq;
>
>      switch ( reg )
> @@ -689,20 +687,16 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
>
>      case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 1, reg - GICD_ISENABLER, DABT_WORD);
> -        if ( rank == NULL ) goto read_as_zero;
> -        vgic_lock_rank(v, rank, flags);
> -        *r = vreg_reg32_extract(rank->ienable, info);
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = (reg - GICD_ISENABLER) * 8;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
> +        *r = vgic_fetch_irq_enabled(v, irq);
>          return 1;
>
>      case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 1, reg - GICD_ICENABLER, DABT_WORD);
> -        if ( rank == NULL ) goto read_as_zero;
> -        vgic_lock_rank(v, rank, flags);
> -        *r = vreg_reg32_extract(rank->ienable, info);
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = (reg - GICD_ICENABLER) * 8;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
> +        *r = vgic_fetch_irq_enabled(v, irq);
>          return 1;
>
>      /* Read the pending status of an IRQ via GICD/GICR is not supported */
> @@ -752,9 +746,6 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
>                                               register_t r)
>  {
>      struct hsr_dabt dabt = info->dabt;
> -    struct vgic_irq_rank *rank;
> -    uint32_t tr;
> -    unsigned long flags;
>      unsigned int irq;
>
>      switch ( reg )
> @@ -765,24 +756,16 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
>
>      case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 1, reg - GICD_ISENABLER, DABT_WORD);
> -        if ( rank == NULL ) goto write_ignore;
> -        vgic_lock_rank(v, rank, flags);
> -        tr = rank->ienable;
> -        vreg_reg32_setbits(&rank->ienable, r, info);
> -        vgic_enable_irqs(v, (rank->ienable) & (~tr), rank->index);
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = (reg - GICD_ISENABLER) * 8;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
> +        vgic_store_irq_enable(v, irq, r);
>          return 1;
>
>      case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> -        rank = vgic_rank_offset(v, 1, reg - GICD_ICENABLER, DABT_WORD);
> -        if ( rank == NULL ) goto write_ignore;
> -        vgic_lock_rank(v, rank, flags);
> -        tr = rank->ienable;
> -        vreg_reg32_clearbits(&rank->ienable, r, info);
> -        vgic_disable_irqs(v, (~rank->ienable) & tr, rank->index);
> -        vgic_unlock_rank(v, rank, flags);
> +        irq = (reg - GICD_ICENABLER) * 8;
> +        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
> +        vgic_store_irq_disable(v, irq, r);
>          return 1;
>
>      case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index a49fcde..dd969e2 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -261,6 +261,60 @@ struct vcpu *vgic_get_target_vcpu(struct domain *d, struct pending_irq *p)
>      return d->vcpu[p->vcpu_id];
>  }
>
> +/* Takes a locked pending_irq and enables the interrupt, also unlocking it. */
> +static void vgic_enable_irq_unlock(struct domain *d, struct pending_irq *p)
> +{
> +    struct vcpu *v_target;
> +    unsigned long flags;
> +    struct irq_desc *desc;
> +
> +    v_target = vgic_lock_vcpu_irq(d, p, &flags);
> +
> +    clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> +    gic_remove_from_lr_pending(v_target, p);
> +    desc = p->desc;
> +    spin_unlock(&p->lock);
> +    spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);

I can see a potential issue with unlocking the pending irq here. You may 
end up with the hardware state being inconsistent with the software 
state. The hardware and software state should be updated under the same 
lock to prevent that.

> +
> +    if ( desc != NULL )
> +    {
> +        spin_lock_irqsave(&desc->lock, flags);
> +        desc->handler->disable(desc);

Something is wrong. This code seems to be for disabling an IRQ and...

> +        spin_unlock_irqrestore(&desc->lock, flags);
> +    }
> +}
> +
> +/* Takes a locked pending_irq and disables the interrupt, also unlocking it. */
> +static void vgic_disable_irq_unlock(struct domain *d, struct pending_irq *p)
> +{
> +    struct vcpu *v_target;
> +    unsigned long flags;
> +    struct irq_desc *desc;
> +    int int_type;
> +
> +    v_target = vgic_lock_vcpu_irq(d, p, &flags);
> +
> +    set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> +    int_type = test_bit(GIC_IRQ_GUEST_LEVEL, &p->status) ? IRQ_TYPE_LEVEL_HIGH :
> +                                                           IRQ_TYPE_EDGE_RISING;
> +    if ( !list_empty(&p->inflight) &&
> +         !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
> +        gic_raise_guest_irq(v_target, p->irq, p->cur_priority);
> +    desc = p->desc;
> +    spin_unlock(&p->lock);
> +    spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
> +
> +    if ( desc != NULL )
> +    {
> +        spin_lock_irqsave(&desc->lock, flags);
> +        irq_set_affinity(desc, cpumask_of(v_target->processor));
> +        if ( irq_type_set_by_domain(d) )
> +            gic_set_irq_type(desc, int_type);
> +        desc->handler->enable(desc);

... this one for enabling an IRQ. But the function names are inverted.

> +        spin_unlock_irqrestore(&desc->lock, flags);
> +    }
> +}
> +
>  #define MAX_IRQS_PER_IPRIORITYR 4
>  uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
>                                   unsigned int first_irq)
> @@ -347,6 +401,75 @@ void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
>      local_irq_restore(flags);
>  }
>
> +#define IRQS_PER_ENABLER        32
> +/**

A single * is enough here.

> + * vgic_fetch_irq_enabled: assemble the enabled bits for a group of 32 IRQs
> + * @v: the VCPU for private IRQs, any VCPU of a domain for SPIs
> + * @first_irq: the first IRQ to be queried, must be aligned to 32
> + */
> +uint32_t vgic_fetch_irq_enabled(struct vcpu *v, unsigned int first_irq)
> +{
> +    struct pending_irq *pirqs[IRQS_PER_ENABLER];
> +    unsigned long flags;
> +    uint32_t reg = 0;
> +    unsigned int i;
> +
> +    local_irq_save(flags);
> +    vgic_lock_irqs(v, IRQS_PER_ENABLER, first_irq, pirqs);
> +
> +    for ( i = 0; i < 32; i++ )
> +        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &pirqs[i]->status) )
> +            reg |= BIT(i);
> +
> +    vgic_unlock_irqs(pirqs, IRQS_PER_ENABLER);
> +    local_irq_restore(flags);
> +
> +    return reg;
> +}
> +
> +void vgic_store_irq_enable(struct vcpu *v, unsigned int first_irq,
> +                           uint32_t value)
> +{
> +    struct pending_irq *pirqs[IRQS_PER_ENABLER];
> +    unsigned long flags;
> +    int i;
> +
> +    local_irq_save(flags);
> +    vgic_lock_irqs(v, IRQS_PER_ENABLER, first_irq, pirqs);
> +
> +    /* This goes backwards, as it unlocks the IRQs during the process */

Missing full stop.

> +    for ( i = IRQS_PER_ENABLER - 1; i >= 0; i-- )
> +    {
> +        if ( !test_bit(GIC_IRQ_GUEST_ENABLED, &pirqs[i]->status) &&
> +             (value & BIT(i)) )
> +            vgic_enable_irq_unlock(v->domain, pirqs[i]);
> +        else
> +            spin_unlock(&pirqs[i]->lock);
> +    }
> +    local_irq_restore(flags);
> +}
> +
> +void vgic_store_irq_disable(struct vcpu *v, unsigned int first_irq,
> +                            uint32_t value)
> +{
> +    struct pending_irq *pirqs[IRQS_PER_ENABLER];
> +    unsigned long flags;
> +    int i;
> +
> +    local_irq_save(flags);
> +    vgic_lock_irqs(v, IRQS_PER_ENABLER, first_irq, pirqs);
> +
> +    /* This goes backwards, as it unlocks the IRQs during the process */

Ditto.

> +    for ( i = 31; i >= 0; i-- )

Please use the define rather than a hardcoded value.

> +    {
> +        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &pirqs[i]->status) &&
> +             (value & BIT(i)) )
> +            vgic_disable_irq_unlock(v->domain, pirqs[i]);
> +        else
> +            spin_unlock(&pirqs[i]->lock);
> +    }
> +}
> +
>  bool vgic_migrate_irq(struct pending_irq *p, unsigned long *flags,
>                        struct vcpu *new)
>  {
> @@ -437,40 +560,6 @@ void arch_move_irqs(struct vcpu *v)
>      }
>  }
>
> -void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
> -{
> -    const unsigned long mask = r;
> -    struct pending_irq *p;
> -    struct irq_desc *desc;
> -    unsigned int irq;
> -    unsigned long flags;
> -    int i = 0;
> -    struct vcpu *v_target;
> -
> -    /* LPIs will never be disabled via this function. */
> -    ASSERT(!is_lpi(32 * n + 31));
> -
> -    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> -        irq = i + (32 * n);
> -        p = irq_to_pending(v, irq);
> -        v_target = vgic_get_target_vcpu(v->domain, p);
> -
> -        spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
> -        clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> -        gic_remove_from_lr_pending(v_target, p);
> -        desc = p->desc;
> -        spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
> -
> -        if ( desc != NULL )
> -        {
> -            spin_lock_irqsave(&desc->lock, flags);
> -            desc->handler->disable(desc);
> -            spin_unlock_irqrestore(&desc->lock, flags);
> -        }
> -        i++;
> -    }
> -}
> -
>  void vgic_lock_irqs(struct vcpu *v, unsigned int nrirqs,
>                      unsigned int first_irq, struct pending_irq **pirqs)
>  {
> @@ -491,50 +580,6 @@ void vgic_unlock_irqs(struct pending_irq **pirqs, unsigned int nrirqs)
>          spin_unlock(&pirqs[i]->lock);
>  }
>
> -void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
> -{
> -    const unsigned long mask = r;
> -    struct pending_irq *p;
> -    unsigned int irq, int_type;
> -    unsigned long flags, vcpu_flags;
> -    int i = 0;
> -    struct vcpu *v_target;
> -    struct domain *d = v->domain;
> -
> -    /* LPIs will never be enabled via this function. */
> -    ASSERT(!is_lpi(32 * n + 31));
> -
> -    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> -        irq = i + (32 * n);
> -        p = irq_to_pending(v, irq);
> -        v_target = vgic_get_target_vcpu(v->domain, p);
> -        spin_lock_irqsave(&v_target->arch.vgic.lock, vcpu_flags);
> -        vgic_irq_lock(p, flags);
> -        set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> -        int_type = test_bit(GIC_IRQ_GUEST_LEVEL, &p->status) ?
> -                            IRQ_TYPE_LEVEL_HIGH : IRQ_TYPE_EDGE_RISING;
> -        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
> -            gic_raise_guest_irq(v_target, irq, p->cur_priority);
> -        vgic_irq_unlock(p, flags);
> -        spin_unlock_irqrestore(&v_target->arch.vgic.lock, vcpu_flags);
> -        if ( p->desc != NULL )
> -        {
> -            spin_lock_irqsave(&p->desc->lock, flags);
> -            irq_set_affinity(p->desc, cpumask_of(v_target->processor));
> -            /*
> -             * The irq cannot be a PPI, we only support delivery of SPIs
> -             * to guests.
> -             */
> -            ASSERT(irq >= 32);
> -            if ( irq_type_set_by_domain(d) )
> -                gic_set_irq_type(p->desc, int_type);
> -            p->desc->handler->enable(p->desc);
> -            spin_unlock_irqrestore(&p->desc->lock, flags);
> -        }
> -        i++;
> -    }
> -}
> -
>  bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
>                   int virq, const struct sgi_target *target)
>  {
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index fe4d53d..233ff1f 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -109,9 +109,6 @@ struct vgic_irq_rank {
>      spinlock_t lock; /* Covers access to all other members of this struct */
>
>      uint8_t index;
> -
> -    uint32_t ienable;
> -
>  };
>
>  struct sgi_target {
> @@ -187,6 +184,11 @@ void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
>  uint32_t vgic_fetch_irq_config(struct vcpu *v, unsigned int first_irq);
>  void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
>                             uint32_t reg);
> +uint32_t vgic_fetch_irq_enabled(struct vcpu *v, unsigned int first_irq);
> +void vgic_store_irq_enable(struct vcpu *v, unsigned int first_irq,
> +                           uint32_t value);
> +void vgic_store_irq_disable(struct vcpu *v, unsigned int first_irq,
> +                            uint32_t value);
>
>  enum gic_sgi_mode;
>
> @@ -218,8 +220,6 @@ extern struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq);
>  extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
>  extern struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq);
>  extern bool vgic_emulate(struct cpu_user_regs *regs, union hsr hsr);
> -extern void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n);
> -extern void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n);
>  extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
>  int vgic_v2_init(struct domain *d, int *mmio_count);
>  int vgic_v3_init(struct domain *d, int *mmio_count);
>

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH v2 21/22] ARM: vITS: injecting LPIs: use pending_irq lock
  2017-07-21 20:00 ` [RFC PATCH v2 21/22] ARM: vITS: injecting LPIs: use pending_irq lock Andre Przywara
@ 2017-08-16 14:38   ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-16 14:38 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel

Hi Andre,

On 21/07/17 21:00, Andre Przywara wrote:
> Instead of using an atomic access and hoping for the best, let's use
> the new pending_irq lock now to make sure we read a sane version of
> the target VCPU.

How is this going to give a saner version?

You only read the vCPU ID, and nothing prevents it from changing between 
the time you read it and the time you lock the vCPU in vgic_vcpu_inject_irq.
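To spell out the usual way of closing that window (a sketch only, with invented names and plain spinlocks standing in for Xen's locking primitives, not the actual vgic_lock_vcpu_irq() API): re-read the routing field after taking the lock that stabilises it, and retry if the target moved in between:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define SKETCH_NR_VCPUS 4

/* Invented miniature model: one spinlock per vCPU, plus a per-IRQ lock
 * protecting the routing field. Zero-initialised flags start clear. */
static atomic_flag sketch_vcpu_lock[SKETCH_NR_VCPUS];

struct sketch_irq {
    atomic_flag lock;   /* protects vcpu_id */
    uint8_t vcpu_id;
};

static void sketch_lock(atomic_flag *l)
{
    while ( atomic_flag_test_and_set(l) )
        ;   /* spin */
}

static void sketch_unlock(atomic_flag *l)
{
    atomic_flag_clear(l);
}

/* Returns with the target vCPU's lock held. The target is re-read under
 * the per-IRQ lock after the vCPU lock has been taken; if it changed in
 * the window, drop everything and retry. */
static uint8_t sketch_lock_target(struct sketch_irq *p)
{
    for ( ;; )
    {
        uint8_t id;

        sketch_lock(&p->lock);
        id = p->vcpu_id;
        sketch_unlock(&p->lock);

        sketch_lock(&sketch_vcpu_lock[id]);

        sketch_lock(&p->lock);
        if ( id == p->vcpu_id )
        {
            sketch_unlock(&p->lock);
            return id;      /* still the target; vCPU lock stays held */
        }
        sketch_unlock(&p->lock);
        sketch_unlock(&sketch_vcpu_lock[id]);   /* target moved: retry */
    }
}
```

Without the re-check after taking the vCPU lock, the window Julien describes stays open.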

Cheers,

> That still doesn't solve the problem mentioned in the comment, but
> paves the way for future improvements.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic-v3-lpi.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> index 2306b58..9db26ed 100644
> --- a/xen/arch/arm/gic-v3-lpi.c
> +++ b/xen/arch/arm/gic-v3-lpi.c
> @@ -140,20 +140,22 @@ void vgic_vcpu_inject_lpi(struct domain *d, unsigned int virq)
>  {
>      /*
>       * TODO: this assumes that the struct pending_irq stays valid all of
> -     * the time. We cannot properly protect this with the current locking
> -     * scheme, but the future per-IRQ lock will solve this problem.
> +     * the time. We cannot properly protect this with the current code,
> +     * but a future refcounting will solve this problem.
>       */
>      struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
> +    unsigned long flags;
>      unsigned int vcpu_id;
>
>      if ( !p )
>          return;
>
> -    vcpu_id = ACCESS_ONCE(p->vcpu_id);
> -    if ( vcpu_id >= d->max_vcpus )
> -          return;
> +    vgic_irq_lock(p, flags);
> +    vcpu_id = p->vcpu_id;
> +    vgic_irq_unlock(p, flags);
>
> -    vgic_vcpu_inject_irq(d->vcpu[vcpu_id], virq);
> +    if ( vcpu_id < d->max_vcpus )
> +        vgic_vcpu_inject_irq(d->vcpu[vcpu_id], virq);
>  }
>
>  /*
>

-- 
Julien Grall


* Re: [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock
  2017-08-10 15:35   ` Julien Grall
@ 2017-08-16 16:27     ` Andre Przywara
  2017-08-16 16:35       ` Julien Grall
  0 siblings, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-08-16 16:27 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Hi,

On 10/08/17 16:35, Julien Grall wrote:
> Hi,
> 
> On 21/07/17 20:59, Andre Przywara wrote:
>> Currently we protect the pending_irq structure with the corresponding
>> VGIC VCPU lock. There are problems in certain corner cases (for
>> instance if an IRQ is migrating), so let's introduce a per-IRQ lock,
>> which will protect the consistency of this structure independent from
>> any VCPU.
>> For now this just introduces and initializes the lock, also adds
>> wrapper macros to simplify its usage (and help debugging).
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  xen/arch/arm/vgic.c        |  1 +
>>  xen/include/asm-arm/vgic.h | 11 +++++++++++
>>  2 files changed, 12 insertions(+)
>>
>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>> index 1e5107b..38dacd3 100644
>> --- a/xen/arch/arm/vgic.c
>> +++ b/xen/arch/arm/vgic.c
>> @@ -69,6 +69,7 @@ void vgic_init_pending_irq(struct pending_irq *p,
>> unsigned int virq)
>>      memset(p, 0, sizeof(*p));
>>      INIT_LIST_HEAD(&p->inflight);
>>      INIT_LIST_HEAD(&p->lr_queue);
>> +    spin_lock_init(&p->lock);
>>      p->irq = virq;
>>      p->lpi_vcpu_id = INVALID_VCPU_ID;
>>  }
>> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
>> index d4ed23d..1c38b9a 100644
>> --- a/xen/include/asm-arm/vgic.h
>> +++ b/xen/include/asm-arm/vgic.h
>> @@ -90,6 +90,14 @@ struct pending_irq
>>       * TODO: when implementing irq migration, taking only the current
>>       * vgic lock is not going to be enough. */
>>      struct list_head lr_queue;
>> +    /* The lock protects the consistency of this structure. A single
>> status bit
>> +     * can be read and/or set without holding the lock using the atomic
>> +     * set_bit/clear_bit/test_bit functions, however accessing
>> multiple bits or
>> +     * relating to other members in this struct requires the lock.
>> +     * The list_head members are protected by their corresponding
>> VCPU lock,
>> +     * it is not sufficient to hold this pending_irq lock here to
>> query or
>> +     * change list order or affiliation. */
> 
> Actually, I have one question here. Is the vCPU lock sufficient to
> protect the list_head members? Or do you also mandate the pending_irq to
> be locked as well?

For *manipulating* a list (removing or adding a pending_irq) you need to
hold both locks. We need the VCPU lock as the list head in struct vcpu
could change, and we need the per-IRQ lock to prevent a pending_irq from
being inserted into two lists at the same time (and because the list_head
member variables themselves are changed).
However, just *checking* whether a certain pending_irq is a member of a
list only requires holding the per-IRQ lock.
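As a toy illustration of that rule (invented names, plain spinlocks instead of Xen's primitives, and a bare doubly-linked list instead of Xen's list_head API): queueing takes both locks, vCPU lock first, while a membership check needs only the per-IRQ lock because a self-linked node means "not on any list":

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct sketch_node {
    struct sketch_node *next, *prev;    /* self-linked when not queued */
};

static atomic_flag sketch_vcpu_lock;    /* stand-in for one vCPU lock */

struct sketch_irq {
    atomic_flag lock;                   /* per-IRQ lock */
    struct sketch_node node;
};

static void sketch_lock(atomic_flag *l)
{
    while ( atomic_flag_test_and_set(l) )
        ;   /* spin */
}

static void sketch_unlock(atomic_flag *l)
{
    atomic_flag_clear(l);
}

/* Manipulation: both locks, since the list head belongs to the vCPU and
 * the node's link members belong to the IRQ. */
static void sketch_queue(struct sketch_node *head, struct sketch_irq *p)
{
    sketch_lock(&sketch_vcpu_lock);
    sketch_lock(&p->lock);
    p->node.next = head->next;
    p->node.prev = head;
    head->next->prev = &p->node;
    head->next = &p->node;
    sketch_unlock(&p->lock);
    sketch_unlock(&sketch_vcpu_lock);
}

/* Check only: the per-IRQ lock is enough to read the node's own links. */
static bool sketch_is_queued(struct sketch_irq *p)
{
    bool ret;

    sketch_lock(&p->lock);
    ret = p->node.next != &p->node;
    sketch_unlock(&p->lock);
    return ret;
}
```

The asymmetry is the whole point: the check never touches the vCPU's list head, so it does not need the vCPU lock.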

> Also, it would be good to have the locking order documented maybe in
> docs/misc?

Yes, I agree that having a high-level VGIC document (focussing on
locking to begin with) is a good idea.

Cheers,
Andre.

> 
>> +    spinlock_t lock;
>>  };
>>
>>  #define NR_INTERRUPT_PER_RANK   32
>> @@ -156,6 +164,9 @@ struct vgic_ops {
>>  #define vgic_lock(v)   spin_lock_irq(&(v)->domain->arch.vgic.lock)
>>  #define vgic_unlock(v) spin_unlock_irq(&(v)->domain->arch.vgic.lock)
>>
>> +#define vgic_irq_lock(p, flags) spin_lock_irqsave(&(p)->lock, flags)
>> +#define vgic_irq_unlock(p, flags) spin_unlock_irqrestore(&(p)->lock,
>> flags)
>> +
>>  #define vgic_lock_rank(v, r, flags)   spin_lock_irqsave(&(r)->lock,
>> flags)
>>  #define vgic_unlock_rank(v, r, flags)
>> spin_unlock_irqrestore(&(r)->lock, flags)
>>
>>
> 
> Cheers,
> 


* Re: [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock
  2017-08-16 16:27     ` Andre Przywara
@ 2017-08-16 16:35       ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-16 16:35 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel



On 16/08/17 17:27, Andre Przywara wrote:
> Hi,
>
> On 10/08/17 16:35, Julien Grall wrote:
>> Hi,
>>
>> On 21/07/17 20:59, Andre Przywara wrote:
>>> Currently we protect the pending_irq structure with the corresponding
>>> VGIC VCPU lock. There are problems in certain corner cases (for
>>> instance if an IRQ is migrating), so let's introduce a per-IRQ lock,
>>> which will protect the consistency of this structure independent from
>>> any VCPU.
>>> For now this just introduces and initializes the lock, also adds
>>> wrapper macros to simplify its usage (and help debugging).
>>>
>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>> ---
>>>  xen/arch/arm/vgic.c        |  1 +
>>>  xen/include/asm-arm/vgic.h | 11 +++++++++++
>>>  2 files changed, 12 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>>> index 1e5107b..38dacd3 100644
>>> --- a/xen/arch/arm/vgic.c
>>> +++ b/xen/arch/arm/vgic.c
>>> @@ -69,6 +69,7 @@ void vgic_init_pending_irq(struct pending_irq *p,
>>> unsigned int virq)
>>>      memset(p, 0, sizeof(*p));
>>>      INIT_LIST_HEAD(&p->inflight);
>>>      INIT_LIST_HEAD(&p->lr_queue);
>>> +    spin_lock_init(&p->lock);
>>>      p->irq = virq;
>>>      p->lpi_vcpu_id = INVALID_VCPU_ID;
>>>  }
>>> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
>>> index d4ed23d..1c38b9a 100644
>>> --- a/xen/include/asm-arm/vgic.h
>>> +++ b/xen/include/asm-arm/vgic.h
>>> @@ -90,6 +90,14 @@ struct pending_irq
>>>       * TODO: when implementing irq migration, taking only the current
>>>       * vgic lock is not going to be enough. */
>>>      struct list_head lr_queue;
>>> +    /* The lock protects the consistency of this structure. A single
>>> status bit
>>> +     * can be read and/or set without holding the lock using the atomic
>>> +     * set_bit/clear_bit/test_bit functions, however accessing
>>> multiple bits or
>>> +     * relating to other members in this struct requires the lock.
>>> +     * The list_head members are protected by their corresponding
>>> VCPU lock,
>>> +     * it is not sufficient to hold this pending_irq lock here to
>>> query or
>>> +     * change list order or affiliation. */
>>
>> Actually, I have one question here. Is the vCPU lock sufficient to
>> protect the list_head members? Or do you also mandate that the pending_irq
>> be locked as well?
>
> For *manipulating* a list (removing or adding a pending_irq) you need to
> hold both locks. We need the VCPU lock because the list head in struct vcpu
> could change, and we need the per-IRQ lock to prevent a pending_irq from
> being inserted into two lists at the same time (and because the list_head
> member variables are changed).
> However just *checking* whether a certain pending_irq is a member of a
> list works with just holding the per-IRQ lock.

This does not seem to be in line with the description above. It says "It 
is not sufficient to hold this pending_irq lock here to query...".

Also, there are a few places not taking both locks when updating the 
list. This is at least the case of:
	- vgic_clear_pending_irqs
	- gic_clear_pending_irqs
	- its_discard_event

So something has to be done to bring the code in line with the description.

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter
  2017-08-11 14:10   ` Julien Grall
@ 2017-08-16 16:48     ` Andre Przywara
  2017-08-16 16:58       ` Julien Grall
  2017-08-17 17:06     ` Andre Przywara
  1 sibling, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-08-16 16:48 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Hi,

On 11/08/17 15:10, Julien Grall wrote:
> Hi Andre,
> 
> On 21/07/17 20:59, Andre Przywara wrote:
>> Since the GICs MMIO access always covers a number of IRQs at once,
>> introduce wrapper functions which loop over those IRQs, take their
>> locks and read or update the priority values.
>> This will be used in a later patch.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  xen/arch/arm/vgic.c        | 37 +++++++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/vgic.h |  5 +++++
>>  2 files changed, 42 insertions(+)
>>
>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>> index 434b7e2..b2c9632 100644
>> --- a/xen/arch/arm/vgic.c
>> +++ b/xen/arch/arm/vgic.c
>> @@ -243,6 +243,43 @@ static int vgic_get_virq_priority(struct vcpu *v,
>> unsigned int virq)
>>      return ACCESS_ONCE(rank->priority[virq & INTERRUPT_RANK_MASK]);
>>  }
>>
>> +#define MAX_IRQS_PER_IPRIORITYR 4
> 
> The name gives the impression that you may have an IPRIORITYR with only 1
> IRQ. But this is not true. The register always covers 4. However, you are
> able to access it using byte or word.
> 
>> +uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
> 
> I am well aware that the vgic code is mixing virq and irq.
> Moving forward, we should use virq to avoid confusion.
> 
>> +                                 unsigned int first_irq)
> 
> Please stay consistent with the naming. Either nr_irqs/first_irq or
> nrirqs/firstirq. But not a mix.

I totally agree, but check this out:
xen/include/asm-arm/irq.h:#define nr_irqs NR_IRQS

So wherever you write nr_irqs in *any* part of ARM IRQ code you end up
with a compile error ...
Not easy to fix, though, hence I moved to the name without the
underscore, even though I don't really like it.

Cheers,
Andre.

> 
> Also, it makes more sense to describe the start first, then the number.
> 
>> +{
>> +    struct pending_irq *pirqs[MAX_IRQS_PER_IPRIORITYR];
>> +    unsigned long flags;
>> +    uint32_t ret = 0, i;
>> +
>> +    local_irq_save(flags);
>> +    vgic_lock_irqs(v, nrirqs, first_irq, pirqs);
> 
> I am not convinced of the usefulness of taking all the locks in one go.
> At any point in time, you only need to lock a given pending_irq.
> 
>> +
>> +    for ( i = 0; i < nrirqs; i++ )
>> +        ret |= pirqs[i]->priority << (i * 8);
> 
> Please avoid open-coding numbers.
> 
>> +
>> +    vgic_unlock_irqs(pirqs, nrirqs);
>> +    local_irq_restore(flags);
>> +
>> +    return ret;
>> +}
>> +
>> +void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
>> +                             unsigned int first_irq, uint32_t value)
>> +{
>> +    struct pending_irq *pirqs[MAX_IRQS_PER_IPRIORITYR];
>> +    unsigned long flags;
>> +    unsigned int i;
>> +
>> +    local_irq_save(flags);
>> +    vgic_lock_irqs(v, nrirqs, first_irq, pirqs);
>> +
>> +    for ( i = 0; i < nrirqs; i++, value >>= 8 )
> 
> Same here.
> 
>> +        pirqs[i]->priority = value & 0xff;
>> +
>> +    vgic_unlock_irqs(pirqs, nrirqs);
>> +    local_irq_restore(flags);
>> +}
>> +
>>  bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned
>> int irq)
>>  {
>>      unsigned long flags;
>> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
>> index ecf4969..f3791c8 100644
>> --- a/xen/include/asm-arm/vgic.h
>> +++ b/xen/include/asm-arm/vgic.h
>> @@ -198,6 +198,11 @@ void vgic_lock_irqs(struct vcpu *v, unsigned int
>> nrirqs, unsigned int first_irq,
>>                      struct pending_irq **pirqs);
>>  void vgic_unlock_irqs(struct pending_irq **pirqs, unsigned int nrirqs);
>>
>> +uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
>> +                                 unsigned int first_irq);
>> +void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
>> +                             unsigned int first_irq, uint32_t reg);
>> +
>>  enum gic_sgi_mode;
>>
>>  /*
>>
> 
> Cheers,
> 


* Re: [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter
  2017-08-16 16:48     ` Andre Przywara
@ 2017-08-16 16:58       ` Julien Grall
  0 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2017-08-16 16:58 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel



On 16/08/17 17:48, Andre Przywara wrote:
> Hi,
>
> On 11/08/17 15:10, Julien Grall wrote:
>> Hi Andre,
>>
>> On 21/07/17 20:59, Andre Przywara wrote:
>>> Since the GICs MMIO access always covers a number of IRQs at once,
>>> introduce wrapper functions which loop over those IRQs, take their
>>> locks and read or update the priority values.
>>> This will be used in a later patch.
>>>
>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>> ---
>>>  xen/arch/arm/vgic.c        | 37 +++++++++++++++++++++++++++++++++++++
>>>  xen/include/asm-arm/vgic.h |  5 +++++
>>>  2 files changed, 42 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>>> index 434b7e2..b2c9632 100644
>>> --- a/xen/arch/arm/vgic.c
>>> +++ b/xen/arch/arm/vgic.c
>>> @@ -243,6 +243,43 @@ static int vgic_get_virq_priority(struct vcpu *v,
>>> unsigned int virq)
>>>      return ACCESS_ONCE(rank->priority[virq & INTERRUPT_RANK_MASK]);
>>>  }
>>>
>>> +#define MAX_IRQS_PER_IPRIORITYR 4
>>
>> The name gives the impression that you may have an IPRIORITYR with only 1
>> IRQ. But this is not true. The register always covers 4. However, you are
>> able to access it using byte or word.
>>
>>> +uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
>>
>> I am well aware that the vgic code is mixing virq and irq.
>> Moving forward, we should use virq to avoid confusion.
>>
>>> +                                 unsigned int first_irq)
>>
>> Please stay consistent with the naming. Either nr_irqs/first_irq or
>> nrirqs/firstirq. But not a mix.
>
> I totally agree, but check this out:
> xen/include/asm-arm/irq.h:#define nr_irqs NR_IRQS
>
> So wherever you write nr_irqs in *any* part of ARM IRQ code you end up
> with a compile error ...
> Not easy to fix, though, hence I moved to the name without the
> underscore, even though I don't really like it.

Oh. On a side note, nr_irqs does not cover all the IRQs. It only covers 
up to the SPIs, which is a little bit odd.

Anyway, maybe you could rename it to nr. I think it is fairly 
straightforward that you are dealing with IRQs.

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter
  2017-08-11 14:10   ` Julien Grall
  2017-08-16 16:48     ` Andre Przywara
@ 2017-08-17 17:06     ` Andre Przywara
  2017-08-18 14:21       ` Julien Grall
  1 sibling, 1 reply; 45+ messages in thread
From: Andre Przywara @ 2017-08-17 17:06 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Hi,

On 11/08/17 15:10, Julien Grall wrote:
> Hi Andre,
> 
> On 21/07/17 20:59, Andre Przywara wrote:
>> Since the GICs MMIO access always covers a number of IRQs at once,
>> introduce wrapper functions which loop over those IRQs, take their
>> locks and read or update the priority values.
>> This will be used in a later patch.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  xen/arch/arm/vgic.c        | 37 +++++++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/vgic.h |  5 +++++
>>  2 files changed, 42 insertions(+)
>>
>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>> index 434b7e2..b2c9632 100644
>> --- a/xen/arch/arm/vgic.c
>> +++ b/xen/arch/arm/vgic.c
>> @@ -243,6 +243,43 @@ static int vgic_get_virq_priority(struct vcpu *v,
>> unsigned int virq)
>>      return ACCESS_ONCE(rank->priority[virq & INTERRUPT_RANK_MASK]);
>>  }
>>
>> +#define MAX_IRQS_PER_IPRIORITYR 4
> 
> The name gives the impression that you may have an IPRIORITYR with only 1
> IRQ. But this is not true. The register always covers 4. However, you are
> able to access it using byte or word.
> 
>> +uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
> 
> I am well aware that the vgic code is mixing virq and irq.
> Moving forward, we should use virq to avoid confusion.
> 
>> +                                 unsigned int first_irq)
> 
> Please stay consistent with the naming. Either nr_irqs/first_irq or
> nrirqs/firstirq. But not a mix.
> 
> Also, it makes more sense to describe the start first, then the number.
> 
>> +{
>> +    struct pending_irq *pirqs[MAX_IRQS_PER_IPRIORITYR];
>> +    unsigned long flags;
>> +    uint32_t ret = 0, i;
>> +
>> +    local_irq_save(flags);
>> +    vgic_lock_irqs(v, nrirqs, first_irq, pirqs);
> 
> I am not convinced of the usefulness of taking all the locks in one go.
> At any point in time, you only need to lock a given pending_irq.

I don't think so. The MMIO access a guest does is expected to be atomic,
so it expects to read the priorities of the four interrupts as they were
*at one point in time*.
This issue is more obvious for the enabled bit, for instance, but also
here a (32-bit) read and a write of some IPRIORITYR might race against
each other. This was covered by the rank lock before, but now we have to
bite the bullet and lock all involved IRQs.
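For reference, the byte layout the fetch/store loops implement (four 8-bit priority fields per 32-bit IPRIORITYR) can be shown stand-alone; the helper names below are made up for illustration and are not the Xen functions:

```c
#include <assert.h>
#include <stdint.h>

#define PRIO_BITS_PER_IRQ   8
#define IRQS_PER_IPRIORITYR 4

/* Assemble one 32-bit IPRIORITYR value from four byte-sized priorities,
 * mirroring the "ret |= priority << (i * 8)" loop in the patch. */
static uint32_t pack_ipriorityr(const uint8_t prio[IRQS_PER_IPRIORITYR])
{
    uint32_t ret = 0;

    for ( unsigned int i = 0; i < IRQS_PER_IPRIORITYR; i++ )
        ret |= (uint32_t)prio[i] << (i * PRIO_BITS_PER_IRQ);

    return ret;
}

/* Split a written 32-bit value back into per-IRQ priorities, mirroring
 * the "value >>= 8" store loop. */
static void unpack_ipriorityr(uint32_t value,
                              uint8_t prio[IRQS_PER_IPRIORITYR])
{
    for ( unsigned int i = 0; i < IRQS_PER_IPRIORITYR;
          i++, value >>= PRIO_BITS_PER_IRQ )
        prio[i] = value & 0xff;
}
```

The atomicity question above is exactly about whether the four byte fields read by pack_ipriorityr() can come from different points in time when another VCPU is concurrently writing the register.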

Cheers,
Andre.

>> +
>> +    for ( i = 0; i < nrirqs; i++ )
>> +        ret |= pirqs[i]->priority << (i * 8);
> 
> Please avoid open-coding numbers.
> 
>> +
>> +    vgic_unlock_irqs(pirqs, nrirqs);
>> +    local_irq_restore(flags);
>> +
>> +    return ret;
>> +}
>> +
>> +void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
>> +                             unsigned int first_irq, uint32_t value)
>> +{
>> +    struct pending_irq *pirqs[MAX_IRQS_PER_IPRIORITYR];
>> +    unsigned long flags;
>> +    unsigned int i;
>> +
>> +    local_irq_save(flags);
>> +    vgic_lock_irqs(v, nrirqs, first_irq, pirqs);
>> +
>> +    for ( i = 0; i < nrirqs; i++, value >>= 8 )
> 
> Same here.
> 
>> +        pirqs[i]->priority = value & 0xff;
>> +
>> +    vgic_unlock_irqs(pirqs, nrirqs);
>> +    local_irq_restore(flags);
>> +}
>> +
>>  bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned
>> int irq)
>>  {
>>      unsigned long flags;
>> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
>> index ecf4969..f3791c8 100644
>> --- a/xen/include/asm-arm/vgic.h
>> +++ b/xen/include/asm-arm/vgic.h
>> @@ -198,6 +198,11 @@ void vgic_lock_irqs(struct vcpu *v, unsigned int
>> nrirqs, unsigned int first_irq,
>>                      struct pending_irq **pirqs);
>>  void vgic_unlock_irqs(struct pending_irq **pirqs, unsigned int nrirqs);
>>
>> +uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
>> +                                 unsigned int first_irq);
>> +void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
>> +                             unsigned int first_irq, uint32_t reg);
>> +
>>  enum gic_sgi_mode;
>>
>>  /*
>>
> 
> Cheers,
> 


* Re: [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter
  2017-08-17 17:06     ` Andre Przywara
@ 2017-08-18 14:21       ` Julien Grall
  2017-08-18 14:40         ` Andre Przywara
  0 siblings, 1 reply; 45+ messages in thread
From: Julien Grall @ 2017-08-18 14:21 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini; +Cc: xen-devel



On 17/08/17 18:06, Andre Przywara wrote:
> Hi,

Hi Andre,

> On 11/08/17 15:10, Julien Grall wrote:
>> Hi Andre,
>>
>> On 21/07/17 20:59, Andre Przywara wrote:
>>> Since the GICs MMIO access always covers a number of IRQs at once,
>>> introduce wrapper functions which loop over those IRQs, take their
>>> locks and read or update the priority values.
>>> This will be used in a later patch.
>>>
>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>> ---
>>>  xen/arch/arm/vgic.c        | 37 +++++++++++++++++++++++++++++++++++++
>>>  xen/include/asm-arm/vgic.h |  5 +++++
>>>  2 files changed, 42 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>>> index 434b7e2..b2c9632 100644
>>> --- a/xen/arch/arm/vgic.c
>>> +++ b/xen/arch/arm/vgic.c
>>> @@ -243,6 +243,43 @@ static int vgic_get_virq_priority(struct vcpu *v,
>>> unsigned int virq)
>>>      return ACCESS_ONCE(rank->priority[virq & INTERRUPT_RANK_MASK]);
>>>  }
>>>
>>> +#define MAX_IRQS_PER_IPRIORITYR 4
>>
>> The name gives the impression that you may have an IPRIORITYR with only 1
>> IRQ. But this is not true. The register always covers 4. However, you are
>> able to access it using byte or word.
>>
>>> +uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
>>
>> I am well aware that the vgic code is mixing virq and irq.
>> Moving forward, we should use virq to avoid confusion.
>>
>>> +                                 unsigned int first_irq)
>>
>> Please stay consistent with the naming. Either nr_irqs/first_irq or
>> nrirqs/firstirq. But not a mix.
>>
>> Also, it makes more sense to describe the start first, then the number.
>>
>>> +{
>>> +    struct pending_irq *pirqs[MAX_IRQS_PER_IPRIORITYR];
>>> +    unsigned long flags;
>>> +    uint32_t ret = 0, i;
>>> +
>>> +    local_irq_save(flags);
>>> +    vgic_lock_irqs(v, nrirqs, first_irq, pirqs);
>>
>> I am not convinced of the usefulness of taking all the locks in one go.
>> At any point in time, you only need to lock a given pending_irq.
>
> I don't think so. The MMIO access a guest does is expected to be atomic,
> so it expects to read the priorities of the four interrupts as they were
> *at one point in time*.
> This issue is more obvious for the enabled bit, for instance, but also
> here a (32-bit) read and a write of some IPRIORITYR might race against
> each other. This was covered by the rank lock before, but now we have to
> bite the bullet and lock all involved IRQs.

A well-behaved guest would need a lock in order to modify the hardware, 
as it can't predict in which order the writes will happen. If the guest 
does not respect that, I don't think it is necessary to require 
atomicity of the modification.

This makes the code more complex for little benefit and also 
increases the duration for which interrupts are masked.

So as long as it does not affect the hypervisor, I think it is fine 
to not handle more than atomicity at the IRQ level.

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter
  2017-08-18 14:21       ` Julien Grall
@ 2017-08-18 14:40         ` Andre Przywara
  0 siblings, 0 replies; 45+ messages in thread
From: Andre Przywara @ 2017-08-18 14:40 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

Hi,

On 18/08/17 15:21, Julien Grall wrote:
> 
> 
> On 17/08/17 18:06, Andre Przywara wrote:
>> Hi,
> 
> Hi Andre,
> 
>> On 11/08/17 15:10, Julien Grall wrote:
>>> Hi Andre,
>>> 
>>> On 21/07/17 20:59, Andre Przywara wrote:
>>>> Since the GICs MMIO access always covers a number of IRQs at
>>>> once, introduce wrapper functions which loop over those IRQs,
>>>> take their locks and read or update the priority values. This
>>>> will be used in a later patch.
>>>> 
>>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>>> ---
>>>>  xen/arch/arm/vgic.c        | 37 +++++++++++++++++++++++++++++++++++++
>>>>  xen/include/asm-arm/vgic.h |  5 +++++
>>>>  2 files changed, 42 insertions(+)
>>>> 
>>>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>>>> index 434b7e2..b2c9632 100644
>>>> --- a/xen/arch/arm/vgic.c
>>>> +++ b/xen/arch/arm/vgic.c
>>>> @@ -243,6 +243,43 @@ static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
>>>>      return ACCESS_ONCE(rank->priority[virq & INTERRUPT_RANK_MASK]);
>>>>  }
>>>> 
>>>> +#define MAX_IRQS_PER_IPRIORITYR 4
>>> 
>>> The name gives the impression that you may have an IPRIORITYR with
>>> only 1 IRQ. But this is not true. The register always covers 4.
>>> However, you are able to access it using byte or word.
>>> 
>>>> +uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int
>>>> nrirqs,
>>> 
>>> I am well aware that the vgic code is mixing virq and irq.
>>> Moving forward, we should use virq to avoid confusion.
>>> 
>>>> +                                 unsigned int first_irq)
>>> 
>>> Please stay consistent with the naming. Either nr_irqs/first_irq
>>> or nrirqs/firstirq. But not a mix.
>>> 
>>> Also, it makes more sense to describe the start first, then the
>>> number.
>>> 
>>>> +{
>>>> +    struct pending_irq *pirqs[MAX_IRQS_PER_IPRIORITYR];
>>>> +    unsigned long flags;
>>>> +    uint32_t ret = 0, i;
>>>> +
>>>> +    local_irq_save(flags);
>>>> +    vgic_lock_irqs(v, nrirqs, first_irq, pirqs);
>>> 
>>> I am not convinced of the usefulness of taking all the locks in
>>> one go. At any point in time, you only need to lock a given
>>> pending_irq.
>> 
>> I don't think so. The MMIO access a guest does is expected to be
>> atomic, so it expects to read the priorities of the four interrupts
>> as they were *at one point in time*. This issue is more obvious for
>> the enabled bit, for instance, but also here a (32-bit) read and a
>> write of some IPRIORITYR might race against each other. This was
>> covered by the rank lock before, but now we have to bite the bullet
>> and lock all involved IRQs.
> 
> A well-behaved guest would need a lock in order to modify the
> hardware, as it can't predict in which order the writes will happen. If
> the guest does not respect that, I don't think it is necessary to
> require atomicity of the modification.
> 
> This makes the code more complex for little benefit and also 
> increases the duration for which interrupts are masked.
> 
> So as long as it does not affect the hypervisor, I think it is
> fine to not handle more than atomicity at the IRQ level.

Fair enough, I can live with that. I didn't like the added complexity
for the tiny benefit either, just wanted to retain the behaviour we had
naturally with the rank lock before.
So this is definitely true for IPRIORITYR, ICFGR and friends, but I need
to double-check on ISENABLER/ICENABLER, because of their OR/AND-NOT
semantics, which allow lockless accesses from the software side. I
believe this is fine, though.
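The OR/AND-NOT semantics mentioned here (writing 1-bits sets or clears the corresponding enable bits; 0-bits are ignored) can be sketched with C11 atomics. Because each register write maps to a single atomic read-modify-write on the whole word, concurrent software accesses never see a torn update; this is an illustrative sketch, not the Xen implementation:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical enable bitmap for 32 interrupts. */
static _Atomic uint32_t ienable;

/* ISENABLER write semantics: 1-bits enable, 0-bits are ignored. */
static void isenabler_write(uint32_t value)
{
    atomic_fetch_or(&ienable, value);
}

/* ICENABLER write semantics: 1-bits disable, 0-bits are ignored. */
static void icenabler_write(uint32_t value)
{
    atomic_fetch_and(&ienable, ~value);
}
```

This is why a software emulation can get away without a lock for the pure bitmap update; a lock is still needed as soon as the enable change has side effects on other state.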

Cheers,
Andre.


end of thread, other threads:[~2017-08-18 14:40 UTC | newest]

Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-07-21 19:59 [RFC PATCH v2 00/22] ARM: vGIC rework (attempt) Andre Przywara
2017-07-21 19:59 ` [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock Andre Przywara
2017-08-10 15:19   ` Julien Grall
2017-08-10 15:35   ` Julien Grall
2017-08-16 16:27     ` Andre Przywara
2017-08-16 16:35       ` Julien Grall
2017-07-21 19:59 ` [RFC PATCH v2 02/22] ARM: vGIC: route/remove_irq: replace rank lock with IRQ lock Andre Przywara
2017-07-21 19:59 ` [RFC PATCH v2 03/22] ARM: vGIC: move gic_raise_inflight_irq() into vgic_vcpu_inject_irq() Andre Przywara
2017-08-10 16:28   ` Julien Grall
2017-07-21 19:59 ` [RFC PATCH v2 04/22] ARM: vGIC: rename pending_irq->priority to cur_priority Andre Przywara
2017-07-21 19:59 ` [RFC PATCH v2 05/22] ARM: vITS: rename pending_irq->lpi_priority to priority Andre Przywara
2017-07-21 19:59 ` [RFC PATCH v2 06/22] ARM: vGIC: introduce locking routines for multiple IRQs Andre Przywara
2017-07-21 19:59 ` [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter Andre Przywara
2017-08-11 14:10   ` Julien Grall
2017-08-16 16:48     ` Andre Przywara
2017-08-16 16:58       ` Julien Grall
2017-08-17 17:06     ` Andre Przywara
2017-08-18 14:21       ` Julien Grall
2017-08-18 14:40         ` Andre Przywara
2017-07-21 19:59 ` [RFC PATCH v2 08/22] ARM: vGIC: move virtual IRQ priority from rank to pending_irq Andre Przywara
2017-08-11 14:39   ` Julien Grall
2017-07-21 19:59 ` [RFC PATCH v2 09/22] ARM: vITS: protect LPI priority update with pending_irq lock Andre Przywara
2017-08-11 14:43   ` Julien Grall
2017-07-21 19:59 ` [RFC PATCH v2 10/22] ARM: vGIC: protect gic_set_lr() " Andre Przywara
2017-08-15 10:59   ` Julien Grall
2017-07-21 19:59 ` [RFC PATCH v2 11/22] ARM: vGIC: protect gic_events_need_delivery() " Andre Przywara
2017-08-15 11:11   ` Julien Grall
2017-07-21 20:00 ` [RFC PATCH v2 12/22] ARM: vGIC: protect gic_update_one_lr() " Andre Przywara
2017-08-15 11:17   ` Julien Grall
2017-07-21 20:00 ` [RFC PATCH v2 13/22] ARM: vITS: remove no longer needed lpi_priority wrapper Andre Przywara
2017-08-15 12:31   ` Julien Grall
2017-07-21 20:00 ` [RFC PATCH v2 14/22] ARM: vGIC: move virtual IRQ configuration from rank to pending_irq Andre Przywara
2017-08-16 11:13   ` Julien Grall
2017-07-21 20:00 ` [RFC PATCH v2 15/22] ARM: vGIC: rework vgic_get_target_vcpu to take a pending_irq Andre Przywara
2017-07-21 20:00 ` [RFC PATCH v2 16/22] ARM: vITS: rename lpi_vcpu_id to vcpu_id Andre Przywara
2017-07-21 20:00 ` [RFC PATCH v2 17/22] ARM: vGIC: introduce vgic_lock_vcpu_irq() Andre Przywara
2017-08-16 11:23   ` Julien Grall
2017-07-21 20:00 ` [RFC PATCH v2 18/22] ARM: vGIC: move virtual IRQ target VCPU from rank to pending_irq Andre Przywara
2017-08-16 13:40   ` Julien Grall
2017-07-21 20:00 ` [RFC PATCH v2 19/22] ARM: vGIC: rework vgic_get_target_vcpu to take a domain instead of vcpu Andre Przywara
2017-07-21 20:00 ` [RFC PATCH v2 20/22] ARM: vGIC: move virtual IRQ enable bit from rank to pending_irq Andre Przywara
2017-08-16 14:32   ` Julien Grall
2017-07-21 20:00 ` [RFC PATCH v2 21/22] ARM: vITS: injecting LPIs: use pending_irq lock Andre Przywara
2017-08-16 14:38   ` Julien Grall
2017-07-21 20:00 ` [RFC PATCH v2 22/22] ARM: vGIC: remove remaining irq_rank code Andre Przywara
